Science.gov

Sample records for partially parallel acquisitions

  1. Comparison of parallel acquisition techniques generalized autocalibrating partially parallel acquisitions (GRAPPA) and modified sensitivity encoding (mSENSE) in functional MRI (fMRI) at 3T.

    PubMed

    Preibisch, Christine; Wallenhorst, Tim; Heidemann, Robin; Zanella, Friedhelm E; Lanfermann, Heinrich

    2008-03-01

    To evaluate the parallel acquisition techniques generalized autocalibrating partially parallel acquisitions (GRAPPA) and modified sensitivity encoding (mSENSE), and to determine imaging parameters that maximize sensitivity to functional activation at 3T. A total of eight imaging protocols with different parallel imaging techniques (GRAPPA and mSENSE) and reduction factors (R = 1, 2, 3) were compared at different matrix sizes (64 and 128) with respect to temporal noise characteristics, artifact behavior, and sensitivity to functional activation. Echo planar imaging (EPI) with GRAPPA and a reduction factor of 2 showed image quality and sensitivity similar to those of full k-space EPI. A higher incidence of artifacts and a marked sensitivity loss occurred at R = 3. Even though the same eight-channel head coil was used for signal detection in all experiments, GRAPPA generally showed more benign patterns of spatially varying noise amplification, and mSENSE was also more susceptible to residual unfolding artifacts than GRAPPA. At 3T and a reduction factor of 2, parallel imaging can therefore be used with only a small sensitivity penalty. With our implementation and coil setup, the performance of GRAPPA was clearly superior to that of mSENSE. It therefore seems advisable to pay special attention to the parallel imaging method employed and its implementation.
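
The GRAPPA idea referred to above (fit a linear kernel to fully sampled autocalibration lines, then use it to synthesize the skipped lines in every coil) can be shown in a toy 1-D sketch. The data here are synthetic and constructed so that the linear-kernel assumption holds exactly; coil count, matrix size, and the ACS range are illustrative, not the protocol used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
nc, ny = 4, 32          # coils, phase-encode lines
R = 2                   # acceleration factor

# Ground-truth k-space in which every odd line is, by construction, a fixed
# coil-weighted combination of the even lines directly above and below it.
W_true = rng.standard_normal((nc, 2 * nc))
k = np.zeros((nc, ny), dtype=complex)
k[:, ::2] = rng.standard_normal((nc, ny // 2)) + 1j * rng.standard_normal((nc, ny // 2))
for y in range(1, ny - 1, 2):
    src = np.concatenate([k[:, y - 1], k[:, y + 1]])   # neighbors from all coils
    k[:, y] = W_true @ src

# "ACS" region: central fully sampled lines used to fit the GRAPPA weights.
acs = range(7, 25, 2)                                   # 9 calibration targets
S = np.stack([np.concatenate([k[:, y - 1], k[:, y + 1]]) for y in acs], axis=1)
T = np.stack([k[:, y] for y in acs], axis=1)
W = T @ np.linalg.pinv(S)                               # least-squares kernel fit

# Apply the fitted kernel to synthesize all skipped lines.
recon = k.copy()
for y in range(1, ny - 1, 2):
    recon[:, y] = W @ np.concatenate([k[:, y - 1], k[:, y + 1]])
```

Because the toy data satisfy the kernel model exactly and the calibration system is overdetermined, the fit recovers the true weights and the skipped lines are reproduced; on real data the fit is only approximate, which is where the residual artifacts compared in the study come from.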

  2. Quantitative high-resolution renal perfusion imaging using 3-dimensional through-time radial generalized autocalibrating partially parallel acquisition.

    PubMed

    Wright, Katherine L; Chen, Yong; Saybasili, Haris; Griswold, Mark A; Seiberlich, Nicole; Gulani, Vikas

    2014-10-01

    Dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) examinations of the kidneys provide quantitative information on renal perfusion and filtration. However, these examinations are often difficult to implement because of respiratory motion and their need for a high spatiotemporal resolution and 3-dimensional coverage. Here, we present a free-breathing quantitative renal DCE-MRI examination acquired with a highly accelerated stack-of-stars trajectory and reconstructed with 3-dimensional (3D) through-time radial generalized autocalibrating partially parallel acquisition (GRAPPA), using half and quarter doses of gadolinium contrast. Data were acquired in 10 asymptomatic volunteers using a stack-of-stars trajectory that was undersampled in-plane by a factor of 12.6 with respect to the Nyquist sampling criterion and using partial Fourier of 6/8 in the partition direction. Data had a high temporal (2.1-2.9 seconds per frame) and spatial (approximately 2.2 mm) resolution with full 3D coverage of both kidneys (350-370 mm × 79-92 mm). Images were successfully reconstructed with 3D through-time radial GRAPPA, and interframe respiratory motion was compensated by using an algorithm developed to automatically use images from multiple points of enhancement as references for registration. Quantitative pharmacokinetic analysis was performed using a separable dual-compartment model. Region-of-interest (ROI) pharmacokinetic analysis provided estimates (mean (SD)) of quantitative renal parameters after a half dose: 218.1 (57.1) mL/min per 100 mL; plasma mean transit time, 4.8 (2.2) seconds; renal filtration, 28.7 (10.0) mL/min per 100 mL; and tubular mean transit time, 131.1 (60.2) seconds in 10 kidneys. The ROI pharmacokinetic analysis provided estimates (mean (SD)) of quantitative renal parameters after a quarter dose: 218.1 (57.1) mL/min per 100 mL; plasma mean transit time, 4.8 (2.2) seconds; renal filtration, 28.7 (10.0) mL/min per 100 mL; and tubular mean transit time

  3. k-TE generalized autocalibrating partially parallel acquisition (GRAPPA) for accelerated multiple gradient-recalled echo (MGRE) R2* mapping in the abdomen.

    PubMed

    Yin, Xiaoming; Larson, Andrew C

    2009-03-01

    Multiple gradient-recalled echo (MGRE) methods are commonly used for abdominal R2* mapping. Accelerated MGRE acquisitions would offer the potential to shorten requisite breathhold times and/or increase spatial resolution and coverage. In both phantom and normal volunteer studies, view-sharing (VS) methods, generalized autocalibrating partially parallel acquisition (GRAPPA) methods, and newly proposed k-echo time (k-TE) GRAPPA methods were compared for the purpose of accelerating MGRE acquisitions. Utilization of water-selective spatial spectral excitation pulses reduced artifact levels for both VS and k-TE GRAPPA approaches. VS approaches were found to be highly sensitive to off-resonance effects, particularly at increasing acceleration rates. k-TE GRAPPA significantly reduced residual artifact levels compared to GRAPPA approaches while improving the accuracy of accelerated abdominal R2* measurements. These initial feasibility studies demonstrate that k-TE GRAPPA is an effective method to reduce scan times during abdominal R2*-mapping procedures.

  4. Functional MRI using regularized parallel imaging acquisition.

    PubMed

    Lin, Fa-Hsuan; Huang, Teng-Yi; Chen, Nan-Kuei; Wang, Fu-Nien; Stufflebeam, Steven M; Belliveau, John W; Wald, Lawrence L; Kwong, Kenneth K

    2005-08-01

    Parallel MRI techniques reconstruct full-FOV images from undersampled k-space data by using the uncorrelated information from RF array coil elements. One disadvantage of parallel MRI is that the image signal-to-noise ratio (SNR) is degraded because of the reduced number of data samples and the spatially correlated nature of multiple RF receivers. Regularization has been proposed to mitigate the SNR loss arising from the latter cause. Since regularization requires a static prior, the dynamic contrast-to-noise ratio (CNR) in parallel MRI will be affected. In this paper we investigate the CNR of regularized sensitivity encoding (SENSE) acquisitions. We propose to implement regularized parallel MRI acquisitions in functional MRI (fMRI) experiments by incorporating the prior from a combined segmented echo-planar imaging (EPI) acquisition into SENSE reconstructions. We investigated the impact of regularization on the CNR by performing parametric simulations at various BOLD contrasts, acceleration rates, and sizes of the active brain areas. As quantified by receiver operating characteristic (ROC) analysis, the simulations suggest that the detection power of SENSE fMRI can be improved by regularized reconstructions, compared to unregularized reconstructions. Human motor and visual fMRI data acquired at different field strengths and with different array coils also demonstrate that regularized SENSE improves the detection of functionally active brain regions.
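
The regularized SENSE reconstruction discussed above amounts to a Tikhonov-regularized unfolding biased toward a static prior. A minimal sketch for one aliased pixel, with random coil sensitivities and a hypothetical prior `x0` standing in for the combined segmented-EPI reference (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
nc, R = 8, 2                   # coils, acceleration factor

# Coil sensitivities at the R positions folded onto one aliased pixel.
S = rng.standard_normal((nc, R)) + 1j * rng.standard_normal((nc, R))
x_true = np.array([1.0 + 0.5j, 0.2 - 0.1j])          # true pixel values
a = S @ x_true + 0.01 * (rng.standard_normal(nc) + 1j * rng.standard_normal(nc))

lam = 1e-2                                            # Tikhonov weight
x0 = np.array([0.9 + 0.4j, 0.25 - 0.05j])             # static prior image values

# Regularized SENSE unfolding, biased toward the prior x0:
#   x = x0 + (S^H S + lam I)^-1 S^H (a - S x0)
A = S.conj().T @ S + lam * np.eye(R)
x_hat = x0 + np.linalg.solve(A, S.conj().T @ (a - S @ x0))
```

With a small regularization weight the solution stays close to the unregularized least-squares unfold; increasing `lam` pulls it toward the prior, which is the trade-off between SNR and dynamic CNR the abstract examines.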

  5. Functional MRI Using Regularized Parallel Imaging Acquisition

    PubMed Central

    Lin, Fa-Hsuan; Huang, Teng-Yi; Chen, Nan-Kuei; Wang, Fu-Nien; Stufflebeam, Steven M.; Belliveau, John W.; Wald, Lawrence L.; Kwong, Kenneth K.

    2013-01-01

    Parallel MRI techniques reconstruct full-FOV images from undersampled k-space data by using the uncorrelated information from RF array coil elements. One disadvantage of parallel MRI is that the image signal-to-noise ratio (SNR) is degraded because of the reduced number of data samples and the spatially correlated nature of multiple RF receivers. Regularization has been proposed to mitigate the SNR loss arising from the latter cause. Since regularization requires a static prior, the dynamic contrast-to-noise ratio (CNR) in parallel MRI will be affected. In this paper we investigate the CNR of regularized sensitivity encoding (SENSE) acquisitions. We propose to implement regularized parallel MRI acquisitions in functional MRI (fMRI) experiments by incorporating the prior from a combined segmented echo-planar imaging (EPI) acquisition into SENSE reconstructions. We investigated the impact of regularization on the CNR by performing parametric simulations at various BOLD contrasts, acceleration rates, and sizes of the active brain areas. As quantified by receiver operating characteristic (ROC) analysis, the simulations suggest that the detection power of SENSE fMRI can be improved by regularized reconstructions, compared to unregularized reconstructions. Human motor and visual fMRI data acquired at different field strengths and with different array coils also demonstrate that regularized SENSE improves the detection of functionally active brain regions. PMID:16032694

  6. Parallel Spectral Acquisition with Orthogonal ICR Cells.

    PubMed

    Park, Sung-Gun; Anderson, Gordon A; Bruce, James E

    2017-03-01

    FT-based high performance mass analyzers yield increased resolving power and mass measurement accuracy, yet require increased duration of signal acquisition that can limit many applications. The implementation of stronger magnetic fields, multiple detection electrodes for harmonic signal detection, and an array of multiple mass analyzers arranged along the magnetic field axis have been used to decrease required acquisition time. The results presented here show that multiple ion cyclotron resonance (ICR) mass analyzers can also be implemented orthogonal to the central magnetic field axis. The orthogonal ICR cell system presented here, consisting of two cells (a master and a slave cell), was constructed with printed circuit boards and installed within a single superconducting magnet and vacuum system. A master cell was positioned, as is normally done with ICR cells, on the central magnetic field axis and a slave cell was located off this central axis, but directly adjacent and alongside the master cell. To achieve ion transfer between cells, ions that were initially trapped in the master cell were drifted across the magnetic field into the slave cell with application of a small DC field applied perpendicularly to the magnetic field axis. A subsequent population of ions was injected and accumulated in the master cell. Simultaneous excitation of cyclotron motion of ions in both cells was carried out; ICR signals from each cell were independently amplified and recorded in parallel. Presented here are the initial results of successful parallel spectral acquisition with this orthogonal dual ICR cell array.
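
The detection step performed in each cell of such an array is the same FT-ICR measurement: an induced transient is Fourier transformed and the cyclotron relation f = qB/(2πm) is inverted to recover m/z. A minimal single-ion sketch; field strength, sampling rate, and transient decay are illustrative values, not the instrument's parameters:

```python
import numpy as np

B = 7.0                          # magnetic field, tesla (illustrative)
q = 1.602176634e-19              # elementary charge, C
u = 1.66053906660e-27            # atomic mass unit, kg
mz = 500.0                       # ion m/z, in u per elementary charge

f_c = q * B / (2 * np.pi * mz * u)          # cyclotron frequency, Hz

# Simulated detection: a decaying transient at f_c, sampled and FFT'd.
fs, n = 2_000_000.0, 2**20                  # sampling rate (Hz), transient length
t = np.arange(n) / fs
transient = np.cos(2 * np.pi * f_c * t) * np.exp(-t / 0.5)
spec = np.abs(np.fft.rfft(transient))
f_peak = np.fft.rfftfreq(n, 1 / fs)[np.argmax(spec)]

mz_est = q * B / (2 * np.pi * f_peak * u)   # invert f = qB/(2*pi*m) for m/z
```

Longer transients narrow the spectral peak, which is why acquisition time limits resolving power and why parallel cells that acquire simultaneously are attractive.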

  7. Parallel Spectral Acquisition with Orthogonal ICR Cells

    NASA Astrophysics Data System (ADS)

    Park, Sung-Gun; Anderson, Gordon A.; Bruce, James E.

    2017-03-01

    FT-based high performance mass analyzers yield increased resolving power and mass measurement accuracy, yet require increased duration of signal acquisition that can limit many applications. The implementation of stronger magnetic fields, multiple detection electrodes for harmonic signal detection, and an array of multiple mass analyzers arranged along the magnetic field axis have been used to decrease required acquisition time. The results presented here show that multiple ion cyclotron resonance (ICR) mass analyzers can also be implemented orthogonal to the central magnetic field axis. The orthogonal ICR cell system presented here consisting of two cells (master and slave cells) was constructed with printed circuit boards and installed within a single superconducting magnet and vacuum system. A master cell was positioned, as is normally done with ICR cells, on the central magnetic field axis and a slave cell was located off this central axis, but directly adjacent and alongside the master cell. To achieve ion transfer between cells, ions that were initially trapped in the master cell were drifted across the magnetic field into the slave cell with application of a small DC field applied perpendicularly to the magnetic field axis. A subsequent population of ions was injected and accumulated in the master cell. Simultaneous excitation of cyclotron motion of ions in both cells was carried out; ICR signals from each cell were independently amplified and recorded in parallel. Presented here are the initial results of successful parallel spectral acquisition with this orthogonal dual ICR cell array.

  9. Partially parallel imaging with phase-sensitive data: Increased temporal resolution for magnetic resonance temperature imaging.

    PubMed

    Bankson, James A; Stafford, R Jason; Hazle, John D

    2005-03-01

    Magnetic resonance temperature imaging can be used to monitor the progress of thermal ablation therapies, increasing treatment efficacy and improving patient safety. High temporal resolution is important when therapies rapidly heat tissue, but many approaches to faster image acquisition compromise image resolution, slice coverage, or phase sensitivity. Partially parallel imaging techniques offer the potential for improved temporal resolution without forcing such concessions. Although these techniques perturb image phase, relative phase changes between dynamically acquired phase-sensitive images, such as those acquired for MR temperature imaging, can be reliably measured through partially parallel imaging techniques using reconstruction filters that remain constant across the series. Partially parallel and non-accelerated phase-difference-sensitive data can be obtained through arrays of surface coils using this method. Average phase differences measured through partially parallel and fully Fourier-encoded images are virtually identical, while phase noise increases with g√L as in standard partially parallel image acquisitions.
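
The phase differences being measured here feed the standard proton-resonance-frequency (PRF) shift conversion, ΔT = Δφ / (γ α B0 TE). A minimal sketch using typical literature constants and illustrative scan parameters (assumed values, not taken from this paper):

```python
import numpy as np

# PRF-shift thermometry constants (typical values; illustrative)
alpha = -0.01e-6                 # PRF change per degree C
gamma = 2 * np.pi * 42.577e6     # proton gyromagnetic ratio, rad/s/T
B0, TE = 3.0, 0.010              # field strength (T) and echo time (s)

# Phase difference between a heated and a baseline phase image; the
# complex (conjugate) subtraction avoids wrap problems in the difference.
baseline = np.exp(1j * 0.3)
heated = np.exp(1j * (0.3 + gamma * alpha * B0 * TE * 20.0))  # 20 degC rise

dphi = np.angle(heated * np.conj(baseline))
dT = dphi / (gamma * alpha * B0 * TE)      # recovered temperature change
```

Because only the phase *difference* enters, any constant phase perturbation introduced by a fixed parallel-imaging reconstruction filter cancels, which is the point the abstract makes.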

  10. Parallel Spectral Acquisition with an ICR Cell Array

    PubMed Central

    Park, Sung-Gun; Anderson, Gordon A.; Navare, Arti T.; Bruce, James E.

    2016-01-01

    Mass measurement accuracy is a critical analytical figure-of-merit in most areas of mass spectrometry application. However, the time required for acquisition of high resolution, high mass accuracy data limits many applications and is an aspect under continual pressure for development. Current efforts target implementation of higher electrostatic and magnetic fields because ion oscillatory frequencies increase linearly with field strength. As such, the time required for spectral acquisition of a given resolving power and mass accuracy decreases linearly with increasing fields. Mass spectrometer developments to include multiple high resolution detectors that can be operated in parallel could further decrease the acquisition time by a factor of n, the number of detectors. Efforts described here resulted in development of an instrument with a set of Fourier transform ion cyclotron resonance (ICR) cells as detectors that constitute the first MS array capable of parallel high resolution spectral acquisition. ICR cell array systems consisting of three or five cells were constructed with printed circuit boards and installed within a single superconducting magnet and vacuum system. Independent ion populations were injected and trapped within each cell in the array. Upon filling the array, all ions in all cells were simultaneously excited and ICR signals from each cell were independently amplified and recorded in parallel. Presented here are the initial results of successful parallel spectral acquisition, parallel MS and MS/MS measurements, and parallel high resolution acquisition with the MS array system. PMID:26669509

  11. Parallel Spectral Acquisition with an Ion Cyclotron Resonance Cell Array.

    PubMed

    Park, Sung-Gun; Anderson, Gordon A; Navare, Arti T; Bruce, James E

    2016-01-19

    Mass measurement accuracy is a critical analytical figure-of-merit in most areas of mass spectrometry application. However, the time required for acquisition of high-resolution, high mass accuracy data limits many applications and is an aspect under continual pressure for development. Current efforts target implementation of higher electrostatic and magnetic fields because ion oscillatory frequencies increase linearly with field strength. As such, the time required for spectral acquisition of a given resolving power and mass accuracy decreases linearly with increasing fields. Mass spectrometer developments to include multiple high-resolution detectors that can be operated in parallel could further decrease the acquisition time by a factor of n, the number of detectors. Efforts described here resulted in development of an instrument with a set of Fourier transform ion cyclotron resonance (ICR) cells as detectors that constitute the first MS array capable of parallel high-resolution spectral acquisition. ICR cell array systems consisting of three or five cells were constructed with printed circuit boards and installed within a single superconducting magnet and vacuum system. Independent ion populations were injected and trapped within each cell in the array. Upon filling the array, all ions in all cells were simultaneously excited and ICR signals from each cell were independently amplified and recorded in parallel. Presented here are the initial results of successful parallel spectral acquisition, parallel mass spectrometry (MS) and MS/MS measurements, and parallel high-resolution acquisition with the MS array system.

  12. Highly accelerated cardiac cine parallel MRI using low-rank matrix completion and partial separability model

    NASA Astrophysics Data System (ADS)

    Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie

    2016-05-01

    This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and the partial separability (PS) model. In data acquisition, k-space data are moderately randomly undersampled at the central k-space navigator locations, but highly undersampled in the outer k-space for each temporal frame. In reconstruction, the navigator data are first reconstructed from the undersampled data using structured low-rank matrix completion. After all the unacquired navigator data are estimated, the partial separability model is used to obtain partial k-t data. A parallel imaging method is then used to reconstruct the entire dynamic image series from the highly undersampled data. The proposed method has been shown to achieve high-quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, when the conventional PS method fails.
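
The partial separability model underlying this method asserts that the space-time (Casorati) matrix of the dynamic series has low rank: every voxel's time course is a combination of a few shared temporal basis functions. A minimal sketch of that property (matrix sizes and model order are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
nx, nt, L = 64, 40, 3        # voxels, time frames, PS model order

# Partial separability: each voxel's time course is a combination of L
# shared temporal basis functions, so the Casorati matrix has rank L.
spatial = rng.standard_normal((nx, L))
temporal = rng.standard_normal((L, nt))
C = spatial @ temporal                    # nx x nt Casorati matrix

U, s, Vt = np.linalg.svd(C, full_matrices=False)
C_L = (U[:, :L] * s[:L]) @ Vt[:L]         # rank-L truncation reproduces C
```

It is this rank deficiency that lets the unacquired navigator entries be recovered by low-rank matrix completion, and lets the full series be represented by the small temporal basis `Vt[:L]` plus spatial coefficients.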

  13. New architecture of fast parallel multiplier using fast parallel counter with FPA (first partial product addition)

    NASA Astrophysics Data System (ADS)

    Lee, Mike M.; Cho, Byung Lok

    2001-11-01

    In this paper, we propose a new First Partial product Addition (FPA) architecture with a new compressor (parallel counter) for the CSA tree built in the process of adding partial products, which improves the speed of partial-product accumulation in the fast parallel multiplier by about 20% compared with an existing parallel counter built from full adders. The new circuit reduces the carry-lookahead adder (CLA) bits needed to find the final sum by N/2 using the novel FPA architecture. A multiplication time of 5.14 ns is obtained for the 16×16 multiplier in 0.25 µm CMOS technology. The architecture of the multiplier is easily adapted to pipelined design and demonstrates high-speed performance.
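
The parallel counters referred to above are small bit-counting circuits: a (3,2) counter is a full adder, and a carry-save adder (CSA) applies one per bit position to reduce three addends to two without propagating carries, which is what the multiplier's CSA tree does with partial products. A behavioral sketch of that reduction step (not the proposed FPA circuit itself):

```python
def full_adder(a, b, c):
    """A (3,2) counter: three input bits -> (sum bit, carry bit)."""
    s = a ^ b ^ c
    carry = (a & b) | (a & c) | (b & c)
    return s, carry

def csa(x, y, z, width=32):
    """Carry-save addition: reduce three integers to a (sum word, carry word)
    pair with one independent full adder per bit position, no carry chain."""
    s_word = c_word = 0
    for i in range(width):
        a, b, c = (x >> i) & 1, (y >> i) & 1, (z >> i) & 1
        s, cy = full_adder(a, b, c)
        s_word |= s << i
        c_word |= cy << (i + 1)   # carries weigh one bit position higher
    return s_word, c_word

s, c = csa(1234, 5678, 91011)
```

The invariant is x + y + z == s + c; a tree of such reductions compresses all partial products to two words, and only the final two-word addition needs a carry-propagating adder (the CLA the abstract shortens).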

  14. The Force Singularity for Partially Immersed Parallel Plates

    NASA Astrophysics Data System (ADS)

    Bhatnagar, Rajat; Finn, Robert

    2016-12-01

    In earlier work, we provided a general description of the forces of attraction and repulsion, encountered by two parallel vertical plates of infinite extent and of possibly differing materials, when partially immersed in an infinite liquid bath and subject to surface tension forces. In the present study, we examine some unusual details of the exotic behavior that can occur at the singular configuration separating infinite rise from infinite descent of the fluid between the plates, as the plates approach each other. In connection with this singular behavior, we present also some new estimates on meniscus height details.

  15. Solution of partial differential equations on vector and parallel computers

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.; Voigt, R. G.

    1985-01-01

    The present status of numerical methods for partial differential equations on vector and parallel computers was reviewed. The relevant aspects of these computers are discussed and a brief review of their development is included, with particular attention paid to those characteristics that influence algorithm selection. Both direct and iterative methods are given for elliptic equations as well as explicit and implicit methods for initial boundary value problems. The intent is to point out attractive methods as well as areas where this class of computer architecture cannot be fully utilized because of either hardware restrictions or the lack of adequate algorithms. Application areas utilizing these computers are briefly discussed.
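
The data-parallel structure these machines exploit is visible in the simplest explicit marching scheme: every interior grid point is updated independently from its neighbors' previous values, so the update vectorizes or distributes directly. A minimal 1-D heat-equation sketch (grid size, time step, and step count are illustrative):

```python
import numpy as np

# Explicit finite-difference marching for u_t = u_xx on [0,1]: the archetypal
# data-parallel update, since all interior points update independently.
n, dx = 101, 0.01
dt = 0.4 * dx**2                 # satisfies the stability bound dt <= dx^2/2
x = np.linspace(0.0, 1.0, n)
u = np.sin(np.pi * x)            # initial condition; u = 0 at both boundaries

for _ in range(200):
    u[1:-1] = u[1:-1] + dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

# The exact solution decays as exp(-pi^2 t); compare at t = 200*dt.
t = 200 * dt
err = np.max(np.abs(u - np.sin(np.pi * x) * np.exp(-np.pi**2 * t)))
```

Implicit methods trade this trivially parallel update for better stability but require a linear solve per step, which is exactly the algorithmic tension the review surveys.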

  16. Parallel acquisition of awareness and trace eyeblink classical conditioning.

    PubMed

    Manns, J R; Clark, R E; Squire, L R

    2000-01-01

    Trace eyeblink conditioning (with a trace interval ≥500 msec) depends on the integrity of the hippocampus and requires that participants develop awareness of the stimulus contingencies (i.e., awareness that the conditioned stimulus [CS] predicts the unconditioned stimulus [US]). Previous investigations of the relationship between trace eyeblink conditioning and awareness of the stimulus contingencies have manipulated awareness or have assessed awareness at fixed intervals during and after the conditioning session. In this study, we tracked the development of knowledge about the stimulus contingencies trial by trial by asking participants to try to predict either the onset of the US or the onset of their eyeblinks during differential trace eyeblink conditioning. Asking participants to predict their eyeblinks inhibited both the acquisition of awareness and eyeblink conditioning. In contrast, asking participants to predict the onset of the US promoted awareness and facilitated conditioning. Acquisition of knowledge about the stimulus contingencies and acquisition of differential trace eyeblink conditioning developed approximately in parallel (i.e., concurrently).

  17. A comparison of five standard methods for evaluating image intensity uniformity in partially parallel imaging MRI.

    PubMed

    Goerner, Frank L; Duong, Timothy; Stafford, R Jason; Clarke, Geoffrey D

    2013-08-01

    To investigate the utility of five different standard measurement methods for determining image uniformity for partially parallel imaging (PPI) acquisitions in terms of consistency across a variety of pulse sequences and reconstruction strategies. Images were produced with a phantom using a 12-channel head matrix coil in a 3T MRI system (TIM TRIO, Siemens Medical Solutions, Erlangen, Germany). Images produced using echo-planar, fast spin echo, gradient echo, and balanced steady state free precession pulse sequences were evaluated. Two different PPI reconstruction methods were investigated, generalized autocalibrating partially parallel acquisition (GRAPPA) and modified sensitivity encoding (mSENSE), with acceleration factors (R) of 2, 3, and 4. Additionally, images were acquired with conventional, two-dimensional Fourier imaging methods (R = 1). Five measurement methods of uniformity, recommended by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA), were considered. The methods investigated were (1) an ACR method and (2) a NEMA method for calculating the peak deviation nonuniformity, (3) a modification of a NEMA method used to produce a gray scale uniformity map, (4) determining the normalized absolute average deviation uniformity, and (5) a NEMA method that focused on 17 areas of the image to measure uniformity. Changes in uniformity as a function of reconstruction method at the same R-value were also investigated. Two-way analysis of variance (ANOVA) was used to determine whether R-value or reconstruction method had a greater influence on signal intensity uniformity measurements for partially parallel MRI. Two of the methods studied had consistently negative slopes when signal intensity uniformity was plotted against R-value. The results obtained comparing mSENSE against GRAPPA found no consistent difference between GRAPPA and mSENSE with regard to signal intensity uniformity. The results of the two
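
Two of the uniformity measures named above can be stated compactly. The formulas below follow the commonly quoted ACR/NEMA forms and should be checked against the standards for the exact ROI placement and filtering requirements; the phantom data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
roi = 100.0 + rng.normal(0.0, 2.0, size=(64, 64))   # synthetic phantom ROI

s_max, s_min, s_mean = roi.max(), roi.min(), roi.mean()

# Peak-deviation nonuniformity (commonly quoted ACR/NEMA form), percent:
peak_dev = 100.0 * (s_max - s_min) / (s_max + s_min)

# Normalized absolute average deviation uniformity, percent:
naad = 100.0 * (1.0 - np.abs(roi - s_mean).mean() / s_mean)
```

Peak-based measures are driven entirely by the two extreme pixels, while the NAAD form averages over the whole ROI; that difference in sensitivity to outliers is one reason the five methods behave differently as the acceleration factor grows.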

  18. Software Compression for Partially Parallel Imaging with Multi-channels.

    PubMed

    Huang, Feng; Vijayakumar, Sathya; Akao, James

    2005-01-01

    In magnetic resonance imaging, multi-channel phased-array coils enjoy a high signal-to-noise ratio (SNR) and better parallel imaging performance. But as the number of channels increases, reconstruction time and computer memory requirements become inevitable problems. In this work, principal component analysis is applied to reduce the size of the data while preserving parallel imaging performance. Clinical data collected using a 32-channel cardiac coil are used in the experiments. Experimental results show that the proposed method dramatically reduces the processing time without significant damage to the reconstructed image.
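
The channel-compression step can be sketched with a plain SVD: project the multi-channel data onto its top principal components and keep a few virtual channels. The sizes and synthetic covariance structure below are illustrative, not the paper's 32-channel cardiac data:

```python
import numpy as np

rng = np.random.default_rng(4)
nc, nv, k = 32, 5000, 8     # physical channels, k-space samples, virtual channels

# Synthetic multi-channel data whose channel covariance is dominated by a
# few directions, mimicking the correlation between physical array channels.
mix = rng.standard_normal((nc, k))
data = mix @ (rng.standard_normal((k, nv)) + 1j * rng.standard_normal((k, nv)))
data += 0.01 * (rng.standard_normal((nc, nv)) + 1j * rng.standard_normal((nc, nv)))

# PCA channel compression: project onto the top-k left singular vectors.
U, s, _ = np.linalg.svd(data, full_matrices=False)
compressed = U[:, :k].conj().T @ data       # k x nv virtual-channel data

# Fraction of signal energy retained by the k virtual channels:
retained = np.sum(s[:k] ** 2) / np.sum(s ** 2)
```

Downstream reconstruction then runs on k virtual channels instead of nc physical ones, which is where the memory and runtime savings come from.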

  19. Adaptive Methods and Parallel Computation for Partial Differential Equations

    DTIC Science & Technology

    1992-05-01


  20. Generating Parallel Execution Plans with a Partial Order Planner

    DTIC Science & Technology

    1994-05-01


  1. Time Parallel Solution of Linear Partial Differential Equations on the Intel Touchstone Delta Supercomputer

    NASA Technical Reports Server (NTRS)

    Toomarian, N.; Fijany, A.; Barhen, J.

    1993-01-01

    Evolutionary partial differential equations are usually solved by discretization in time and space, and by applying a marching-in-time procedure to data and algorithms potentially parallelized in the spatial domain.

  3. Learning in Parallel: Using Parallel Corpora to Enhance Written Language Acquisition at the Beginning Level

    ERIC Educational Resources Information Center

    Bluemel, Brody

    2014-01-01

    This article illustrates the pedagogical value of incorporating parallel corpora in foreign language education. It explores the development of a Chinese/English parallel corpus designed specifically for pedagogical application. The corpus tool was created to aid language learners in reading comprehension and writing development by making foreign…

  4. DAPHNE: a parallel multiprocessor data acquisition system for nuclear physics. [Data Acquisition by Parallel Histogramming and NEtworking

    SciTech Connect

    Welch, L.C.

    1984-01-01

    This paper describes a project to meet the data acquisition needs of a new accelerator, ATLAS, being built at Argonne National Laboratory. ATLAS is a heavy-ion linear superconducting accelerator providing beam energies up to 25 MeV/A with a relative spread in beam energy as good as 0.0001 and a time spread of less than 100 psec. Details about the hardware front end, command language, data structure, and the flow of event treatment are covered.

  5. Fermilab Fast Parallel Readout System for Data Acquisition

    NASA Astrophysics Data System (ADS)

    Vignoni, R.; Barsotti, E.; Bracker, S.; Hansen, S.; Pordes, R.; Treptow, K.; White, V.; Wickert, S.

    1987-08-01

    Three modules have recently been developed at Fermilab to provide high speed parallel readout of data for high energy physics experiments. This paper describes how these modules provide a fast and efficient method for transferring CAMAC event data into VME-based or FASTBUS-based memories, thus enhancing and extending the usefulness of experiments' large investments in CAMAC hardware. Using these modules can decrease the dead time of an experiment by up to a factor of 10. This paper includes a discussion of the experiment topologies in which these modules are being used.

  6. Performance of a VME-based parallel processing LIDAR data acquisition system (summary)

    SciTech Connect

    Moore, K.; Buttler, B.; Caffrey, M.; Soriano, C.

    1995-05-01

    It may be possible to make accurate, real-time, autonomous, 2- and 3-dimensional wind measurements remotely with an elastic backscatter Light Detection and Ranging (LIDAR) system by incorporating digital parallel processing hardware into the data acquisition system. In this paper, we report the performance of a commercially available digital parallel processing system in implementing the maximum correlation technique for wind sensing using actual LIDAR data. Timing and numerical accuracy are benchmarked against a standard microprocessor implementation.
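
The maximum correlation technique mentioned above estimates wind by cross-correlating backscatter profiles from successive scans and reading the aerosol displacement off the correlation peak. A 1-D sketch with a synthetic profile; bin spacing, scan interval, and the imposed shift are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n, dx, dt = 512, 5.0, 10.0   # range bins, bin spacing (m), scan interval (s)

# Two backscatter profiles: the second is the first shifted by 8 bins (40 m),
# as wind advects the aerosol scattering structure between scans.
profile1 = np.convolve(rng.standard_normal(n), np.ones(9) / 9, mode="same")
shift_true = 8
profile2 = np.roll(profile1, shift_true)

# Maximum-correlation estimate of the displacement between the two scans:
corr = np.correlate(profile2 - profile2.mean(),
                    profile1 - profile1.mean(), mode="full")
lag = np.argmax(corr) - (n - 1)      # zero lag sits at index n-1
wind_speed = lag * dx / dt           # meters per second
```

Each lag of the correlation is an independent multiply-accumulate over the profile, which is why the computation maps naturally onto the parallel processing hardware the paper benchmarks.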

  7. Pseudolesions arising from unfolding artifacts in diffusion imaging with use of parallel acquisition: origin and remedies.

    PubMed

    Chou, M-C; Wang, C-Y; Liu, H-S; Chung, H-W; Chen, C-Y

    2007-01-01

    Diffusion imaging acquired with echo-planar imaging (EPI) is usually performed with parallel imaging to reduce geometric distortions, especially at high fields. This study reports the occurrence of pseudolesions in EPI with parallel imaging. The unfolding artifacts are attributed to a mismatch between RF sensitivity profiles and distorted acquisition data in the presence of susceptibility effects, combined with strong signals on the b=0 images. Examples of pseudolesions arising from the eyeballs are shown, and remedies are suggested.

  8. Modeling Parallelization and Flexibility Improvements in Skill Acquisition: From Dual Tasks to Complex Dynamic Skills

    ERIC Educational Resources Information Center

    Taatgen, Niels

    2005-01-01

    Emerging parallel processing and increased flexibility during the acquisition of cognitive skills form a combination that is hard to reconcile with rule-based models that often produce brittle behavior. Rule-based models can exhibit these properties by adhering to 2 principles: that the model gradually learns task-specific rules from instructions…

  10. A parallel performance study of the Cartesian method for partial differential equations on a sphere

    SciTech Connect

    Drake, J.B.; Coddington, M.P.

    1997-04-01

    A 3-D Cartesian method for integration of partial differential equations on a spherical surface is developed for parallel computation. The target computer architectures are distributed memory, message passing computers such as the Intel Paragon. The parallel algorithms are described along with mesh partitioning strategies. Performance of the algorithms is considered for a standard test case of the shallow water equations on the sphere. The authors find the computation times scale well with increasing numbers of processors.

  11. Partial Overhaul and Initial Parallel Optimization of KINETICS, a Coupled Dynamics and Chemistry Atmosphere Model

    NASA Technical Reports Server (NTRS)

    Nguyen, Howard; Willacy, Karen; Allen, Mark

    2012-01-01

    KINETICS is a coupled dynamics and chemistry atmosphere model that is data intensive and computationally demanding. The potential performance gain from using a supercomputer motivates the adaptation from a serial version to a parallelized one. Although the initial parallelization had been done, bottlenecks caused by an abundance of communication calls between processors led to an unfavorable drop in performance. Before starting on the parallel optimization process, a partial overhaul was required because a large emphasis was placed on streamlining the code for user convenience and revising the program to accommodate the new supercomputers at Caltech and JPL. After the first round of optimizations, the partial runtime was reduced by a factor of 23; however, performance gains are dependent on the size of the data, the number of processors requested, and the computer used.

  13. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    Engh, G.J. van den; Stokdijk, W.

    1992-09-22

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate. 17 figs.

  14. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate.
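    The FIFO-per-channel scheme described in this patent abstract can be modeled in miniature (an illustrative software sketch of the data flow, not the patented hardware): each channel buffers digitized pulses independently, and the bus controller pops the oldest entry from every FIFO while the error-detection step checks that all trigger ID numbers agree.

```python
from collections import deque

class ChannelFIFO:
    """One digitizer channel with an independent FIFO buffer (illustrative model)."""
    def __init__(self):
        self.fifo = deque()

    def digitize(self, event_id, pulse_height):
        # The trigger circuit's ID number is stored alongside each entry.
        self.fifo.append((event_id, pulse_height))

def read_event(channels):
    """Bus controller: move the oldest entry from each FIFO onto the 'bus',
    then verify that all ID numbers match (the error-detection circuit)."""
    entries = [ch.fifo.popleft() for ch in channels]
    ids = {eid for eid, _ in entries}
    if len(ids) != 1:
        raise RuntimeError(f"channel desynchronization: IDs {sorted(ids)}")
    return entries[0][0], [height for _, height in entries]

channels = [ChannelFIFO() for _ in range(4)]
for event_id in range(3):              # the trigger circuit fires three events
    for ch in channels:
        ch.digitize(event_id, pulse_height=event_id * 10)

eid, heights = read_event(channels)    # oldest event is read out first
```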

  15. Effect of continuous and partial reinforcement on the acquisition and extinction of human conditioned fear.

    PubMed

    Grady, Ashley K; Bowen, Kenton H; Hyde, Andrew T; Totsch, Stacie K; Knight, David C

    2016-02-01

    Extinction of Pavlovian conditioned fear in humans is a popular paradigm often used to study learning and memory processes that mediate anxiety-related disorders. Fear extinction studies often only pair the conditioned stimulus (CS) and unconditioned stimulus (UCS) on a subset of acquisition trials (i.e., partial reinforcement/pairing) to prolong extinction (i.e., partial reinforcement extinction effect; PREE) and provide more time to study the process. However, there is limited evidence that the partial pairing procedures typically used during fear conditioning actually extend the extinction process, while there is strong evidence these procedures weaken conditioned response (CR) acquisition. Therefore, determining conditioning procedures that support strong CR acquisition and that also prolong the extinction process would benefit the field. The present study investigated 4 separate CS-UCS pairing procedures to determine methods that support strong conditioning and that also exhibit a PREE. One group (C-C) of participants received continuous CS-UCS pairings; a second group (C-P) received continuous followed by partial CS-UCS pairings; a third group (P-C) received partial followed by continuous CS-UCS pairings; and a fourth group (P-P) received partial CS-UCS pairings during acquisition. A strong skin conductance CR was expressed by C-C and P-C groups but not by C-P and P-P groups at the end of the acquisition phase. The P-C group maintained the CR during extinction. In contrast, the CR extinguished quickly within the C-C group. These findings suggest that partial followed by continuous CS-UCS pairings elicit strong CRs and prolong the extinction process following human fear conditioning.

  16. Parent-Implemented Mand Training: Acquisition of Framed Manding in a Young Boy with Partial Hemispherectomy

    ERIC Educational Resources Information Center

    Ingvarsson, Einar T.

    2011-01-01

    This study examined the effects of parent-implemented mand training on the acquisition of framed manding in a 4-year-old boy who had undergone partial hemispherectomy. Framed manding became the predominant mand form when and only when the intervention was implemented with each preferred toy, but minimal generalization to untrained toys …

  17. Brains for birds and babies: Neural parallels between birdsong and speech acquisition.

    PubMed

    Prather, Jonathan; Okanoya, Kazuo; Bolhuis, Johan J

    2017-01-10

    Language as a computational cognitive mechanism appears to be unique to the human species. However, there are remarkable behavioral similarities between song learning in songbirds and speech acquisition in human infants that are absent in non-human primates. Here we review important neural parallels between birdsong and speech. In both cases there are separate but continually interacting neural networks that underlie vocal production, sensorimotor learning, and auditory perception and memory. As in the case of human speech, neural activity related to birdsong learning is lateralized, and mirror neurons linking perception and performance may contribute to sensorimotor learning. In songbirds that are learning their songs, there is continual interaction between secondary auditory regions and sensorimotor regions, similar to the interaction between Wernicke's and Broca's areas in human infants acquiring speech and language. Taken together, song learning in birds and speech acquisition in humans may provide useful insights into the evolution and mechanisms of auditory-vocal learning.

  18. Neural Changes Associated with Nonspeech Auditory Category Learning Parallel Those of Speech Category Acquisition

    PubMed Central

    Liu, Ran; Holt, Lori L.

    2010-01-01

    Native language experience plays a critical role in shaping speech categorization, but the exact mechanisms by which it does so are not well understood. Investigating category learning of nonspeech sounds with which listeners have no prior experience allows their experience to be systematically controlled in a way that is impossible to achieve by studying natural speech acquisition, and it provides a means of probing the boundaries and constraints that general auditory perception and cognition bring to the task of speech category learning. In this study, we used a multimodal, video-game-based implicit learning paradigm to train participants to categorize acoustically complex, nonlinguistic sounds. Mismatch negativity responses to the nonspeech stimuli were collected before and after training to investigate the degree to which neural changes supporting the learning of these nonspeech categories parallel those typically observed for speech category acquisition. Results indicate that changes in mismatch negativity resulting from the nonspeech category learning closely resemble patterns of change typically observed during speech category learning. This suggests that the often-observed “specialized” neural responses to speech sounds may result, at least in part, from the expertise we develop with speech categories through experience rather than from properties unique to speech (e.g., linguistic or vocal tract gestural information). Furthermore, particular characteristics of the training paradigm may inform our understanding of mechanisms that support natural speech acquisition. PMID:19929331

  19. The role of contextual associations in producing the partial reinforcement acquisition deficit.

    PubMed

    Miguez, Gonzalo; Witnauer, James E; Miller, Ralph R

    2012-01-01

    Three conditioned suppression experiments with rats as subjects assessed the contributions of the conditioned stimulus (CS)-context and context-unconditioned stimulus (US) associations to the degraded stimulus control by the CS that is observed following partial reinforcement relative to continuous reinforcement training. In Experiment 1, posttraining associative deflation (i.e., extinction) of the training context after partial reinforcement restored responding to a level comparable to the one produced by continuous reinforcement. In Experiment 2, posttraining associative inflation of the context (achieved by administering unsignaled outcome presentations in the context) enhanced the detrimental effect of partial reinforcement. Experiment 3 found that the training context must be an effective competitor to produce the partial reinforcement acquisition deficit. When the context was down-modulated, the target regained behavioral control thereby demonstrating higher-order retrospective revaluation. The results are discussed in terms of retrospective revaluation, and are used to contrast the predictions of a performance-focused model with those of an acquisition-focused model.

  20. Analysis and Modeling of Parallel Photovoltaic Systems under Partial Shading Conditions

    NASA Astrophysics Data System (ADS)

    Buddala, Santhoshi Snigdha

    Since the industrial revolution, fossil fuels such as petroleum, coal, oil, and natural gas have been used as the primary energy source. The consumption of fossil fuels releases various harmful byproduct gases into the atmosphere that deplete the protective atmospheric layers and upset the overall environmental balance. Fossil fuels are also finite resources, and their rapid depletion has prompted the investigation of alternative sources of energy, called renewable energy. One promising source of renewable energy is solar/photovoltaic energy. This work investigates a new solar array architecture with solar cells connected in a parallel configuration. Retaining the structural simplicity of the parallel architecture, a theoretical small-signal model of the solar cell is proposed and used to analyze the variations in module parameters under partial shading conditions. Simulations were run in SPICE to validate the model implemented in Matlab. The voltage limitations of the proposed architecture are addressed by adopting a simple dc-dc boost converter, and the performance of the architecture is evaluated in terms of efficiency against traditional architectures. SPICE simulations are used to compare the architectures and identify the best one in terms of power conversion efficiency under partial shading conditions.
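    The parallel-cell behavior under partial shading can be illustrated with a simple model (a sketch assuming an ideal single-diode cell with hypothetical parameter values; the paper's small-signal model and SPICE setup are not reproduced here): parallel cells share one terminal voltage, so their currents add, and shading some cells lowers the summed current without blocking the others.

```python
import numpy as np

def cell_current(v, irradiance, i_ph_stc=5.0, i_0=1e-9, n=1.3, v_t=0.02585):
    """Ideal single-diode cell (series/shunt resistances neglected).
    Photocurrent scales with irradiance (1.0 = full sun); all parameter
    values here are hypothetical."""
    return i_ph_stc * irradiance - i_0 * np.expm1(v / (n * v_t))

def parallel_array_power(v, irradiances):
    """Cells wired in parallel share one terminal voltage, so their
    currents simply add."""
    return v * sum(cell_current(v, g) for g in irradiances)

v = np.linspace(0.0, 0.7, 1000)
p_uniform = parallel_array_power(v, [1.0, 1.0, 1.0, 1.0])
p_shaded = parallel_array_power(v, [1.0, 1.0, 0.3, 0.3])  # two cells at 30% sun

# A shaded parallel cell lowers the summed current, but the remaining
# cells keep operating near their own optimum at the shared voltage.
```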

  1. A study of the partial acquisition technique to reduce the amount of SAR data

    NASA Astrophysics Data System (ADS)

    Arief, Rahmat; Sudiana, Dodi; Ramli, Kalamullah

    2017-01-01

    Synthetic Aperture Radar (SAR) technology can provide high resolution image data of the earth's surface from a moving vehicle, which results in large volumes of raw data. Much research has been devoted to compressed radar imaging, which can reduce the sampling rate of the analog-to-digital converter (ADC) at the receiver and eliminate the need for a matched filter. Despite these advantages, a major problem is the large measurement matrix, which makes the matrix computations very intensive. This paper studies a new partial acquisition technique that reduces the amount of raw data through compressed sampling in both azimuth and range while also reducing the computational load. The results showed that SAR images reconstructed using the partial acquisition model have good resolution, comparable to the conventional method (Range Doppler Algorithm). For a ship target, which represents a low sparsity level, a good reconstructed image was achieved from fewer measurements. The method speeds up the computation by a factor of 2.64 to 4.49 compared with a full acquisition matrix.
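    As a generic illustration of why compressed sampling permits a partial acquisition, a sparse scene can be recovered from far fewer random measurements than unknowns (Orthogonal Matching Pursuit is used here as a stand-in; it is not the reconstruction algorithm of the paper):

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedily recover a sparse vector x
    from underdetermined measurements y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
n, m, k = 256, 64, 4                 # 4-sparse scene, 25% of the full samples
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
x_hat = omp(A, A @ x_true, sparsity=k)
```

With a Gaussian measurement matrix and this sparsity level, the recovery is exact up to numerical precision, even though the system is 4x underdetermined.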

  2. Parallel proteomics to improve coverage and confidence in the partially annotated Oryctolagus cuniculus mitochondrial proteome.

    PubMed

    White, Melanie Y; Brown, David A; Sheng, Simon; Cole, Robert N; O'Rourke, Brian; Van Eyk, Jennifer E

    2011-02-01

    The ability to decipher the dynamic protein component of any system is determined by the inherent limitations of the technologies used, the complexity of the sample, and the existence of an annotated genome. In the absence of an annotated genome, large-scale proteomic investigations can be technically difficult. Yet the functional and biological species differences across animal models can lead to selection of partially or nonannotated organisms over those with an annotated genome. The outweighing of biology over technology leads us to investigate the degree to which a parallel approach can facilitate proteome coverage in the absence of complete genome annotation. When studying species without complete genome annotation, a particular challenge is how to ensure high proteome coverage while meeting the bioinformatic stringencies of high-throughput proteomics. A protein inventory of Oryctolagus cuniculus mitochondria was created by overlapping "protein-centric" and "peptide-centric" one-dimensional and two-dimensional liquid chromatography strategies; with additional partitioning into membrane-enriched and soluble fractions. With the use of these five parallel approaches, 2934 unique peptides were identified, corresponding to 558 nonredundant protein groups. 230 of these proteins (41%) were identified by only a single technical approach, confirming the need for parallel techniques to improve annotation. To determine the extent of coverage, a side-by-side comparison with human and mouse cardiomyocyte mitochondrial studies was performed. A nonredundant list of 995 discrete proteins was compiled, of which 244 (25%) were common across species. The current investigation identified 142 unique protein groups, the majority of which were detected here by only one technical approach, in particular peptide- and protein-centric two-dimensional liquid chromatography. Although no single approach achieved more than 40% coverage, the combination of three approaches (protein- and

  3. The design and performance of the parallel multiprocessor nuclear physics data acquisition system, DAPHNE

    SciTech Connect

    Welch, L.C.; Moog, T.H.; Daly, R.T.; Videbaek, F.

    1987-05-01

    The ever increasing complexity of nuclear physics experiments places severe demands on computerized data acquisition systems. A natural evolution of these systems, taking advantage of the independent nature of ''events,'' is to use identical parallel microcomputers in a front end to simultaneously analyze separate events. Such a system has been developed at Argonne to serve the needs of the experimental program of ATLAS, a new superconducting heavy-ion accelerator and other on-going research. Using microcomputers based on the National Semiconductor 32016 microprocessor housed in a Multibus I cage, CPU power equivalent to several VAXs is obtained at a fraction of the cost of one VAX. The front end interfaces to a VAX 11/750 on which an extensive user friendly command language based on DCL resides. The whole system, known as DAPHNE, also provides the means to replay data using the same command language. Design concepts, data structures, performance, and experience to date are discussed.

  4. The design, creation, and performance of the parallel multiprocessor nuclear physics data acquisition system, DAPHNE

    SciTech Connect

    Welch, L.C.; Moog, T.H.; Daly, R.T.; Videbaek, F.

    1986-01-01

    The ever increasing complexity of nuclear physics experiments places severe demands on computerized data acquisition systems. A natural evolution of these systems, taking advantage of the independent nature of ''events,'' is to use identical parallel microcomputers in a front end to simultaneously analyze separate events. Such a system has been developed at Argonne to serve the needs of the experimental program of ATLAS, a new superconducting heavy-ion accelerator and other on-going research. Using microcomputers based on the National Semiconductor 32016 microprocessor housed in a Multibus I cage, multi-VAX cpu power is obtained at a fraction of the cost of one VAX. The front end interfaces to a VAX 750 on which an extensive user friendly command language based on DCL resides. The whole system, known as DAPHNE, also provides the means to replay data using the same command language. Design concepts, data structures, performance, and experience to date are discussed. 5 refs., 2 figs.

  5. Parallels between control PDE's (Partial Differential Equations) and systems of ODE's (Ordinary Differential Equations)

    NASA Technical Reports Server (NTRS)

    Hunt, L. R.; Villarreal, Ramiro

    1987-01-01

    System theorists understand that the same mathematical objects which determine controllability for nonlinear control systems of ordinary differential equations (ODEs) also determine hypoellipticity for linear partial differential equations (PDEs). Moreover, almost any study of ODE systems begins with linear systems. It is remarkable that Hormander's paper on hypoellipticity of second order linear p.d.e.'s starts with equations due to Kolmogorov, which are shown to be analogous to linear systems of ODEs. Eigenvalue placement by state feedback for a controllable linear system can be paralleled for a Kolmogorov equation if an appropriate type of feedback is introduced. Results concerning transformations of nonlinear systems to linear systems are similar to results for transforming a linear PDE to a Kolmogorov equation.
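    The analogy can be made concrete with Hormander's classic example (a hedged sketch from the standard literature, not taken from the abstract):

```latex
% Kolmogorov's equation, the motivating example in Hormander's paper:
% a hypoelliptic operator that is not elliptic.
\[
  \partial_t u \;=\; \partial_x^2 u \;+\; x\,\partial_y u .
\]
% With $X_1 = \partial_x$ and $X_0 = x\,\partial_y$, the bracket
% $[X_1, X_0] = \partial_y$ supplies the missing direction, so
% $X_1$ and $[X_1, X_0]$ span $\mathbb{R}^2$ (Hormander's condition).
% The control-theoretic parallel is the double integrator
\[
  \dot{x} = u, \qquad \dot{y} = x ,
\]
% whose controllability matrix
\[
  [\, b \;\; Ab \,] \;=\; \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
\]
% has full rank (Kalman's condition), so eigenvalues can be placed by
% state feedback.
```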

  6. Parallelizing across time when solving time-dependent partial differential equations

    SciTech Connect

    Worley, P.H.

    1991-09-01

    The standard numerical algorithms for solving time-dependent partial differential equations (PDEs) are inherently sequential in the time direction. This paper describes algorithms for the time-accurate solution of certain classes of linear hyperbolic and parabolic PDEs that can be parallelized in both time and space and have serial complexities that are proportional to the serial complexities of the best known algorithms. The algorithms for parabolic PDEs are variants of the waveform relaxation multigrid method (WFMG) of Lubich and Ostermann where the scalar ordinary differential equations (ODEs) that make up the kernel of WFMG are solved using a cyclic reduction type algorithm. The algorithms for hyperbolic PDEs use the cyclic reduction algorithm to solve ODEs along characteristics. 43 refs.
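    The idea that time stepping need not be serial can be illustrated on the scalar recurrence x[k+1] = a[k] x[k] + b[k], which an associative scan over affine maps solves in logarithmic parallel depth (a generic sketch of parallel-in-time solution; it is not the WFMG or cyclic reduction algorithm of the paper):

```python
import numpy as np

def sequential_solve(a, b, x0):
    """Reference: x[k+1] = a[k] * x[k] + b[k], the inherently serial form."""
    x = [x0]
    for ak, bk in zip(a, b):
        x.append(ak * x[-1] + bk)
    return np.array(x)

def parallel_scan_solve(a, b, x0):
    """Same recurrence via an inclusive scan over affine maps (m, c),
    composed by recursive doubling -- O(log N) parallel depth."""
    m, c = a.copy(), b.copy()
    n, shift = len(a), 1
    while shift < n:
        # Combine each prefix with the one 'shift' steps earlier;
        # pad with the identity map (m=1, c=0) at the left edge.
        m_prev = np.concatenate([np.ones(shift), m[:-shift]])
        c_prev = np.concatenate([np.zeros(shift), c[:-shift]])
        m, c = m * m_prev, c + m * c_prev
        shift *= 2
    return np.concatenate([[x0], m * x0 + c])

rng = np.random.default_rng(2)
a, b = rng.standard_normal(16), rng.standard_normal(16)
```

Both routines produce the same trajectory; the scan version exposes the logarithmic-depth parallelism that serial time stepping hides.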

  7. A Simple Application of Compressed Sensing to Further Accelerate Partially Parallel Imaging

    PubMed Central

    Miao, Jun; Guo, Weihong; Narayan, Sreenath; Wilson, David L.

    2012-01-01

    Compressed Sensing (CS) and partially parallel imaging (PPI) enable fast MR imaging by reducing the amount of k-space data required for reconstruction. Past attempts to combine these two have been limited by the incoherent sampling requirement of CS, since PPI routines typically sample on a regular (coherent) grid. Here, we developed a new method, “CS+GRAPPA,” to overcome this limitation. We decomposed sets of equidistant samples into multiple random subsets. Then, we reconstructed each subset using CS, and averaged the results to get a final CS k-space reconstruction. We used both a standard CS, and an edge and joint-sparsity guided CS reconstruction. We tested these intermediate results on both synthetic and real MR phantom data, and performed a human observer experiment to determine the effectiveness of decomposition, and to optimize the number of subsets. We then used these CS reconstructions to calibrate the GRAPPA complex coil weights. In vivo parallel MR brain and heart data sets were used. An objective image quality evaluation metric, Case-PDM, was used to quantify image quality. Coherent aliasing and noise artifacts were significantly reduced using two decompositions. More decompositions further reduced coherent aliasing and noise artifacts but introduced blurring. However, the blurring was effectively minimized using our new edge and joint-sparsity guided CS using two decompositions. Numerical results on parallel data demonstrated that the combined method greatly improved image quality as compared to standard GRAPPA, on average halving Case-PDM scores across a range of sampling rates. The proposed technique allowed the same Case-PDM scores as standard GRAPPA, using about half the number of samples. We conclude that the new method augments GRAPPA by combining it with CS, allowing CS to work even when the k-space sampling pattern is equidistant. PMID:22902065
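    The decomposition step, splitting a coherent equidistant sampling pattern into random subsets that each look incoherent to a CS reconstruction, can be sketched as follows (a hypothetical helper, not the authors' code):

```python
import numpy as np

def decompose_equidistant(sample_lines, n_subsets, rng):
    """Split a regular (coherent) set of acquired k-space line indices into
    random subsets; each subset would then be reconstructed with CS and the
    results averaged. Illustrates only the decomposition step."""
    lines = rng.permutation(sample_lines)
    return [np.sort(lines[i::n_subsets]) for i in range(n_subsets)]

rng = np.random.default_rng(3)
acquired = np.arange(0, 256, 2)      # every other line: R = 2, coherent grid
subsets = decompose_equidistant(acquired, n_subsets=2, rng=rng)

# Each subset holds half the acquired lines at random positions; together
# the subsets cover exactly the original sampling pattern.
recombined = np.sort(np.concatenate(subsets))
```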

  8. 16 CFR 802.42 - Partial exemption for acquisitions in connection with the formation of certain joint ventures or...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 1 2011-01-01 2011-01-01 false Partial exemption for acquisitions in connection with the formation of certain joint ventures or other corporations. 802.42 Section 802.42... HART-SCOTT-RODINO ANTITRUST IMPROVEMENTS ACT OF 1976 EXEMPTION RULES § 802.42 Partial exemption for...

  9. 16 CFR 802.42 - Partial exemption for acquisitions in connection with the formation of certain joint ventures or...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 1 2013-01-01 2013-01-01 false Partial exemption for acquisitions in connection with the formation of certain joint ventures or other corporations. 802.42 Section 802.42... HART-SCOTT-RODINO ANTITRUST IMPROVEMENTS ACT OF 1976 EXEMPTION RULES § 802.42 Partial exemption for...

  10. L2 and Deaf Learners' Knowledge of Numerically Quantified English Sentences: Acquisitional Parallels at the Semantics/Discourse-Pragmatics Interface

    ERIC Educational Resources Information Center

    Berent, Gerald P.; Kelly, Ronald R.; Schueler-Choukairi, Tanya

    2012-01-01

    This study assessed knowledge of numerically quantified English sentences in two learner populations--second language (L2) learners and deaf learners--whose acquisition of English occurs under conditions of restricted access to the target language input. Under the experimental test conditions, interlanguage parallels were predicted to arise from…

  11. Cascade connection serial parallel hybrid acquisition synchronization method for DS-FHSS in air-ground data link

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Zhou, Desuo

    2007-11-01

    In air-ground tactical data link systems, a primary anti-jamming technology is direct sequence-frequency hopping spread spectrum (DS-FHSS). However, quick synchronization of DS-FHSS is an important problem that affects the overall communication capability of the system. Considering the practical application demands of anti-jamming technology, a cascade-connection serial-parallel hybrid acquisition synchronization method is presented for the DS-FHSS system. Synchronization consists of two stages: FH synchronization is performed at the first stage, and a serial-parallel hybrid structure is adopted for DS PN code synchronization at the second stage. The method's contribution to the synchronization capability of the system is analyzed by calculating the detection probability of FH synchronization acquisition and the acquisition time of DS code chip synchronization. Finally, the performance of the proposed method is evaluated through computer simulation.
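    The serial-parallel hybrid search structure can be sketched as follows (a toy model with a random ±1 code standing in for a PN sequence; the threshold and bank size are illustrative, not from the paper): candidate code phases are tested a bank at a time by parallel correlators, and banks are stepped through serially until one correlator crosses the detection threshold.

```python
import numpy as np

def hybrid_search(received, code, bank_size):
    """Serial-parallel hybrid PN acquisition sketch: test bank_size code
    phases at once (parallel correlators), stepping serially over banks."""
    n = len(code)
    threshold = 0.8 * n                     # ideal autocorrelation peak is n
    for start in range(0, n, bank_size):    # serial sweep over banks
        phases = range(start, min(start + bank_size, n))
        corr = [received @ np.roll(code, p) for p in phases]  # parallel bank
        best = int(np.argmax(corr))
        if corr[best] > threshold:
            return phases[best], start // bank_size + 1  # phase, banks used
    return None, n // bank_size

rng = np.random.default_rng(4)
code = rng.choice([-1.0, 1.0], size=127)    # stand-in for a PN sequence
received = np.roll(code, 37)                # incoming signal delayed 37 chips
phase, banks = hybrid_search(received, code, bank_size=16)
```

The search finds the 37-chip offset in the third bank; larger banks trade hardware (more parallel correlators) for shorter mean acquisition time.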

  12. Parallel Numerical Solution Process of a Two Dimensional Time Dependent Nonlinear Partial Differential Equation

    NASA Astrophysics Data System (ADS)

    Martin, I.; Tirado, F.; Vazquez, L.

    We present a process to achieve the solution of the two dimensional nonlinear Schrödinger equation using a multigrid technique on a distributed memory machine. Some features of the multigrid technique, such as its good convergence and parallel properties, are explained in this paper. These make the multigrid method the optimal one for solving the systems of equations arising at each time step from an implicit numerical scheme. We give some experimental results about the parallel numerical simulation of this equation on a message passing parallel machine.

  13. High-performance partially aligned semiconductive single-walled carbon nanotube transistors achieved with a parallel technique.

    PubMed

    Wang, Yilei; Pillai, Suresh Kumar Raman; Chan-Park, Mary B

    2013-09-09

    Single-walled carbon nanotubes (SWNTs) are widely thought to be a strong contender for next-generation printed electronic transistor materials. However, large-scale solution-based parallel assembly of SWNTs to obtain high-performance transistor devices is challenging. SWNTs have anisotropic properties and, although partial alignment of the nanotubes has been theoretically predicted to achieve optimum transistor device performance, thus far no parallel solution-based technique can achieve this. Herein a novel solution-based technique, the immersion-cum-shake method, is reported to achieve partially aligned SWNT networks using semiconductive (99% enriched) SWNTs (s-SWNTs). By immersing an aminosilane-treated wafer into a solution of nanotubes placed on a rotary shaker, the repetitive flow of the nanotube solution over the wafer surface during the deposition process orients the nanotubes toward the fluid flow direction. By adjusting the nanotube concentration in the solution, the nanotube density of the partially aligned network can be controlled; linear densities ranging from 5 to 45 SWNTs/μm are observed. Through control of the linear SWNT density and channel length, the optimum SWNT-based field-effect transistor devices achieve outstanding performance metrics (with an on/off ratio of ~3.2 × 10^4 and mobility 46.5 cm^2/Vs). Atomic force microscopy shows that the partial alignment is uniform over an area of 20 × 20 mm^2 and confirms that the orientation of the nanotubes is mostly along the fluid flow direction, with a narrow orientation scatter characterized by a full width at half maximum (FWHM) of <15° for all but the densest film, which is 35°. This parallel process is large-scale applicable and exploits the anisotropic properties of the SWNTs, presenting a viable path forward for industrial adoption of SWNTs in printed, flexible, and large-area electronics.

  14. Supraaortic arteries: contrast-enhanced MR angiography at 3.0 T--highly accelerated parallel acquisition for improved spatial resolution over an extended field of view.

    PubMed

    Nael, Kambiz; Villablanca, J Pablo; Pope, Whitney B; McNamara, Thomas O; Laub, Gerhard; Finn, J Paul

    2007-02-01

    To prospectively use 3.0-T breath-hold high-spatial-resolution contrast material-enhanced magnetic resonance (MR) angiography with highly accelerated parallel acquisition to image the supraaortic arteries of patients suspected of having arterial occlusive disease. Institutional review board approval and written informed consent were obtained for this HIPAA-compliant study. Eighty patients (44 men, 36 women; age range, 44-90 years) underwent contrast-enhanced MR angiography of the head and neck at 3.0 T with an eight-channel neurovascular array coil. By applying a generalized autocalibrating partially parallel acquisition algorithm with an acceleration factor of four, high-spatial-resolution (0.7 x 0.7 x 0.9 mm = 0.44-mm^3 voxels) three-dimensional contrast-enhanced MR angiography was performed during a 20-second breath hold. Two neuroradiologists evaluated vascular image quality and arterial stenoses. Interobserver variability was tested with the kappa coefficient. Quantitation of stenosis at MR angiography was compared with that at digital subtraction angiography (DSA) (n = 13) and computed tomographic (CT) angiography (n = 12) with the Spearman rank correlation coefficient (R_s). Arterial stenoses were detected with contrast-enhanced MR angiography in 208 (reader 1) and 218 (reader 2) segments, with excellent interobserver agreement (kappa = 0.80). There was a significant correlation between contrast-enhanced MR angiography and CT angiography (R_s = 0.95, reader 1; R_s = 0.87, reader 2) and between contrast-enhanced MR angiography and DSA (R_s = 0.94, reader 1; R_s = 0.92, reader 2) for the degree of stenosis. Sensitivity and specificity of contrast-enhanced MR angiography for detection of arterial stenoses greater than 50% were 94% and 98% for reader 1 and 100% and 98% for reader 2, with DSA as the standard of reference. Vascular image quality was sufficient for diagnosis or excellent for 97% of arterial segments evaluated.
By using highly accelerated parallel…

  15. Cardiac magnetic resonance imaging using radial k-space sampling and self-calibrated partial parallel reconstruction.

    PubMed

    Xie, Jingsi; Lai, Peng; Huang, Feng; Li, Yu; Li, Debiao

    2010-05-01

    Radial sampling has been demonstrated to be potentially useful in cardiac magnetic resonance imaging because it is less susceptible to motion than Cartesian sampling. Nevertheless, its capability of imaging acceleration remains limited by undersampling-induced streaking artifacts. In this study, a self-calibrated reconstruction method was developed to suppress streaking artifacts for highly accelerated parallel radial acquisitions in cardiac magnetic resonance imaging. Two- (2D) and three-dimensional (3D) radial k-space data were collected from a phantom and healthy volunteers. Images reconstructed using the proposed method and the conventional regridding method were compared based on statistical analysis of four-point image quality scores. It was demonstrated that the proposed method can effectively remove undersampling streaking artifacts and significantly improve image quality (P<.05). With the use of the proposed method, the image score (1-4, 1=poor, 2=good, 3=very good, 4=excellent) improved from 2.14 to 3.34 at an undersampling factor of 4 and from 1.09 to 2.5 at an undersampling factor of 8. Our study demonstrates that the proposed reconstruction method is effective for highly accelerated cardiac imaging applications using parallel radial acquisitions without calibration data.
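
    The acceleration at issue comes from acquiring fewer radial spokes than the Nyquist criterion requires. A minimal sketch of how an undersampled 2D golden-angle radial trajectory is commonly generated, in NumPy; the function name and the factor-of-4 undersampling are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def golden_angle_radial_trajectory(n_spokes, n_readout):
    """k-space sample locations for a 2D golden-angle radial acquisition.

    Returns a complex array of shape (n_spokes, n_readout) holding
    kx + 1j*ky, normalized to the range [-0.5, 0.5).
    """
    golden = np.pi * (np.sqrt(5.0) - 1.0) / 2     # ~111.25 deg spoke increment
    angles = np.arange(n_spokes) * golden
    radii = np.linspace(-0.5, 0.5, n_readout, endpoint=False)
    return radii[None, :] * np.exp(1j * angles[:, None])

# Undersampling relative to the Nyquist spoke count (pi/2 * matrix size):
n_readout = 256
nyquist_spokes = int(np.ceil(np.pi / 2 * n_readout))
traj = golden_angle_radial_trajectory(nyquist_spokes // 4, n_readout)
```

    Regridding interpolates these non-Cartesian samples onto a Cartesian grid before the inverse FFT; with only a quarter of the Nyquist spoke count, that step produces the streaking artifacts the proposed self-calibrated method is designed to suppress.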

  16. Partial hippocampal kindling affects retention but not acquisition and place but not cue tasks on the radial arm maze.

    PubMed

    Leung, L S; Brzozowski, D; Shen, B

    1996-10-01

    The performance of rats that were partially kindled in the hippocampus was assessed on an 8-arm radial arm maze with 4 baited arms. In rats first trained and then kindled, deficits were found on a place task in which rats reached the goal arms of the maze using salient extramaze spatial cues, but not on an intramaze cue task in which rats reached the goal arms using salient intramaze cues. Acquisition of a new place task on the maze was not different between kindled and control rats. In conclusion, partial hippocampal kindling disrupted the retention but not the acquisition of a spatial or place task; retention of a nonspatial cue task was not disrupted.

  17. Parallel Bimodal Bilingual Acquisition: A Hearing Child Mediated in a Deaf Family

    ERIC Educational Resources Information Center

    Cramér-Wolrath, Emelie

    2013-01-01

    The aim of this longitudinal case study was to describe bimodal and bilingual acquisition in a hearing child, Hugo, especially the role his Deaf family played in his linguistic education. Video observations of the family interactions were conducted from the time Hugo was 10 months of age until he was 40 months old. The family language was Swedish…

  19. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    PubMed Central

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers collectively solve the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10(10) pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
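
    The heart of the distributed approach is splitting the input image into sub-images, each padded with a halo of boundary pixels from its neighbors so that block-local computations agree at the seams. A minimal 2D sketch under assumed names (`decompose_with_halo`, a 2x2 grid); the actual implementation distributes such tiles across networked processes:

```python
import numpy as np

def decompose_with_halo(image, grid=(2, 2), halo=1):
    """Split a 2D image into grid[0] x grid[1] blocks, each padded with a
    halo of neighboring pixels, returned with its top-left origin."""
    tiles = []
    ny, nx = image.shape
    sy, sx = ny // grid[0], nx // grid[1]
    for i in range(grid[0]):
        for j in range(grid[1]):
            y0, y1 = i * sy, (i + 1) * sy
            x0, x1 = j * sx, (j + 1) * sx
            tile = image[max(0, y0 - halo):min(ny, y1 + halo),
                         max(0, x0 - halo):min(nx, x1 + halo)]
            tiles.append(((y0, x0), tile))
    return tiles

img = np.arange(64, dtype=float).reshape(8, 8)
tiles = decompose_with_halo(img)   # 4 tiles, each with a 1-pixel halo
```

    In a real distributed run each tile lives on a different machine, and only the halo strips are exchanged over the network between iterations of the segmentation algorithm.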

  20. New equation for the computation of flow velocity in partially filled pipes arranged in parallel.

    PubMed

    Zeghadnia, Lotfi; Djemili, Lakhdar; Houichi, Larbi; Rezgui, Nouredin

    2014-01-01

    This paper presents a new approach for the computation of flow velocity in pipes arranged in parallel based on an analytic development. The estimation of the flow parameters using existing methods requires trial and error procedures. The assessment of flow velocity is of great importance in flow measurement methods and in the design of drainage networks, among others. In drainage network design, the flow is mostly of free surface type. A new method is developed to eliminate the need for trial-and-error procedures: the computation of the flow velocity becomes easy, simple, and direct, with zero deviation compared to Manning equation results and to other approaches that have been considered the best existing solutions. This research work shows that those approaches lack accuracy and do not cover the entire range of flow surface angles: 0° ≤ θ ≤ 360°.
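
    The computation rests on the standard geometry of a partially filled circular pipe, where the wetted central angle θ fixes the flow area and wetted perimeter that enter the Manning formula. A sketch under stated assumptions (SI units; this reproduces the classical Manning route, not the paper's new direct equation):

```python
import math

def manning_velocity_partial_pipe(D, theta, n, S):
    """Mean velocity (m/s) in a partially filled circular pipe from the
    Manning formula. theta is the wetted central angle in radians
    (theta = 2*pi means a full pipe); n is Manning roughness; S is slope."""
    if not 0 < theta <= 2 * math.pi:
        raise ValueError("theta must be in (0, 2*pi]")
    area = D ** 2 / 8 * (theta - math.sin(theta))   # flow cross-section
    perimeter = D * theta / 2                       # wetted perimeter
    Rh = area / perimeter                           # hydraulic radius
    return (1.0 / n) * Rh ** (2.0 / 3.0) * math.sqrt(S)

# Half-full 0.5 m concrete pipe (n ~ 0.013) at a 0.1% slope:
v = manning_velocity_partial_pipe(D=0.5, theta=math.pi, n=0.013, S=0.001)
```

    For the half-full pipe (θ = π) the hydraulic radius equals D/4, the same value as for the full pipe, so both cases give the same Manning velocity, a well-known property of partially filled circular conduits.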

  1. Partial dopaminergic denervation-induced impairment in stimulus discrimination acquisition in parkinsonian rats: a model for early Parkinson's disease.

    PubMed

    Eagle, Andrew L; Olumolade, Oluyemi O; Otani, Hajime

    2015-03-01

    Parkinson's disease (PD) produces progressive nigrostriatal dopamine (DA) denervation resulting in cognitive and motor impairment. However, it is unknown whether cognitive impairments, such as instrumental learning deficits, are associated with the mild DA denervation of early stage PD. The current study sought to model early PD-induced instrumental learning impairments by assessing the effects of low dose (5.5μg), bilateral 6OHDA-induced striatal DA denervation on acquisition of instrumental stimulus discrimination in rats. 6OHDA (n=20) or sham (n=10) lesioned rats were tested for stimulus discrimination acquisition either 1 or 2 weeks post surgical lesion. Stimulus discrimination acquisition across 10 daily sessions was used to assess discriminative accuracy, a probability measure of the shift toward reinforced responding under one stimulus condition (S(D)) and away from responding under another in which reinforcement was withheld (the S(Δ), or extinction, phase). Striatal DA denervation was assayed by tyrosine hydroxylase (TH) staining intensity. Results indicated that 6OHDA lesions produced significant loss of dorsal striatal TH staining intensity and marked impairment in discrimination acquisition, without inducing akinetic motor deficits. Rather, the 6OHDA-induced impairment was associated with perseveration during extinction (the S(Δ) phase). These findings suggest that partial, bilateral striatal DA denervation produces instrumental learning deficits prior to the onset of gross motor impairment, and that the current model is useful for investigating the mild nigrostriatal DA denervation associated with early stage clinical PD.

  2. Single-shot magnetic resonance spectroscopic imaging with partial parallel imaging.

    PubMed

    Posse, Stefan; Otazo, Ricardo; Tsai, Shang-Yueh; Yoshimoto, Akio Ernesto; Lin, Fa-Hsuan

    2009-03-01

    A magnetic resonance spectroscopic imaging (MRSI) pulse sequence based on proton-echo-planar-spectroscopic-imaging (PEPSI) is introduced that measures two-dimensional metabolite maps in a single excitation. Echo-planar spatial-spectral encoding was combined with interleaved phase encoding and parallel imaging using SENSE to reconstruct absorption mode spectra. The symmetrical k-space trajectory compensates phase errors due to convolution of spatial and spectral encoding. Single-shot MRSI at short TE was evaluated in phantoms and in vivo on a 3-T whole-body scanner equipped with a 12-channel array coil. Four-step interleaved phase encoding and fourfold SENSE acceleration were used to encode a 16 x 16 spatial matrix with a 390-Hz spectral width. Comparison with conventional PEPSI and PEPSI with fourfold SENSE acceleration demonstrated comparable sensitivity per unit time when taking into account g-factor-related noise increases and differences in sampling efficiency. LCModel fitting enabled quantification of inositol, choline, creatine, and N-acetyl-aspartate (NAA) in vivo with concentration values in the ranges measured with conventional PEPSI and SENSE-accelerated PEPSI. Cramer-Rao lower bounds were comparable to those obtained with conventional SENSE-accelerated PEPSI at the same voxel size and measurement time. This single-shot MRSI method is therefore suitable for applications that require high temporal resolution to monitor temporal dynamics or to reduce sensitivity to tissue movement.
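
    The SENSE step in the reconstruction can be illustrated independently of spectroscopy: with acceleration factor R, each aliased pixel is a coil-weighted superposition of R true pixels, which a least-squares solve against the coil sensitivities unfolds. A toy sketch (the function name and the identity noise covariance are simplifying assumptions):

```python
import numpy as np

def sense_unfold(folded, sens):
    """Unfold one aliased pixel for reduction factor R.

    folded: (n_coils,) complex aliased coil values at one pixel
    sens:   (n_coils, R) coil sensitivities of the R superimposed pixels
    Returns the R unaliased pixel values (least-squares solution).
    """
    rho, *_ = np.linalg.lstsq(sens, folded, rcond=None)
    return rho

# Toy check: 4 coils, R = 2, known true pixel pair.
rng = np.random.default_rng(0)
sens = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
truth = np.array([1.0 + 0.5j, -0.25 + 2.0j])
folded = sens @ truth                # what the accelerated scan measures
recovered = sense_unfold(folded, sens)
```

    In practice the unfolding is done per pixel with a measured noise covariance, and the conditioning of `sens` determines the g-factor noise amplification the abstract refers to.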

  3. Effective Five Directional Partial Derivatives-Based Image Smoothing and a Parallel Structure Design.

    PubMed

    Choongsang Cho; Sangkeun Lee

    2016-04-01

    Image smoothing has been used for image segmentation, image reconstruction, object classification, and 3D content generation. Several smoothing approaches have been used at the pre-processing step to retain critical edges while removing noise and small details. However, they have limited performance, especially in removing small details and smoothing discrete regions. Therefore, to provide fast and accurate smoothing, we propose an effective scheme that uses a weighted combination of the gradient, Laplacian, and diagonal derivatives of a smoothed image. In addition, to reduce computational complexity, we designed and implemented a parallel processing structure for the proposed scheme on a graphics processing unit (GPU). For an objective evaluation of the smoothing performance, the images were linearly quantized into several layers to generate experimental images, and the quantized images were smoothed using several methods for reconstructing the smoothly changed shape and intensity of the original image. Experimental results showed that the proposed scheme has higher objective scores and better successful smoothing performance than similar schemes, while preserving and removing critical and trivial details, respectively. For computational complexity, the proposed smoothing scheme running on a GPU provided 18 and 16 times lower complexity than the proposed smoothing scheme running on a CPU and the L0-based smoothing scheme, respectively. In addition, a simple noise reduction test was conducted to show the characteristics of the proposed approach; the results showed that the presented algorithm outperforms the state-of-the-art algorithms by more than 5.4 dB. Therefore, we believe that the proposed scheme can be a useful tool for efficient image smoothing.
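
    The paper's exact weighting is not reproduced here, but the flavor of combining axis-aligned and diagonal derivatives can be sketched as a diffusion-style iteration in which separately weighted second differences along four directions drive the smoothing (all names and weights are illustrative):

```python
import numpy as np

def directional_smooth(img, iters=20, dt=0.1, w_axis=1.0, w_diag=0.5):
    """Diffusion-style smoothing driven by second differences along the
    horizontal, vertical, and two diagonal directions -- a stand-in for a
    weighted gradient/Laplacian/diagonal-derivative combination."""
    u = img.astype(float).copy()
    for _ in range(iters):
        p = np.pad(u, 1, mode="edge")
        d_h = p[1:-1, :-2] + p[1:-1, 2:] - 2 * u   # horizontal 2nd diff
        d_v = p[:-2, 1:-1] + p[2:, 1:-1] - 2 * u   # vertical 2nd diff
        d_d1 = p[:-2, :-2] + p[2:, 2:] - 2 * u     # main diagonal
        d_d2 = p[:-2, 2:] + p[2:, :-2] - 2 * u     # anti-diagonal
        u += dt * (w_axis * (d_h + d_v) + w_diag * (d_d1 + d_d2))
    return u

noisy = np.random.default_rng(1).standard_normal((32, 32))
smoothed = directional_smooth(noisy)
```

    Each pixel update is independent of the others within an iteration, which is what makes this kind of scheme a natural fit for the per-pixel GPU parallelization the abstract describes.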

  4. Emergence of intrinsic bursting in trigeminal sensory neurons parallels the acquisition of mastication in weanling rats.

    PubMed

    Brocard, Frédéric; Verdier, Dorly; Arsenault, Isabel; Lund, James P; Kolta, Arlette

    2006-11-01

    There is increasing evidence that a subpopulation of neurons in the dorsal principal sensory trigeminal nucleus are not simple sensory relays to the thalamus but may form the core of the central pattern generating circuits responsible for mastication. In this paper, we used whole cell patch recordings in brain stem slices of young rats to show that these neurons have intrinsic bursting abilities that persist in the absence of extracellular Ca(2+). Application of different K(+) channel blockers affected duration and firing rate of bursts, but left bursting ability intact. Bursting was voltage dependent and was abolished by low concentrations of Na(+) channel blockers. The proportion of bursting neurons increased dramatically in the second postnatal week, in parallel with profound changes in several electrophysiological properties. This is the period in which masticatory movements appear and mature. Bursting was associated with the development of an afterdepolarization that depends on the maturation of a persistent sodium conductance (I(NaP)). An interesting finding was that the occurrence of bursting and the magnitude of I(NaP) were both modulated by the extracellular concentration of Ca(2+). Lowering extracellular [Ca(2+)] increased both I(NaP) and the probability of bursting. We suggest that these mechanisms underlie burst generation in mastication and that similar processes may be found in other motor pattern generators.

  5. Pump in Parallel: Mechanical Assistance of Partial Cavopulmonary Circulation Using a Conventional Ventricular Assist Device.

    PubMed

    Sinha, Pranava; Deutsch, Nina; Ratnayaka, Kanishka; He, Dingchao; Peer, Murfad; Kurkluoglu, Mustafa; Nuszkowski, Mark; Montague, Erin; Mikesell, Gerald; Zurakowski, David; Jonas, Richard

    2017-06-15

    Mechanical assistance of the systemic single ventricle is effective in pulling blood through a cavopulmonary circuit. In patients with superior cavopulmonary connection, this strategy can lead to arterial desaturation secondary to increased inferior caval flow. We hypothesized that overall augmentation in cardiac output with mechanical assistance compensates for the drop in oxygen saturation, thereby maintaining tissue oxygen delivery (DO2). Bidirectional Glenn (BDG) was established in seven swine (25 kg) after a common atrium had been established by balloon septostomy. Mechanical circulatory assistance of the single ventricle was achieved using an axial flow pump with ventricular inflow and aortic outflow. Cardiac output, mean pulmonary artery pressure (PAP), common atrial pressure (left atrial pressure [LAP]), arterial oxygen saturation (SaO2), partial pressure of arterial oxygen (PaO2), and DO2 were compared between assisted and nonassisted circulation. Significant augmentation of cardiac output was achieved with mechanical assistance in BDG circulation (BDG: median [interquartile range {IQR}], 0.8 [0.9-1.15] L/min versus assisted BDG: median [IQR], 1.5 [1.15-1.7] L/min; p = 0.05). Although oxygen saturations and PaO2 tended to be lower with assistance (SaO2; BDG: median [IQR], 43% [32-57%]; assisted BDG: median [IQR], 32% [24-35%]; p = 0.07) (PaO2; BDG: median [IQR], 24 [20-30] mm Hg; assisted BDG: median [IQR], 20 [17-21] mm Hg; p = 0.08), DO2 was unchanged with mechanical assistance (BDG: median [IQR], 94 [35-99] ml/min; assisted BDG: median [IQR], 79 [63-85] ml/min; p = 0.81). No significant change in the LAP or PAP was observed. In the setting of superior cavopulmonary connection/single ventricle, systemic ventricular assistance with a ventricular assist device (VAD) leads to an increase in cardiac output. Arterial oxygen saturation, however, may be lower with mechanical assistance, without any change in DO2.
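
    The hypothesis that higher cardiac output can compensate for lower saturation follows from the standard oxygen delivery relation DO2 = cardiac output x arterial oxygen content. A sketch using that textbook formula with illustrative numbers (the assumed hemoglobin of 10 g/dL is not reported in the abstract):

```python
def oxygen_delivery(cardiac_output_l_min, hb_g_dl, sao2_frac, pao2_mmhg):
    """Systemic oxygen delivery DO2 (mL O2/min) from the standard formula
    DO2 = CO x CaO2 x 10, with arterial oxygen content
    CaO2 = 1.34 * Hb * SaO2 + 0.003 * PaO2 (mL O2/dL)."""
    cao2 = 1.34 * hb_g_dl * sao2_frac + 0.003 * pao2_mmhg
    return cardiac_output_l_min * cao2 * 10

# A rise in cardiac output can offset a fall in saturation (toy numbers
# loosely patterned on the reported medians, with an assumed Hb):
baseline = oxygen_delivery(0.8, 10.0, 0.43, 24)   # unassisted example
assisted = oxygen_delivery(1.5, 10.0, 0.32, 20)   # assisted example
```

    With these illustrative inputs the assisted DO2 is higher despite the lower SaO2, which is the mechanism the study invokes for DO2 being preserved.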

  6. An adaptive undersampling scheme of wavelet-encoded parallel MR imaging for more efficient MR data acquisition

    NASA Astrophysics Data System (ADS)

    Xie, Hua; Bosshard, John C.; Hill, Jason E.; Wright, Steven M.; Mitra, Sunanda

    2016-03-01

    Magnetic Resonance Imaging (MRI) offers noninvasive high resolution, high contrast cross-sectional anatomic images through the body. The data of conventional MRI are collected in the spatial frequency (Fourier) domain, also known as k-space. Because there is still a great need to improve the temporal resolution of MRI, compressed sensing (CS) has been proposed for MR imaging to exploit the sparsity of MR images, showing great potential to reduce scan time significantly; however, it poses its own unique problems. This paper revisits wavelet-encoded MR imaging, which replaces phase encoding in conventional MRI data acquisition with wavelet encoding by applying wavelet-shaped spatially selective radiofrequency (RF) excitation, and keeps the readout direction as frequency encoding. The practicality of wavelet-encoded MRI by itself is limited due to SNR penalties and poor time resolution compared to conventional Fourier-based MRI. To compensate for those disadvantages, this paper first introduces an undersampling scheme named the significance map for sparse wavelet-encoded k-space to speed up data acquisition as well as to allow for various adaptive imaging strategies. The proposed adaptive wavelet-encoded undersampling scheme does not require prior knowledge of the subject to be scanned. Multiband (MB) parallel imaging is also incorporated with wavelet-encoded MRI by exciting multiple regions simultaneously for further reduction in scan time, desirable for medical applications. The simulation and experimental results are presented, showing the feasibility of the proposed approach in further reducing the redundancy of the wavelet k-space data while maintaining relatively high quality.
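
    The significance-map idea, acquiring only the wavelet encodes whose coefficients carry most of the image energy, can be sketched with a one-level orthonormal Haar transform and a keep-largest mask. The function names and the keep fraction are illustrative, not the paper's scheme:

```python
import numpy as np

def haar2d(x):
    """One-level 2D Haar transform (orthonormal: energy is preserved)."""
    a = (x[::2] + x[1::2]) / np.sqrt(2)            # row averages
    d = (x[::2] - x[1::2]) / np.sqrt(2)            # row details
    rows = np.vstack([a, d])
    a2 = (rows[:, ::2] + rows[:, 1::2]) / np.sqrt(2)
    d2 = (rows[:, ::2] - rows[:, 1::2]) / np.sqrt(2)
    return np.hstack([a2, d2])

def significance_map(coeffs, keep=0.25):
    """Boolean mask marking the largest-magnitude fraction of
    coefficients, i.e. the wavelet encodes worth acquiring."""
    flat = np.abs(coeffs).ravel()
    k = max(1, int(keep * flat.size))
    thresh = np.partition(flat, -k)[-k]
    return np.abs(coeffs) >= thresh

img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0    # block-aligned test image
mask = significance_map(haar2d(img), keep=4 / 64)
```

    For this Haar-aligned image only four coefficients are nonzero, so a tiny significance map already captures all of the image energy; ties in magnitude can make the mask slightly larger than the nominal fraction for very sparse inputs.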

  7. Sinusoidal echo-planar imaging with parallel acquisition technique for reduced acoustic noise in auditory fMRI.

    PubMed

    Zapp, Jascha; Schmitter, Sebastian; Schad, Lothar R

    2012-09-01

    To extend the parameter restrictions of a silent echo-planar imaging (sEPI) sequence using sinusoidal readout (RO) gradients, in particular with increased spatial resolution. The sound pressure level (SPL) of the most feasible configurations is compared to conventional EPI having trapezoidal RO gradients. We enhanced the sEPI sequence by integrating a parallel acquisition technique (PAT) on a 3 T magnetic resonance imaging (MRI) system. The SPL was measured for matrix sizes of 64 × 64 and 128 × 128 pixels, without and with PAT (R = 2). The signal-to-noise ratio (SNR) was examined for both sinusoidal and trapezoidal RO gradients. Compared to EPI PAT, the SPL could be reduced by up to 11.1 dB and 5.1 dB for matrix sizes of 64 × 64 and 128 × 128 pixels, respectively. The SNR of sinusoidal RO gradients is lower by a factor of 0.96 on average compared to trapezoidal RO gradients. The sEPI PAT sequence allows for 1) increased resolution, 2) expanded RO frequency range toward lower frequencies, which is in general beneficial for SPL, or 3) shortened TE, TR, and RO train length. At the same time, it generates lower SPL compared to conventional EPI for a wide range of RO frequencies while having the same imaging parameters. Copyright © 2012 Wiley Periodicals, Inc.

  8. Novel iterative reconstruction method with optimal dose usage for partially redundant CT-acquisition

    NASA Astrophysics Data System (ADS)

    Bruder, H.; Raupach, R.; Sunnegardh, J.; Allmendinger, T.; Klotz, E.; Stierstorfer, K.; Flohr, T.

    2015-11-01

    In CT imaging, a variety of applications exist which are strongly SNR limited. However, in some cases redundant data of the same body region provide additional quanta. Examples: in dual energy CT, the spatial resolution has to be compromised to provide good SNR for material decomposition. However, the respective spectral dataset of the same body region provides additional quanta which might be utilized to improve SNR of each spectral component. Perfusion CT is a high dose application, and dose reduction is highly desirable. However, a meaningful evaluation of perfusion parameters might be impaired by noisy time frames. On the other hand, the SNR of the average of all time frames is extremely high. In redundant CT acquisitions, multiple image datasets can be reconstructed and averaged to composite image data. These composite image data, however, might be compromised with respect to contrast resolution and/or spatial resolution and/or temporal resolution. These observations bring us to the idea of transferring high SNR of composite image data to low SNR ‘source’ image data, while maintaining their resolution. It has been shown that the noise characteristics of CT image data can be improved by iterative reconstruction (Popescu et al 2012 Book of Abstracts, 2nd CT Meeting (Salt Lake City, UT) p 148). In case of data dependent Gaussian noise it can be modelled with image-based iterative reconstruction at least in an approximate manner (Bruder et al 2011 Proc. SPIE 7961 79610J). We present a generalized update equation in image space, consisting of a linear combination of the previous update, a correction term which is constrained by the source image data, and a regularization prior, which is initialized by the composite image data. This iterative reconstruction approach we call bimodal reconstruction (BMR). Based on simulation data it is shown that BMR can improve low contrast detectability, substantially reduces the noise power and has the potential to recover

  9. Novel iterative reconstruction method with optimal dose usage for partially redundant CT-acquisition.

    PubMed

    Bruder, H; Raupach, R; Sunnegardh, J; Allmendinger, T; Klotz, E; Stierstorfer, K; Flohr, T

    2015-11-07

    In CT imaging, a variety of applications exist which are strongly SNR limited. However, in some cases redundant data of the same body region provide additional quanta. Examples: in dual energy CT, the spatial resolution has to be compromised to provide good SNR for material decomposition. However, the respective spectral dataset of the same body region provides additional quanta which might be utilized to improve SNR of each spectral component. Perfusion CT is a high dose application, and dose reduction is highly desirable. However, a meaningful evaluation of perfusion parameters might be impaired by noisy time frames. On the other hand, the SNR of the average of all time frames is extremely high. In redundant CT acquisitions, multiple image datasets can be reconstructed and averaged to composite image data. These composite image data, however, might be compromised with respect to contrast resolution and/or spatial resolution and/or temporal resolution. These observations bring us to the idea of transferring high SNR of composite image data to low SNR 'source' image data, while maintaining their resolution. It has been shown that the noise characteristics of CT image data can be improved by iterative reconstruction (Popescu et al 2012 Book of Abstracts, 2nd CT Meeting (Salt Lake City, UT) p 148). In case of data dependent Gaussian noise it can be modelled with image-based iterative reconstruction at least in an approximate manner (Bruder et al 2011 Proc. SPIE 7961 79610J). We present a generalized update equation in image space, consisting of a linear combination of the previous update, a correction term which is constrained by the source image data, and a regularization prior, which is initialized by the composite image data. This iterative reconstruction approach we call bimodal reconstruction (BMR). Based on simulation data it is shown that BMR can improve low contrast detectability, substantially reduces the noise power and has the potential to recover spatial…
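
    A toy image-space version of the described update, combining the previous update (as momentum), a correction constrained by the noisy source image, and a regularization term, can be sketched as follows. Here the regularizer is a generic Laplacian smoothness prior and the composite image only supplies the initialization; the weights and the function name are illustrative, not the authors' values:

```python
import numpy as np

def bimodal_reconstruction(source, composite, n_iter=50,
                           beta=0.5, gamma=0.1, momentum=0.5):
    """Iterative image-space update loosely patterned on the abstract's
    'bimodal reconstruction': momentum term + data-fidelity correction
    toward the source image + smoothness regularization, initialized
    with the high-SNR composite image."""
    u = composite.astype(float).copy()     # prior initialization
    prev = np.zeros_like(u)
    for _ in range(n_iter):
        correction = beta * (source - u)             # data fidelity
        p = np.pad(u, 1, mode="edge")                # smoothness prior:
        lap = (p[:-2, 1:-1] + p[2:, 1:-1] +          # discrete Laplacian
               p[1:-1, :-2] + p[1:-1, 2:] - 4 * u)
        update = momentum * prev + correction + gamma * lap
        u += update
        prev = update
    return u
```

    With these weights the iteration is contractive, settling on a compromise between fidelity to the noisy source image and smoothness, while the composite image fixes the starting point.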

  10. Acquisition of anticancer drug resistance is partially associated with cancer stemness in human colon cancer cells.

    PubMed

    El Khoury, Flaria; Corcos, Laurent; Durand, Stéphanie; Simon, Brigitte; Le Jossic-Corcos, Catherine

    2016-12-01

    Colorectal cancer (CRC) is one of the most aggressive cancers worldwide. Several anticancer agents are available to treat CRC, but eventually cancer relapse occurs. One major cause of chemotherapy failure is the emergence of drug-resistant tumor cells, suspected to originate from the stem cell compartment. The aim of this study was to ask whether drug resistance was associated with the acquisition of stem cell-like properties. We isolated drug-resistant derivatives of two human CRC cell lines, HT29 and HCT116, using two anticancer drugs with distinct modes of action, oxaliplatin and docetaxel. HT29 cells resistant to oxaliplatin and both HT29 and HCT116 cells resistant to docetaxel were characterized for their expression of genes potentially involved in drug resistance, cell growth and cell division, and by surveying stem cell-like phenotypic traits, including marker genes, the ability to repair cell wounds and the ability to form colonospheres. Among the genes involved in platinum or taxane resistance (MDR1, ABCG2, MRP2 or ATP7B), MDR1 was uniquely overexpressed in all the resistant cells. An increase in the cyclin-dependent kinase inhibitor p21, in cyclin D1 and in the CD26 and CD166 cancer stem cell markers was noted in the resistant cells, together with a higher ability to form larger and more abundant colonospheres. However, many phenotypic traits were selectively altered in either HT29- or in HCT116-resistant cells. Expression of EPHB2, ITGβ-1 or Myc was specifically increased in the HT29-resistant cells, whereas only HCT116-resistant cells efficiently repaired cell wounds. Taken together, our results show that human CRC cells selected for their resistance to anticancer drugs displayed a few stem cell characteristics, a small fraction of which was shared between cell lines. The occurrence of marked phenotypic differences between HT29- and HCT116-drug resistant cells indicates that the acquired resistance depends mostly on the parental cell characteristics, rather than on the…

  11. Sequential combination of k-t principal component analysis (PCA) and partial parallel imaging: k-t PCA GROWL.

    PubMed

    Qi, Haikun; Huang, Feng; Zhou, Hongmei; Chen, Huijun

    2017-03-01

    k-t principal component analysis (k-t PCA) is a well-established method for high spatiotemporal resolution dynamic MRI. To further improve the accuracy of k-t PCA, a combination with partial parallel imaging (PPI), k-t PCA/SENSE, has been tested. However, k-t PCA/SENSE suffers from long reconstruction time and limited improvement. This study aims to improve the combination of k-t PCA and PPI in both reconstruction speed and accuracy. A sequential combination scheme called k-t PCA GROWL (GRAPPA operator for wider readout line) was proposed. The GRAPPA operator was performed before k-t PCA to extend each readout line into a wider band, which improved the condition of the encoding matrix in the following k-t PCA reconstruction. k-t PCA GROWL was tested and compared with k-t PCA and k-t PCA/SENSE on cardiac imaging. k-t PCA GROWL consistently resulted in better image quality compared with k-t PCA/SENSE at high acceleration factors for both retrospectively and prospectively undersampled cardiac imaging, with a much lower computation cost. The improvement in image quality became greater with the increase of acceleration factor. By sequentially combining the GRAPPA operator and k-t PCA, the proposed k-t PCA GROWL method outperformed k-t PCA/SENSE in both reconstruction speed and accuracy, suggesting that k-t PCA GROWL is a better combination scheme than k-t PCA/SENSE. Magn Reson Med 77:1058-1067, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
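
    The k-t PCA half of the method rests on the observation that dynamic image series are low-rank in time: PCA of the pixel-by-frame (Casorati) matrix yields a few temporal basis functions that describe the dynamics. A minimal sketch (the names and the synthetic cine are illustrative; the GROWL step and the k-space undersampling are omitted):

```python
import numpy as np

def temporal_pca_basis(dynamic, n_components=4):
    """Leading temporal principal components of a dynamic series,
    obtained from the SVD of the (pixels x frames) Casorati matrix
    after removing each pixel's temporal mean."""
    npix = int(np.prod(dynamic.shape[:-1]))
    casorati = dynamic.reshape(npix, dynamic.shape[-1])
    centered = casorati - casorati.mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_components]          # (n_components, n_frames)

# Synthetic cine: static background plus one sinusoidal "cardiac" mode.
t = np.linspace(0, 2 * np.pi, 20)
rng = np.random.default_rng(2)
frames = np.ones((16, 16, 20)) + np.sin(t) * rng.random((16, 16, 1))
basis = temporal_pca_basis(frames, n_components=2)
```

    In k-t PCA this temporal basis, estimated from low-resolution training data, constrains the reconstruction of the undersampled frames; widening each readout line with the GRAPPA operator beforehand is what improves the conditioning of that constrained solve.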

  12. Clinical Assessment of Standard and Generalized Autocalibrating Partially Parallel Acquisition Diffusion Imaging: Effects of Reduction Factor and Spatial Resolution

    PubMed Central

    Andre, J.B.; Zaharchuk, G.; Fischbein, N.J.; Augustin, M.; Skare, S.; Straka, M.; Rosenberg, J.; Lansberg, M.G.; Kemp, S.; Wijman, C.A.C.; Albers, G.W.; Schwartz, N.E.; Bammer, R.

    2012-01-01

    BACKGROUND AND PURPOSE PI improves routine EPI-based DWI by enabling higher spatial resolution and reducing geometric distortion, though it remains unclear which of these is most important. We evaluated the relative contribution of these factors and assessed their ability to increase lesion conspicuity and diagnostic confidence by using a GRAPPA technique. MATERIALS AND METHODS Four separate DWI scans were obtained at 1.5T in 48 patients with independent variation of in-plane spatial resolution (1.88 mm(2) versus 1.25 mm(2)) and/or reduction factor (R = 1 versus R = 3). A neuroradiologist with access to clinical history and additional imaging sequences provided a reference standard diagnosis for each case. Three blinded neuroradiologists assessed scans for abnormalities and also evaluated multiple imaging-quality metrics by using a 5-point ordinal scale. Logistic regression was used to determine the impact of each factor on subjective image quality and confidence. RESULTS Reference standard diagnoses in the patient cohort were acute ischemic stroke (n = 30), ischemic stroke with hemorrhagic conversion (n = 4), intraparenchymal hemorrhage (n = 9), or no acute lesion (n = 5). While readers preferred both a higher reduction factor and a higher spatial resolution, the largest effect was due to an increased reduction factor (odds ratio, 47 ± 16). Small lesions were more confidently discriminated from artifacts on R = 3 images. The diagnosis changed in 5 of 48 scans, always toward the reference standard reading and exclusively for posterior fossa lesions. CONCLUSIONS PI improves DWI primarily by reducing geometric distortion rather than by increasing spatial resolution. This outcome leads to a more accurate and confident diagnosis of small lesions. PMID:22403781

  13. [High-resolution functional cardiac MR imaging using density-weighted real-time acquisition and a combination of compressed sensing and parallel imaging for image reconstruction].

    PubMed

    Wech, T; Gutberlet, M; Greiser, A; Stäb, D; Ritter, C O; Beer, M; Hahn, D; Köstler, H

    2010-08-01

    The aim of this study was to perform high-resolution functional MR imaging using accelerated density-weighted real-time acquisition (DE) and a combination of compressed sensing (CO) and parallel imaging for image reconstruction. Measurements were performed on a 3 T whole-body system equipped with a dedicated 32-channel body array coil. A one-dimensional density-weighted spin warp technique was used, i.e. non-equidistant phase encoding steps were acquired. The two acceleration techniques, compressed sensing and parallel imaging, were applied sequentially. From a complete Cartesian k-space, a four-fold uniformly undersampled k-space was created. In addition, each undersampled time frame was further undersampled by an additional acceleration factor of 2.1 using an individual density-weighted undersampling pattern for each time frame. Simulations were performed using data of a conventional human in-vivo cine examination, and in-vivo measurements of the human heart were carried out employing an adapted real-time sequence. High-quality DECO parallel-imaging real-time images of human cardiac function could be acquired. An acceleration factor of 8.4 could be achieved, making it possible to maintain the high spatial and temporal resolution without significant noise enhancement. DECO parallel imaging facilitates high acceleration factors, which allows real-time MR acquisition of heart dynamics and function with an image quality comparable to that conventionally achieved with clinically established triggered cine imaging. Georg Thieme Verlag KG Stuttgart, New York.
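
    The compressed-sensing half of such a reconstruction can be illustrated in isolation with the classic iterative soft-thresholding (ISTA) scheme: enforce consistency with the acquired undersampled k-space samples, then shrink toward sparsity. A deliberately simplified sketch that assumes the image itself is sparse and ignores the parallel-imaging and density-weighting components of the actual method:

```python
import numpy as np

def soft(v, lam):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista_cs(y, mask, n_iter=300, lam=0.05):
    """Recover a real, sparse image from undersampled k-space y
    (sampled where mask == 1) by iterative soft-thresholding."""
    x = np.zeros(mask.shape)
    for _ in range(n_iter):
        r = mask * (np.fft.fft2(x, norm="ortho") - y)    # data consistency
        x = soft((x - np.fft.ifft2(r, norm="ortho")).real, lam)
    return x

# Three-spike phantom sampled on a random 60% k-space mask:
rng = np.random.default_rng(3)
truth = np.zeros((16, 16))
truth[2, 3] = truth[8, 10] = truth[12, 5] = 1.0
mask = (rng.random((16, 16)) < 0.6).astype(float)
y = mask * np.fft.fft2(truth, norm="ortho")
recovered = ista_cs(y, mask)
```

    Real cine reconstructions sparsify in a transform domain (e.g. temporal Fourier or wavelet) rather than the image domain, and combine this step with coil sensitivity information, but the alternation between data consistency and shrinkage is the same.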

  14. A scalable parallel open architecture data acquisition system for low to high rate experiments, test beams and all SSC (Superconducting Super Collider) detectors

    SciTech Connect

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C. ); Lockyer, N.; VanBerg, R. )

    1989-12-01

    A new era of high-energy physics research is beginning, requiring accelerators with much higher luminosities and interaction rates in order to discover new elementary particles. As a consequence, both orders of magnitude higher data rates from the detector and online processing power well beyond the capabilities of current high energy physics data acquisition systems are required. This paper describes a new data acquisition system architecture which draws heavily from the communications industry, is totally parallel (i.e., without any bottlenecks), is capable of data rates of hundreds of gigabytes per second from the detector and into an array of online processors (i.e., a processor farm), and uses an open systems architecture to guarantee compatibility with future commercially available online processor farms. The main features of the system architecture are standard interface ICs to detector subsystems wherever possible, fiber optic digital data transmission from the near-detector electronics, a self-routing parallel event builder, and the use of industry-supported and high-level language programmable processors in the proposed BCD system for both triggers and online filters. A brief status report of an ongoing project at Fermilab to build the self-routing parallel event builder will also be given in the paper. 3 figs., 1 tab.

  15. A randomized, double-blind, placebo-controlled, parallel-group study of rufinamide as adjunctive therapy for refractory partial-onset seizures.

    PubMed

    Biton, Victor; Krauss, Gregory; Vasquez-Santana, Blanca; Bibbiani, Francesco; Mann, Allison; Perdomo, Carlos; Narurkar, Milind

    2011-02-01

    Efficacy and safety of adjunctive rufinamide (3,200 mg/day) was assessed in adolescents and adults with inadequately controlled partial-onset seizures receiving maintenance therapy with up to three antiepileptic drugs (AEDs). This randomized, double-blind, placebo-controlled, parallel-group, multicenter study comprised a 56-day baseline phase (BP), 12-day titration phase, and 84-day maintenance phase (MP). The primary efficacy variable was percentage change in total partial seizure frequency per 28 days (MP vs. BP). Secondary efficacy outcome measures included ≥50% responder rate and reduction in mean total partial seizure frequency during the MP. Safety and tolerability evaluation included adverse events (AEs), physical and neurologic examinations, and laboratory values. Pharmacokinetic and pharmacodynamic assessments were conducted. Three hundred fifty-seven patients were randomized: 176 to rufinamide and 181 to placebo. Patients had a median of 13.3 seizures per 28 days during BP; 86% were receiving ≥2 AEDs. For the intent-to-treat population, the median percentage reduction in total partial seizure frequency per 28 days was 23.25 for rufinamide versus 9.80 for placebo (p = 0.007). Rufinamide-treated patients were more than twice as likely to have had a ≥50% reduction in partial seizure frequency (32.5% vs. 14.3%; p < 0.001) and had a greater reduction in median total partial seizure rate per 28 days during the MP (13.2 vs. 5.2; p < 0.001). Treatment-emergent AEs occurring at ≥5% higher incidence in the rufinamide group compared with placebo were dizziness, fatigue, nausea, somnolence, and diplopia. Adjunctive treatment with rufinamide reduced total partial seizures in refractory patients. AEs reported were consistent with the known tolerability profile of rufinamide. Wiley Periodicals, Inc. © 2010 International League Against Epilepsy.
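
    The primary endpoint arithmetic above, seizure frequency normalized to 28 days and its percentage change from baseline, is straightforward; the counts in this sketch are hypothetical, chosen only to land near the reported median reduction:

```python
def rate_per_28_days(n_seizures, n_days):
    """Seizure frequency normalized to a 28-day period."""
    return n_seizures / n_days * 28.0

def pct_change(baseline_rate, maintenance_rate):
    """Percentage change, maintenance phase vs. baseline phase;
    negative values mean improvement."""
    return (maintenance_rate - baseline_rate) / baseline_rate * 100.0

bp = rate_per_28_days(26, 56)   # hypothetical: 26 seizures in the 56-day baseline
mp = rate_per_28_days(30, 84)   # hypothetical: 30 seizures in the 84-day maintenance
change = round(pct_change(bp, mp), 2)   # -23.08, near the reported median of -23.25
```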

  16. Comparative Analysis on the Performance of a Short String of Series-Connected and Parallel-Connected Photovoltaic Array Under Partial Shading

    NASA Astrophysics Data System (ADS)

    Vijayalekshmy, S.; Rama Iyer, S.; Beevi, Bisharathu

    2015-09-01

    The output power from a photovoltaic (PV) array decreases, and the array exhibits multiple peaks, when it is subjected to partial shading (PS). The power loss in a PV array varies with the array configuration, physical location and the shading pattern. This paper compares the relative performance of a PV array consisting of a short string of three PV modules in two different configurations. The mismatch loss, shading loss, fill factor and the power loss due to failure to track the global maximum power point are analysed for a series string with bypass diodes and for a short parallel string, using a MATLAB/Simulink model. The performance of the system is investigated for three different solar insolation conditions under the same shading pattern. Results indicate that there is considerably greater power loss due to shading in a series string during PS than in a parallel string with the same number of modules.
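
    The metrics compared above come directly from the I-V curve: the fill factor is FF = Pmax / (Voc · Isc), and the mismatch loss is the gap between the sum of the individual module maxima and the shaded array's global maximum. A sketch with a hypothetical single-diode-style curve and made-up shading numbers (not the paper's data):

```python
import numpy as np

def iv_curve(isc, voc, a=6.0, n=601):
    """Hypothetical single-diode-style I-V curve: I = Isc at V = 0, 0 at Voc."""
    v = np.linspace(0.0, voc, n)
    i = isc * (1.0 - np.expm1(v / a) / np.expm1(voc / a))
    return v, i

def fill_factor(v, i):
    """FF = Pmax / (Voc * Isc) for a measured curve."""
    return (v * i).max() / (v.max() * i.max())

v, i = iv_curve(isc=8.0, voc=60.0)
ff = fill_factor(v, i)

# mismatch loss: sum of individual module maxima minus the shaded-array maximum
module_pmax = [120.0, 120.0, 45.0]  # hypothetical: third module partially shaded
array_pmax = 230.0                  # hypothetical global MPP of the series string
mismatch_loss = sum(module_pmax) - array_pmax   # 55 W lost to mismatch
```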

  17. Experimental study on heat transfer enhancement of laminar ferrofluid flow in horizontal tube partially filled porous media under fixed parallel magnet bars

    NASA Astrophysics Data System (ADS)

    Sheikhnejad, Yahya; Hosseini, Reza; Saffar Avval, Majid

    2017-02-01

    In this study, steady-state laminar ferroconvection through a circular horizontal tube partially filled with porous media under constant heat flux is experimentally investigated. Transverse magnetic fields were applied to the ferrofluid flow by two fixed parallel magnet bars positioned at a certain distance from the beginning of the test section. The results show promising, notable enhancement in heat transfer as a consequence of the partially filled porous media and the magnetic field: up to 2.2-fold and 1.4-fold increases in the heat transfer coefficient were observed, respectively. It was found that the presence of both porous media and magnetic field simultaneously can improve heat transfer by up to 2.4-fold; the porous media plays the major role in this configuration. The magnetic field and porous media also impose a higher pressure loss along the pipe, to which the porous media again contributes more than the magnetic field.

  18. Morphological Awareness in Vocabulary Acquisition among Chinese-Speaking Children: Testing Partial Mediation via Lexical Inference Ability

    ERIC Educational Resources Information Center

    Zhang, Haomin

    2015-01-01

    The goal of this study was to investigate the effect of Chinese-specific morphological awareness on vocabulary acquisition among young Chinese-speaking students. The participants were 288 Chinese-speaking second graders from three different cities in China. Multiple regression analysis and mediation analysis were used to uncover the mediated and…
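
    The partial-mediation test named in the title amounts to two regressions: path a (predictor to mediator) and path b (mediator to outcome, controlling for the predictor), with the indirect effect a·b and the remaining direct effect c'. A sketch on synthetic data with a built-in mediation structure (the variable roles are hypothetical stand-ins: X = morphological awareness, M = lexical inference, Y = vocabulary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)
m = 0.7 * x + 0.1 * rng.normal(size=n)           # true a = 0.7
y = 0.5 * m + 0.2 * x + 0.1 * rng.normal(size=n)  # true b = 0.5, c' = 0.2

def ols(design, target):
    """Least-squares coefficients with an intercept column prepended."""
    A = np.column_stack([np.ones(len(target))] + list(design))
    return np.linalg.lstsq(A, target, rcond=None)[0]

a = ols([x], m)[1]             # path a: X -> M
b = ols([x, m], y)[2]          # path b: M -> Y, controlling for X
c_direct = ols([x, m], y)[1]   # direct effect of X on Y
indirect = a * b               # mediated (indirect) effect, ~0.35 here
```

    Partial mediation corresponds to both `indirect` and `c_direct` being nonzero, which is what the abstract's mediation analysis tests.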

  19. 16 CFR 802.42 - Partial exemption for acquisitions in connection with the formation of certain joint ventures or...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... connection with the formation of certain joint ventures or other corporations. 802.42 Section 802.42... acquisitions in connection with the formation of certain joint ventures or other corporations. (a) Whenever one or more of the contributors in the formation of a joint venture or other corporation which otherwise...

  1. 16 CFR 802.42 - Partial exemption for acquisitions in connection with the formation of certain joint ventures or...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... connection with the formation of certain joint ventures or other corporations. 802.42 Section 802.42 Commercial Practices FEDERAL TRADE COMMISSION RULES, REGULATIONS, STATEMENTS AND INTERPRETATIONS UNDER THE... acquisitions in connection with the formation of certain joint ventures or other corporations. (a) Whenever...

  2. 16 CFR 802.42 - Partial exemption for acquisitions in connection with the formation of certain joint ventures or...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... connection with the formation of certain joint ventures or other corporations. 802.42 Section 802.42 Commercial Practices FEDERAL TRADE COMMISSION RULES, REGULATIONS, STATEMENTS AND INTERPRETATIONS UNDER THE... acquisitions in connection with the formation of certain joint ventures or other corporations. (a) Whenever...

  3. Reducing Contrast Contamination in Radial Turbo-Spin-Echo Acquisitions by Combining a Narrow-Band KWIC Filter With Parallel Imaging

    PubMed Central

    Neumann, Daniel; Breuer, Felix A.; Völker, Michael; Brandt, Tobias; Griswold, Mark A.; Jakob, Peter M.; Blaimer, Martin

    2014-01-01

    Purpose Cartesian turbo spin-echo (TSE) and radial TSE images are usually reconstructed by assembling data containing different contrast information into a single k-space. This approach results in mixed contrast contributions in the images, which may reduce their diagnostic value. The goal of this work is to improve the image contrast from radial TSE acquisitions by reducing the contribution of signals with undesired contrast information. Methods Radial TSE acquisitions allow the reconstruction of multiple images with different T2 contrasts using the k-space weighted image contrast (KWIC) filter. In this work, the image contrast is improved by reducing the bandwidth of the KWIC filter. Data for the reconstruction of a single image are selected from within a small temporal range around the desired echo time. The resulting data set is undersampled and therefore an iterative parallel imaging algorithm is applied to remove aliasing artifacts. Results Radial TSE images of the human brain reconstructed with the proposed method show an improved contrast when compared to Cartesian TSE images or radial TSE images with conventional KWIC reconstructions. Conclusion The proposed method provides multi-contrast images from radial TSE data with contrasts similar to multi spin-echo images. Contaminations from unwanted contrast weightings are strongly reduced. PMID:24436227
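
    The narrow-band KWIC idea can be illustrated with a simple echo-selection mask: the k-space centre uses only views from the target echo, the admitted echo band widens with radius, and capping that band (the "narrow band") leaves outer k-space undersampled, which is why the abstract pairs it with parallel imaging. This is a schematic sketch, not the authors' filter:

```python
import numpy as np

def kwic_echo_mask(n_echoes, n_radius_bins, target_echo, max_half_width):
    """Boolean mask (echo, radius bin): which echoes contribute at each radius.
    Radius bin 0 admits only the target echo; the band grows with radius but
    is capped at max_half_width, so outer bins stay undersampled."""
    mask = np.zeros((n_echoes, n_radius_bins), dtype=bool)
    for r in range(n_radius_bins):
        half = min(r, max_half_width)
        lo = max(0, target_echo - half)
        hi = min(n_echoes - 1, target_echo + half)
        mask[lo:hi + 1, r] = True
    return mask

m = kwic_echo_mask(n_echoes=16, n_radius_bins=8, target_echo=8, max_half_width=2)
```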

  4. A Performance Comparison of the Parallel Preconditioners for Iterative Methods for Large Sparse Linear Systems Arising from Partial Differential Equations on Structured Grids

    NASA Astrophysics Data System (ADS)

    Ma, Sangback

    In this paper we compare various parallel preconditioners, such as Point-SSOR (Symmetric Successive OverRelaxation), ILU(0) (Incomplete LU) in the wavefront ordering, ILU(0) in the multi-color ordering, Multi-Color Block SOR (Successive OverRelaxation), SPAI (SParse Approximate Inverse) and pARMS (Parallel Algebraic Recursive Multilevel Solver), for solving large sparse linear systems arising from two-dimensional PDEs (Partial Differential Equations) on structured grids. Point-SSOR is well known, and ILU(0) is one of the most popular preconditioners, but it is inherently serial. ILU(0) in the wavefront ordering maximizes the parallelism available in the natural order, but the lengths of the wavefronts are often nonuniform. ILU(0) in the multi-color ordering is a simple way of achieving parallelism of order N, where N is the order of the matrix, but its convergence rate often deteriorates compared to that of the natural ordering. We have chosen the Multi-Color Block SOR preconditioner combined with a direct sparse matrix solver, since for the Laplacian matrix the SOR method is known to have a nondeteriorating rate of convergence when used with the multi-color ordering; by using the block version we expect to minimize interprocessor communication. SPAI computes the sparse approximate inverse directly by a least-squares method. Finally, ARMS is a preconditioner that recursively exploits the concept of independent sets, and pARMS is its parallel version. Experiments were conducted for finite difference and finite element discretizations of five two-dimensional PDEs with large mesh sizes up to a million unknowns on an IBM p595 machine with distributed memory. Our matrices are real positive, i.e., the real parts of their eigenvalues are positive. We used GMRES(m) as the outer iterative method, so that the convergence of GMRES(m) for our test matrices is mathematically guaranteed. Interprocessor communication was done using MPI (Message Passing Interface) primitives.
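
    To make the preconditioning idea concrete, here is a minimal Point-SSOR-preconditioned Krylov solve on the 2D Laplacian. Two hedges: the paper uses GMRES(m) as the outer method, while this sketch uses conjugate gradients for brevity (valid here because the test matrix is symmetric positive definite), and it uses small dense solves purely for illustration:

```python
import numpy as np

def poisson2d(n):
    """5-point 2D Laplacian on an n x n grid (dense, for illustration only)."""
    T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return np.kron(np.eye(n), T) + np.kron(T, np.eye(n))

def ssor_apply(A, r, omega=1.5):
    """z = M^{-1} r for Point-SSOR: M = (D + wL) D^{-1} (D + wU) / (w(2 - w))."""
    D = np.diag(A)
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    t = np.linalg.solve(np.diag(D) + omega * L, r)
    z = np.linalg.solve(np.diag(D) + omega * U, D * t)
    return omega * (2 - omega) * z

def pcg(A, b, precond=None, tol=1e-8, maxit=500):
    """Preconditioned conjugate gradients; precond(A, r) applies M^{-1}."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(A, r) if precond else r
    p = z.copy()
    rz = r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it
        z = precond(A, r) if precond else r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

A = poisson2d(16)
b = np.ones(A.shape[0])
x_plain, it_plain = pcg(A, b)
x_ssor, it_ssor = pcg(A, b, precond=ssor_apply)
```

    The SSOR-preconditioned solve needs markedly fewer iterations than the unpreconditioned one, which is the trade-off the paper studies at scale (preconditioner quality vs. its parallelizability).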

  5. RH 1.5D: a massively parallel code for multi-level radiative transfer with partial frequency redistribution and Zeeman polarisation

    NASA Astrophysics Data System (ADS)

    Pereira, Tiago M. D.; Uitenbroek, Han

    2015-02-01

    The emergence of three-dimensional magneto-hydrodynamic simulations of stellar atmospheres has sparked a need for efficient radiative transfer codes to calculate detailed synthetic spectra. We present RH 1.5D, a massively parallel code based on the RH code and capable of performing Zeeman polarised multi-level non-local thermodynamical equilibrium calculations with partial frequency redistribution for an arbitrary number of chemical species. The code calculates spectra from 3D, 2D or 1D atmospheric models on a column-by-column basis (or 1.5D). While the 1.5D approximation breaks down in the cores of very strong lines in an inhomogeneous environment, it is nevertheless suitable for a large range of scenarios and allows for faster convergence with finer control over the iteration of each simulation column. The code scales well to at least tens of thousands of CPU cores, and is publicly available. In the present work we briefly describe its inner workings, strategies for convergence optimisation, its parallelism, and some possible applications.
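
    The column-by-column (1.5D) strategy is embarrassingly parallel: each (x, y) column of the 3D model is solved independently. A minimal sketch of that decomposition, with a trivial smoothing stand-in for the actual radiative-transfer solve and thread workers in place of the code's tens of thousands of CPU cores:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def solve_column(column):
    """Hypothetical stand-in for a per-column radiative-transfer solve
    (here just a smoothing along the vertical axis)."""
    kernel = np.ones(3) / 3.0
    return np.convolve(column, kernel, mode="same")

def solve_1p5d(atmosphere, workers=4):
    """1.5D strategy: flatten the 3D model into independent (x, y) columns
    and map the solver over them in parallel."""
    nx, ny, nz = atmosphere.shape
    cols = atmosphere.reshape(nx * ny, nz)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        out = list(pool.map(solve_column, cols))
    return np.array(out).reshape(nx, ny, nz)

atm = np.random.default_rng(1).random((8, 8, 32))
result = solve_1p5d(atm)
```

    Because columns share no state, the same structure maps directly onto MPI ranks, which is what makes the per-column convergence control mentioned above cheap.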

  6. A 24-week multicenter, randomized, double-blind, parallel-group, dose-ranging study of rufinamide in adults and adolescents with inadequately controlled partial seizures.

    PubMed

    Elger, Christian E; Stefan, Hermann; Mann, Allison; Narurkar, Milind; Sun, Yijun; Perdomo, Carlos

    2010-02-01

    To assess the efficacy, safety, tolerability, and pharmacokinetics of adjunctive rufinamide in adults and adolescents with inadequately controlled partial seizures receiving treatment with one to three concomitant antiepileptic drugs (AEDs). A 24-week multicenter Phase II clinical study was conducted (n=647), comprising a 12-week prospective baseline phase and a 12-week randomized double-blind, parallel-group, five-arm (placebo and rufinamide 200, 400, 800, and 1600 mg/day) treatment phase. The linear trend of dose response for seizure frequency per 28 days in the double-blind treatment phase - the primary efficacy outcome measure - was statistically significant in favor of rufinamide (estimated slope=-0.049, P=0.003; minimally efficacious dose, 400 mg/day). Response rates, defined as a ≥50% reduction in seizure frequency per 28 days, also revealed a significant linear trend of dose response (P=0.0019, logistic regression analysis). Adverse events were comparable between placebo and all rufinamide groups except the 1600 mg/day group; no safety signals were observed. These results suggest that in the dose range of 400-1600 mg/day, add-on rufinamide therapy may benefit patients with inadequately controlled partial seizures and is generally well tolerated. These data also suggest that higher doses may confer additional efficacy without adversely affecting safety and tolerability.

  7. Measured count-rate performance of the Discovery STE PET/CT scanner in 2D, 3D and partial collimation acquisition modes.

    PubMed

    Macdonald, L R; Schmitz, R E; Alessio, A M; Wollenweber, S D; Stearns, C W; Ganin, A; Harrison, R L; Lewellen, T K; Kinahan, P E

    2008-07-21

    We measured count rates and scatter fraction on the Discovery STE PET/CT scanner in conventional 2D and 3D acquisition modes, and in a partial collimation mode between 2D and 3D. As part of the evaluation of using partial collimation, we estimated global count rates using a scanner model that combined computer simulations with an empirical live-time function. Our measurements followed the NEMA NU2 count rate and scatter-fraction protocol to obtain true, scattered and random coincidence events, from which noise equivalent count (NEC) rates were calculated. The effect of patient size was considered by using 27 cm and 35 cm diameter phantoms, in addition to the standard 20 cm diameter cylindrical count-rate phantom. Using the scanner model, we evaluated two partial collimation cases: removing half of the septa (2.5D) and removing two-thirds of the septa (2.7D). Based on predictions of the model, a 2.7D collimator was constructed. Count rates and scatter fractions were then measured in 2D, 2.7D and 3D. The scanner model predicted relative NEC variation with activity, as confirmed by measurements. The measured 2.7D NEC was equal to or greater than 3D NEC for all activity levels in the 27 cm and 35 cm phantoms. In the 20 cm phantom, 3D NEC was somewhat higher (approximately 15%) than 2.7D NEC at 100 MBq. For all higher activity concentrations, 2.7D NEC was greater and peaked 26% above the 3D peak NEC. The peak NEC in 2.7D mode occurred at approximately 425 MBq, and was 26-50% greater than the peak 3D NEC, depending on object size. NEC in 2D was considerably lower, except at relatively high activity concentrations. Partial collimation shows promise for improved noise equivalent count rates in clinical imaging without altering other detector parameters.
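
    The figure of merit used throughout this entry is computed directly from the measured true (T), scattered (S) and random (R) coincidence rates. A sketch with hypothetical rates; note the randoms weighting factor k varies by convention (k = 2 for delayed-window subtraction, k = 1 for a noiseless randoms estimate):

```python
def noise_equivalent_counts(trues, scatter, randoms, k=1.0):
    """NEC = T^2 / (T + S + k*R)."""
    return trues ** 2 / (trues + scatter + k * randoms)

def scatter_fraction(trues, scatter):
    """NEMA scatter fraction SF = S / (T + S)."""
    return scatter / (trues + scatter)

# hypothetical count rates (kcps) at one activity level, not measured values
T, S, R = 120.0, 60.0, 90.0
nec = noise_equivalent_counts(T, S, R)   # 120^2 / 270 ~ 53.3 kcps
sf = scatter_fraction(T, S)              # 60 / 180 ~ 0.33
```

    Evaluating NEC across activity levels, as in the protocol above, locates the peak NEC and the activity at which it occurs.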

  8. Immediate versus delayed loading of strategic mini dental implants for the stabilization of partial removable dental prostheses: a patient cluster randomized, parallel-group 3-year trial.

    PubMed

    Mundt, Torsten; Al Jaghsi, Ahmad; Schwahn, Bernd; Hilgert, Janina; Lucas, Christian; Biffar, Reiner; Schwahn, Christian; Heinemann, Friedhelm

    2016-07-30

    Acceptable short-term survival rates (>90%) of mini-implants (diameter < 3.0 mm) are documented only for mandibular overdentures. Sound data for mini-implants as strategic abutments for better retention of partial removable dental prostheses (PRDPs) are not available. The purpose of this study is to test the hypothesis that immediately loaded mini-implants show more bone loss and less success than strategic mini-implants with delayed loading. In this four-center (one university hospital, three dental practices in Germany), parallel-group, controlled clinical trial, which is cluster randomized on the patient level, a total of 80 partially edentulous patients with an unfavourable number and distribution of remaining abutment teeth in at least one jaw will receive supplementary mini-implants to stabilize their PRDP. The mini-implants are either loaded immediately after implant placement (test group) or after a four-month delay (control group). Follow-up of the patients will be performed for 36 months. The primary outcome is the radiographic bone level change at the implants. The secondary outcome is implant success as a composite variable. Tertiary outcomes include clinical, subjective (quality of life, satisfaction, chewing ability) and dental or technical complications. Strategic implants under an existing PRDP are documented only for standard-diameter implants. Mini-implants could be a minimally invasive and low-cost solution for this treatment modality. The trial is registered at Deutsches Register Klinischer Studien (German register of clinical trials) under DRKS-ID: DRKS00007589 ( www.germanctr.de ) on January 13th, 2015.

  9. An open, parallel, randomized, comparative, multicenter investigation evaluating the efficacy and tolerability of Mepilex Ag versus silver sulfadiazine in the treatment of deep partial-thickness burn injuries.

    PubMed

    Tang, Hongtai; Lv, Guozhong; Fu, Jinfeng; Niu, Xihua; Li, Yeyang; Zhang, Mei; Zhang, Guoʼan; Hu, Dahai; Chen, Xiaodong; Lei, Jin; Qi, Hongyan; Xia, Zhaofan

    2015-05-01

    Partial-thickness burns are among the most frequently encountered types of burns, and numerous dressing materials are available for their treatment. A multicenter, open, randomized, and parallel study was undertaken to determine the efficacy and tolerability of silver sulfadiazine (SSD) compared with an absorbent foam silver dressing, Mepilex Ag, on patients aged between 5 years and 65 years with deep partial-thickness thermal burn injuries (2.5-25% total body surface area). Patients were randomly assigned to either SSD (n = 82) applied daily or a Mepilex Ag dressing (n = 71) applied every 5 days to 7 days. The treatment period was up to 4 weeks. There was no significant difference between the two treatment groups with respect to the primary end point of time to healing, which occurred in 56 (79%) of 71 patients after a median follow-up time of 15 days in the Mepilex Ag group compared with 65 (79%) of 82 patients after a median follow-up time of 16 days in the SSD group (p = 0.74). There was also no significant difference in the percentage of study burn healed. Patients in the Mepilex Ag group had 87.1% of their study burn healed (out of the total burn area) compared with 85.2% of patients in the SSD group. However, the mean total number of dressings used was significantly more in the SSD group (14.0) compared with the Mepilex Ag group (3.06, p < 0.0001). There was no significant difference in the time until skin graft was performed between the two study groups. There was no difference in healing rates between Mepilex Ag and SSD, with both products well tolerated. The longer wear time of Mepilex Ag promotes undisturbed healing and makes it easier for patients to continue with their normal lives sooner. Therapeutic study, level III.

  10. A Spaceborne Synthetic Aperture Radar Partial Fixed-Point Imaging System Using a Field-Programmable Gate Array-Application-Specific Integrated Circuit Hybrid Heterogeneous Parallel Acceleration Technique

    PubMed Central

    Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue

    2017-01-01

    With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. PMID:28672813
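
    The word-length trade-off underlying the fixed-point FFT can be illustrated by quantizing the FFT input at different bit widths and measuring the spectral error. This sketch models only input quantization, not the internal fixed-point arithmetic and word-length growth that the paper's error propagation model analyses:

```python
import numpy as np

def quantize(x, bits):
    """Quantize a signal in [-1, 1) to signed fixed point of the given
    word length (round-to-nearest with saturation)."""
    scale = 2 ** (bits - 1)
    q = np.clip(np.round(x * scale), -scale, scale - 1)
    return q / scale

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 4096) * 0.9   # keep headroom below full scale

ref = np.fft.fft(x)
errs = {}
for bits in (8, 12, 16):
    spec = np.fft.fft(quantize(x, bits))
    errs[bits] = np.linalg.norm(spec - ref) / np.linalg.norm(ref)
```

    Each extra bit roughly halves the quantization noise, which is the kind of relationship used to pick the shortest word length that still meets the image-quality target.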

  11. A Spaceborne Synthetic Aperture Radar Partial Fixed-Point Imaging System Using a Field-Programmable Gate Array-Application-Specific Integrated Circuit Hybrid Heterogeneous Parallel Acceleration Technique.

    PubMed

    Yang, Chen; Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue

    2017-06-24

    With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384.

  12. Solitary Sound Play during Acquisition of English Vocalizations by an African Grey Parrot (Psittacus Erithacus): Possible Parallels with Children's Monologue Speech.

    ERIC Educational Resources Information Center

    Pepperberg, Irene M.; And Others

    1991-01-01

    Examines one component of an African Grey parrot's monologue behavior, private speech, while he was being taught new vocalizations. The data are discussed in terms of the possible functions of monologues during the parrot's acquisition of novel vocalizations. (85 references) (GLR)

  13. Simultaneous acquisition of spatial harmonics (SMASH): fast imaging with radiofrequency coil arrays.

    PubMed

    Sodickson, D K; Manning, W J

    1997-10-01

    SiMultaneous Acquisition of Spatial Harmonics (SMASH) is a new fast-imaging technique that increases MR image acquisition speed by an integer factor over existing fast-imaging methods, without significant sacrifices in spatial resolution or signal-to-noise ratio. Image acquisition time is reduced by exploiting spatial information inherent in the geometry of a surface coil array to substitute for some of the phase encoding usually produced by magnetic field gradients. This allows for partially parallel image acquisitions using many of the existing fast-imaging sequences. Unlike the data combination algorithms of prior proposals for parallel imaging, SMASH reconstruction involves a small set of MR signal combinations prior to Fourier transformation, which can be advantageous for artifact handling and practical implementation. A twofold savings in image acquisition time is demonstrated here using commercial phased array coils on two different MR-imaging systems. Larger time savings factors can be expected for appropriate coil designs.
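
    The substitution of gradient phase encoding by coil geometry can be demonstrated in one dimension: coil weights are fitted so that the combined sensitivity approximates a spatial harmonic, and the weighted sum of coil k-space lines then yields a shifted (skipped) k-space line. This sketch uses idealised coils whose sensitivities are exact harmonics, so the fit and the reconstruction are essentially exact; real arrays only approximate this:

```python
import numpy as np

N, n_coils = 64, 4
y = np.arange(N)
# idealised coil sensitivities: exact spatial harmonics exp(2*pi*i*c*y/N)
C = np.exp(2j * np.pi * np.outer(np.arange(n_coils), y) / N)

rng = np.random.default_rng(0)
obj = rng.normal(size=N) + 1j * rng.normal(size=N)
coil_k = np.fft.fft(C * obj, axis=1)   # per-coil k-space

# fit weights so sum_c w_c C_c(y) ~ exp(2*pi*i*m*y/N) for harmonics m = 0, 1
w = [np.linalg.lstsq(C.T, np.exp(2j * np.pi * m * y / N), rcond=None)[0]
     for m in (0, 1)]

acquired = np.arange(0, N, 2)          # only even lines measured (R = 2)
recon = np.zeros(N, dtype=complex)
recon[acquired] = w[0] @ coil_k[:, acquired]             # 0th harmonic: same line
recon[(acquired - 1) % N] = w[1] @ coil_k[:, acquired]   # 1st harmonic: shifted line
```

    The weight fitting happens before the Fourier transform, mirroring the paper's point that SMASH combines raw signals rather than unfolding reconstructed images.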

  14. Overlapping MALDI-Mass Spectrometry Imaging for In-Parallel MS and MS/MS Data Acquisition without Sacrificing Spatial Resolution

    NASA Astrophysics Data System (ADS)

    Hansen, Rebecca L.; Lee, Young Jin

    2017-09-01

    Metabolomics experiments require chemical identifications, often through MS/MS analysis. In mass spectrometry imaging (MSI), this necessitates running several serial tissue sections or using a multiplex data acquisition method. We have previously developed a multiplex MSI method to obtain MS and MS/MS data in a single experiment to acquire more chemical information in less data acquisition time. In this method, each raster step is composed of several spiral steps and each spiral step is used for a separate scan event (e.g., MS or MS/MS). One main limitation of this method is the loss of spatial resolution as the number of spiral steps increases, limiting its applicability for high-spatial resolution MSI. In this work, we demonstrate multiplex MS imaging is possible without sacrificing spatial resolution by the use of overlapping spiral steps, instead of spatially separated spiral steps as used in the previous work. Significant amounts of matrix and analytes are still left after multiple spectral acquisitions, especially with nanoparticle matrices, so that high quality MS and MS/MS data can be obtained on virtually the same tissue spot. This method was then applied to visualize metabolites and acquire their MS/MS spectra in maize leaf cross-sections at 10 μm spatial resolution.

  15. A new numerical method for investigation of thermophoresis and Brownian motion effects on MHD nanofluid flow and heat transfer between parallel plates partially filled with a porous medium

    NASA Astrophysics Data System (ADS)

    Sayehvand, Habib-Olah; Basiri Parsa, Amir

    A numerical investigation of nanofluid heat and mass transfer in a channel partially filled with a porous medium in the presence of a uniform magnetic field is carried out using a new computational iterative approach known as the spectral local linearization method (SLLM). A similarity solution is used to reduce the governing system of partial differential equations to a set of nonlinear ordinary differential equations, which are then solved by the SLLM; the validity of the solutions is verified against numerical results (a fourth-order Runge-Kutta scheme with the shooting method). In modeling the flow in the channel, the effects of flow inertia, Brinkman friction, nanoparticle concentration and the thickness of the porous region are taken into account. Results are obtained for velocity, temperature, concentration, skin friction, Nusselt number and Sherwood number. The effects of active parameters such as the viscosity parameter, Hartmann number, Darcy number, Prandtl number, Schmidt number, Eckert number, Brownian motion parameter, thermophoresis parameter and the thickness of the porous region on the hydrodynamic, heat and mass transfer behaviors are also investigated.
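
    The validation method mentioned above, a fourth-order Runge-Kutta scheme with shooting, converts a boundary-value problem into initial-value problems and iterates on the unknown initial slope. A minimal sketch on the model problem y'' = -y, y(0) = 0, y(1) = 1 (not the paper's equations), whose exact solution is sin(x)/sin(1):

```python
import numpy as np

def rk4(f, u0, t0, t1, n_steps):
    """Classical fourth-order Runge-Kutta for a first-order system u' = f(t, u)."""
    h = (t1 - t0) / n_steps
    t, u = t0, np.asarray(u0, dtype=float)
    for _ in range(n_steps):
        k1 = f(t, u)
        k2 = f(t + h / 2, u + h / 2 * k1)
        k3 = f(t + h / 2, u + h / 2 * k2)
        k4 = f(t + h, u + h * k3)
        u = u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return u

f = lambda t, u: np.array([u[1], -u[0]])   # y'' = -y as a first-order system

def endpoint(slope):
    """y(1) obtained by integrating from y(0) = 0 with trial slope y'(0)."""
    return rk4(f, [0.0, slope], 0.0, 1.0, 200)[0]

# shooting: secant iteration on the unknown initial slope y'(0)
s0, s1 = 0.0, 1.0
g0 = endpoint(s0) - 1.0
for _ in range(20):
    g1 = endpoint(s1) - 1.0
    if abs(g1) < 1e-12:
        break
    s0, s1, g0 = s1, s1 - g1 * (s1 - s0) / (g1 - g0), g1
slope = s1   # converges to 1/sin(1)
```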

  16. Direct parallel image reconstructions for spiral trajectories using GRAPPA.

    PubMed

    Heidemann, Robin M; Griswold, Mark A; Seiberlich, Nicole; Krüger, Gunnar; Kannengiesser, Stephan A R; Kiefer, Berthold; Wiggins, Graham; Wald, Lawrence L; Jakob, Peter M

    2006-08-01

    The use of spiral trajectories is an efficient way to cover a desired k-space partition in magnetic resonance imaging (MRI). Compared to conventional Cartesian k-space sampling, it allows faster acquisitions and results in a slight reduction of the high gradient demand in fast dynamic scans, such as in functional MRI (fMRI). However, spiral images are more susceptible to off-resonance effects that cause blurring artifacts and distortions of the point-spread function (PSF), and thereby degrade the image quality. Since off-resonance effects scale with the readout duration, the respective artifacts can be reduced by shortening the readout trajectory. Multishot experiments represent one approach to reduce these artifacts in spiral imaging, but result in longer scan times and potentially increased flow and motion artifacts. Parallel imaging methods are another promising approach to improve image quality through an increase in the acquisition speed. However, non-Cartesian parallel image reconstructions are known to be computationally time-consuming, which is prohibitive for clinical applications. In this study a new and fast approach for parallel image reconstructions for spiral imaging based on the generalized autocalibrating partially parallel acquisitions (GRAPPA) methodology is presented. With this approach the computational burden is reduced such that it becomes comparable to that needed in accelerated Cartesian procedures. The respective spiral images with two- to eightfold acceleration clearly benefit from the advantages of parallel imaging, such as enabling parallel MRI single-shot spiral imaging with the off-resonance behavior of multishot acquisitions. Copyright 2006 Wiley-Liss, Inc.
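
    The GRAPPA principle referenced throughout this entry, fitting a k-space kernel from autocalibration (ACS) lines and using it to synthesize skipped lines, can be shown in a 1D Cartesian toy. Idealised harmonic coil sensitivities are assumed so that an exact kernel exists; real implementations fit 2D kernels to measured multi-coil data (and the paper's contribution is extending this efficiently to spiral trajectories):

```python
import numpy as np

N, nc, R = 64, 4, 2
y = np.arange(N)
# idealised harmonic coil sensitivities (assumption for this toy)
C = np.exp(2j * np.pi * np.outer(np.arange(nc), y) / N)
rng = np.random.default_rng(2)
obj = rng.normal(size=N) + 1j * rng.normal(size=N)
full = np.fft.fft(C * obj, axis=1)       # (nc, N) fully sampled k-space

acs = np.arange(24, 40)                  # autocalibration region, fully sampled

def sources(k):
    """Source vector for target line k: both acquired neighbours
    (k-1, k+1) in every coil."""
    return np.concatenate([full[:, (k - 1) % N], full[:, (k + 1) % N]])

# fit GRAPPA weights from the ACS: sources -> missing-parity target lines
targets = [k for k in acs if k % R == 1]
A = np.array([sources(k) for k in targets])
W = np.linalg.lstsq(A, full[:, targets].T, rcond=None)[0]   # (2*nc, nc)

# apply the kernel to synthesize the skipped lines outside the ACS
recon = full.copy()
for k in range(1, N, R):
    if k not in acs:
        recon[:, k] = sources(k) @ W
```

    Because the fitting and application are simple linear combinations of acquired k-space samples, the reconstruction cost stays low, which is the property the paper exploits to make spiral parallel imaging computationally comparable to accelerated Cartesian reconstructions.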

  17. Alternative donor transplantation after reduced intensity conditioning: results of parallel phase 2 trials using partially HLA-mismatched related bone marrow or unrelated double umbilical cord blood grafts

    PubMed Central

    Carter, Shelly L.; Karanes, Chatchada; Costa, Luciano J.; Wu, Juan; Devine, Steven M.; Wingard, John R.; Aljitawi, Omar S.; Cutler, Corey S.; Jagasia, Madan H.; Ballen, Karen K.; Eapen, Mary; O'Donnell, Paul V.

    2011-01-01

    The Blood and Marrow Transplant Clinical Trials Network conducted 2 parallel multicenter phase 2 trials for individuals with leukemia or lymphoma and no suitable related donor. Reduced intensity conditioning (RIC) was used with either unrelated double umbilical cord blood (dUCB) or HLA-haploidentical related donor bone marrow (Haplo-marrow) transplantation. For both trials, the transplantation conditioning regimen incorporated cyclophosphamide, fludarabine, and 200 cGy of total body irradiation. The 1-year probabilities of overall and progression-free survival were 54% and 46%, respectively, after dUCB transplantation (n = 50) and 62% and 48%, respectively, after Haplo-marrow transplantation (n = 50). The day +56 cumulative incidence of neutrophil recovery was 94% after dUCB and 96% after Haplo-marrow transplantation. The 100-day cumulative incidence of grade II-IV acute GVHD was 40% after dUCB and 32% after Haplo-marrow transplantation. The 1-year cumulative incidences of nonrelapse mortality and relapse after dUCB transplantation were 24% and 31%, respectively, with corresponding results of 7% and 45%, respectively, after Haplo-marrow transplantation. These multicenter studies confirm the utility of dUCB and Haplo-marrow as alternative donor sources and set the stage for a multicenter randomized clinical trial to assess the relative efficacy of these 2 strategies. The trials are registered at www.clinicaltrials.gov under NCT00864227 (BMT CTN 0604) and NCT00849147 (BMT CTN 0603). PMID:21527516

  18. 3D time-of-flight MR angiography of the intracranial vessels: optimization of the technique with water excitation, parallel acquisition, eight-channel phased-array head coil and low-dose contrast administration.

    PubMed

    Ozsarlak, O; Van Goethem, J W; Parizel, P M

    2004-11-01

    The aim of this study is threefold: to compare the eight-channel phased-array and standard circularly polarized (CP) head coils in the visualisation of the intracranial vessels, to compare three-dimensional (3D) time-of-flight (TOF) MR angiography (MRA) techniques, and to define the effects of parallel imaging in 3D TOF MRA. Fifteen healthy volunteers underwent 3D TOF MRA of the intracranial vessels using eight-channel phased-array and CP standard head coils. The following MRA techniques were obtained for each volunteer: (1) conventional 3D TOF MRA with magnetization transfer; (2) 3D TOF MRA with water excitation for background suppression; and (3) low-dose (0.5 ml) gadolinium-enhanced 3D TOF MRA with water excitation. The results demonstrate that water excitation is a valuable background suppression technique, especially when applied with an eight-channel phased-array head coil. For the central and proximal portions of the intracranial arteries, unenhanced TOF MRA with water excitation was the best technique. Low-dose contrast-enhanced TOF MRA using an eight-channel phased-array head coil is superior to the standard CP head coil in the evaluation of distal branches. Parallel imaging with an acceleration factor of two allows an important time gain without a significant decrease in vessel evaluation. Water excitation allows better background suppression, especially around the orbits and at the periphery, compared to conventional acquisitions.

  19. Parallel computers

    SciTech Connect

    Treleaven, P.

    1989-01-01

    This book presents an introduction to object-oriented, functional, and logic parallel computing on which the fifth generation of computer systems will be based. Coverage includes concepts for parallel computing languages, a parallel object-oriented system (DOOM) and its language (POOL), an object-oriented multilevel VLSI simulator using POOL, and implementation of lazy functional languages on parallel architectures.

  20. Parallel Reconstruction Using Null Operations (PRUNO)

    PubMed Central

    Zhang, Jian; Liu, Chunlei; Moseley, Michael E.

    2011-01-01

    A novel iterative k-space data-driven technique, namely Parallel Reconstruction Using Null Operations (PRUNO), is presented for parallel imaging reconstruction. In PRUNO, both data calibration and image reconstruction are formulated as linear algebra problems based on a generalized system model. An optimal data calibration strategy using singular value decomposition (SVD) is demonstrated, and an iterative conjugate-gradient approach is proposed to efficiently solve for the missing k-space samples during reconstruction. With its generalized formulation and precise mathematical model, PRUNO reconstruction yields good accuracy, flexibility, and stability. Both computer simulations and in vivo studies have shown that PRUNO produces much better reconstruction quality than generalized autocalibrating partially parallel acquisition (GRAPPA), especially at high acceleration rates. With the aid of PRUNO reconstruction, highly accelerated parallel imaging can be performed with good image quality. For example, we have performed successful PRUNO reconstruction at a reduction factor of 6 (effective factor of 4.44) with 8 coils and only a few autocalibration signal (ACS) lines. PMID:21604290
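    The calibration half of PRUNO can be sketched compactly: sliding multi-coil patches of the ACS data are stacked into a calibration matrix, and the right singular vectors with near-zero singular values are the annihilating "null" filters that the reconstruction later enforces on the missing samples. A sketch on assumed synthetic data (the patch size, ACS region, and singular-value threshold below are illustrative, not the paper's values):

    ```python
    import numpy as np

    N, n_coils, k = 32, 4, 3                 # grid, coils, filter size (assumed)
    y, x = np.mgrid[0:N, 0:N] / N
    img = np.exp(-((x - .5) ** 2 + (y - .5) ** 2) / .05)      # synthetic object
    sens = np.stack([np.exp(-((y - c / (n_coils - 1)) ** 2) / .3)
                     for c in range(n_coils)])                # smooth coil maps
    ks = np.fft.fftshift(np.fft.fft2(sens * img), axes=(-2, -1))

    # Calibration: stack vectorised multi-coil ACS patches into a matrix A.
    # Right singular vectors with tiny singular values are "null" filters:
    # A @ v ~= 0 for any k-space data consistent with these coils.
    acs = ks[:, 11:21, 11:21]
    A = np.array([acs[:, i:i + k, j:j + k].ravel()
                  for i in range(acs.shape[1] - k + 1)
                  for j in range(acs.shape[2] - k + 1)])
    _, s, Vh = np.linalg.svd(A, full_matrices=False)
    null = Vh[s < 0.05 * s[0]]               # null filters (threshold assumed)
    print(f"{len(null)} null filters out of {len(s)}")
    ```

    In the full method, the stacked null filters define a linear operator whose normal equations are solved for the missing samples by conjugate gradients.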

  1. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Painter, J.; Hansen, C.

    1996-10-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the M.

  2. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
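    The data-decomposition and image-assembly concepts surveyed above can be illustrated with a toy sort-last pipeline: each worker renders its share of the primitives into a private colour/depth buffer, and the partial images are then merged with a per-pixel depth test. A sketch in which a single-pixel "renderer" and a sequential merge stand in for real rasterisation and for compositing schedules such as binary-swap:

    ```python
    import numpy as np

    H = W = 8
    def render(spheres):                  # toy renderer: one pixel per "sphere"
        color = np.zeros((H, W)); depth = np.full((H, W), np.inf)
        for x, y, z, c in spheres:
            if z < depth[y, x]:
                depth[y, x] = z; color[y, x] = c
        return color, depth

    def composite(parts):                 # depth-test merge of partial images
        color = np.zeros((H, W)); depth = np.full((H, W), np.inf)
        for col, d in parts:
            win = d < depth               # pixels where this part is nearer
            color[win] = col[win]; depth[win] = d[win]
        return color

    spheres = [(1, 1, 5.0, 10), (1, 1, 2.0, 20), (3, 4, 1.0, 30)]
    parts = [render(spheres[i::2]) for i in range(2)]   # data split, 2 workers
    final = composite(parts)
    print(final[1, 1])                    # nearest sphere at that pixel wins
    ```

    Because the depth-test merge is associative, the partial images can be combined in any order, which is what lets real systems composite them in parallel.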

  3. Parallel computation

    NASA Astrophysics Data System (ADS)

    Huberman, Bernardo A.

    1989-11-01

    This paper reviews three different aspects of parallel computation which are useful for physics. The first part deals with special architectures for parallel computing (SIMD and MIMD machines) and their differences, with examples of their uses. The second section discusses the speedup that can be achieved in parallel computation and the constraints generated by the issues of communication and synchrony. The third part describes computation by distributed networks of powerful workstations without global controls and the issues involved in understanding their behavior.

  4. Portfolio Acquisition

    DTIC Science & Technology

    2015-05-14

    Portfolio Acquisition. Presented at the NPS Acquisition Research Symposium, 14 May 2015 (The MITRE Corporation). Concept: elevate acquisition elements up to a portfolio structure for speed.

  5. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Hansen, C.; Painter, J.; de Verdiere, G.C.

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

  6. Three-way analysis of the UPLC-PDA dataset for the multicomponent quantitation of hydrochlorothiazide and olmesartan medoxomil in tablets by parallel factor analysis and three-way partial least squares.

    PubMed

    Dinç, Erdal; Ertekin, Zehra Ceren

    2016-01-01

    An application of parallel factor analysis (PARAFAC) and three-way partial least squares (3W-PLS1) regression models to ultra-performance liquid chromatography-photodiode array detection (UPLC-PDA) data with co-eluted peaks in the same wavelength and time regions was described for the multicomponent quantitation of hydrochlorothiazide (HCT) and olmesartan medoxomil (OLM) in tablets. A three-way dataset of HCT and OLM in their binary mixtures containing telmisartan (IS) as an internal standard was recorded with a UPLC-PDA instrument. Firstly, the PARAFAC algorithm was applied to decompose the three-way UPLC-PDA data into chromatographic, spectral and concentration profiles to quantify the compounds of interest. Secondly, the 3W-PLS1 approach decomposed the tensor of three-way UPLC-PDA data into a set of triads to build a 3W-PLS1 regression for the analysis of the same compounds in samples. In the regression and prediction steps, the applicability and validity of the PARAFAC and 3W-PLS1 models were checked by analyzing synthetic mixture samples, inter-day and intra-day samples, and standard addition samples containing HCT and OLM. The two three-way analysis methods, PARAFAC and 3W-PLS1, were successfully applied to the quantitative estimation of the solid dosage form containing HCT and OLM. Regression and prediction results from the three-way analysis were compared with those obtained by a traditional UPLC method.
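    PARAFAC models a three-way array as a sum of rank-one triads (here, chromatographic x spectral x concentration profiles). A minimal alternating-least-squares sketch on exact synthetic data (the dimensions and rank below are illustrative, not the UPLC-PDA values):

    ```python
    import numpy as np

    def khatri_rao(A, B):                     # column-wise Kronecker product
        return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

    def parafac(T, rank, iters=200, seed=0):
        """Rank-`rank` PARAFAC/CP decomposition of a 3-way tensor by ALS."""
        I, J, K = T.shape
        rng = np.random.default_rng(seed)
        A, B, C = (rng.standard_normal((n, rank)) for n in (I, J, K))
        T0 = T.reshape(I, -1)                 # mode-0 unfolding
        T1 = T.transpose(1, 0, 2).reshape(J, -1)
        T2 = T.transpose(2, 0, 1).reshape(K, -1)
        for _ in range(iters):                # alternating least squares
            A = T0 @ np.linalg.pinv(khatri_rao(B, C).T)
            B = T1 @ np.linalg.pinv(khatri_rao(A, C).T)
            C = T2 @ np.linalg.pinv(khatri_rao(A, B).T)
        return A, B, C

    # Exact trilinear data (stand-in for elution x spectrum x concentration).
    rng = np.random.default_rng(1)
    A0, B0, C0 = (rng.random((n, 2)) for n in (16, 8, 6))
    T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
    A, B, C = parafac(T, rank=2)
    err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - T) / np.linalg.norm(T)
    print("relative fit error:", err)
    ```

    The recovered concentration-mode factors are what a calibration like the one above regresses against known analyte concentrations.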

  7. Markedness and Second Language Acquisition.

    ERIC Educational Resources Information Center

    Gil-Byeon, Ja

    1999-01-01

    Discusses whether markedness is at work in second-language acquisition in the same way it is in first-language acquisition when Korean speakers learn English as a second language and English speakers learn Korean as a second language. Results are discussed in terms of no access to universal grammar, partial access to universal grammar, and access…

  8. Parallel grid population

    DOEpatents

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
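    The two-phase scheme described above maps directly to code: phase 1 routes each object to the grid portions it at least partially overlaps, and phase 2 has each processor populate its own portion with the objects routed to it. A 1-D sketch using a thread pool as a stand-in for the processors (interval objects and portion bounds are assumed for illustration):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    n = 4                                              # processors = grid portions
    objects = [(0.05, 0.30), (0.20, 0.55), (0.40, 0.45), (0.70, 0.99)]
    bounds = [(i / n, (i + 1) / n) for i in range(n)]  # 1-D grid portions

    def touched(obj):             # phase 1: which portions an object overlaps
        lo, hi = obj
        return [p for p, (a, b) in enumerate(bounds) if lo < b and hi > a]

    def populate(p, routed):      # phase 2: fill one portion with its objects
        return sorted(o for o, ps in routed if p in ps)

    with ThreadPoolExecutor(n) as pool:
        routed = list(zip(objects, pool.map(touched, objects)))          # phase 1
        grid = list(pool.map(lambda p: populate(p, routed), range(n)))   # phase 2

    print(grid[1])   # objects at least partially bounded by portion [0.25, 0.5)
    ```

    Both phases are embarrassingly parallel over their respective work items, which is the point of splitting the routing from the population.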

  9. Super-resolved Parallel MRI by Spatiotemporal Encoding

    PubMed Central

    Schmidt, Rita; Baishya, Bikash; Ben-Eliezer, Noam; Seginer, Amir; Frydman, Lucio

    2016-01-01

    Recent studies described an alternative “ultrafast” scanning method based on spatiotemporal (SPEN) principles. SPEN demonstrates numerous potential advantages over EPI-based alternatives, at no additional expense in experimental complexity. An important aspect that SPEN still needs to achieve for providing a competitive acquisition alternative entails exploiting parallel imaging algorithms, without compromising its proven capabilities. The present work introduces a combination of multi-band frequency-swept pulses simultaneously encoding multiple, partial fields-of-view, together with a new algorithm merging a Super-Resolved SPEN image reconstruction and SENSE multiple-receiving methods. The ensuing approach enables one to reduce both the excitation and acquisition times of ultrafast SPEN acquisitions by the customary acceleration factor R, without compromises in either the ensuing spatial resolution, SAR deposition, or the capability to operate in multi-slice mode. The performance of these new single-shot imaging sequences and their ancillary algorithms were explored on phantoms and human volunteers at 3T. The gains of the parallelized approach were particularly evident when dealing with heterogeneous systems subject to major T2/T2* effects, as is the case upon single-scan imaging near tissue/air interfaces. PMID:24120293

  10. Partial Acquisition of the Formal Operations.

    ERIC Educational Resources Information Center

    Greene, Anita-Louise

    Sixty adolescents, stratified by sex and grade level (i.e., 9th, 12th, and college sophomore) participated in an examination of Piaget's suggestion that the formal operations are prerequisite to the development of political idealism, abstract thought and future time perspective in adolescence. Analysis of the cognition data revealed that the…

  11. Parallel machines: Parallel machine languages

    SciTech Connect

    Iannucci, R.A. )

    1990-01-01

    This book presents a framework for understanding the tradeoffs between the conventional view and the dataflow view, with the objective of discovering the critical hardware structures which must be present in any scalable, general-purpose parallel computer to effectively tolerate latency and synchronization costs. The author presents an approach to scalable general-purpose parallel computation. Linguistic concerns, compiling issues, intermediate language issues, and hardware/technological constraints are presented as a combined approach to architectural development. This book presents the notion of a parallel machine language.

  12. HYPERCP data acquisition system

    SciTech Connect

    Kaplan, D.M.; Luebke, W.R.; Chakravorty, A.

    1997-12-31

    For the HyperCP experiment at Fermilab, we have assembled a data acquisition system that records on up to 45 Exabyte 8505 tape drives in parallel at up to 17 MB/s. During the beam spill, data are acquired from the front-end digitization systems at approximately 60 MB/s via five parallel data paths. The front-end systems achieve a typical readout deadtime of approximately 1 μs per event, allowing operation at a 75-kHz trigger rate with less than about 30% deadtime. Event building and tapewriting are handled by 15 Motorola MVME167 processors in 5 VME crates.

  13. Fast high-spatial-resolution MRI of the ankle with parallel imaging using GRAPPA at 3 T.

    PubMed

    Bauer, Jan Stefan; Banerjee, Suchandrima; Henning, Tobias D; Krug, Roland; Majumdar, Sharmila; Link, Thomas M

    2007-07-01

    The purpose of our study was to compare an autocalibrating parallel imaging technique at 3 T with standard acquisitions at 3 and 1.5 T for small-field-of-view imaging of the ankle. MRI of the ankle was performed in three fresh human cadaver specimens and three healthy volunteers. Axial and sagittal T1-weighted, axial fat-saturated T2-weighted, and coronal intermediate-weighted fast spin-echo sequences, as well as a fat-saturated spoiled gradient-echo sequence, were acquired at 1.5 and 3 T. At 3 T, reduced data sets were reconstructed using a generalized autocalibrating partially parallel acquisition (GRAPPA) technique, with a scan time reduction of approximately 44%. All images were assessed by two radiologists independently concerning image quality. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were measured in every data set. In the cadaver specimens, macroscopic findings after dissection served as a reference for the pathologic evaluation. SNR and CNR in the GRAPPA images were comparable to the standard acquisition at 3 T. The image quality was rated significantly higher at 3 T with both normal and parallel acquisition compared with 1.5 T. There was no significant difference in ligament and cartilage visualization or in image quality between standard and GRAPPA reconstruction at 3 T. Ankle abnormalities were better seen at 3 T than at 1.5 T for both normal and parallel acquisitions. Using higher field strength combined with parallel technique, MR images of the ankle were obtained with excellent diagnostic quality and a scan time reduction of about 44%. In addition, parallel imaging can provide more flexibility in protocol design.

  14. Parallel pipelining

    SciTech Connect

    Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

    1995-09-01

    In this paper the authors introduce the idea of parallel pipelining for water lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, to the pressure gradients required to restart a stopped system and to the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as in one larger pipe as N = (R/r)^α, where r and R are the radii of the small and large pipes, respectively, and α = 4 when the lubricating water flow is laminar or α = 19/7 when it is turbulent.
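    The pipe-count estimate above is a one-liner; a quick evaluation with illustrative radii (assumed values, not from the paper):

    ```python
    # Number of small pipes needed to match the oil flux of one large pipe,
    # N = (R/r)**alpha, with alpha = 4 for laminar lubricating water flow
    # and alpha = 19/7 for turbulent flow. The radii are illustrative only.
    R_large, r_small = 0.5, 0.1           # radii in metres (assumed values)
    for alpha, regime in [(4, "laminar"), (19 / 7, "turbulent")]:
        N_pipes = (R_large / r_small) ** alpha
        print(f"{regime}: N = {N_pipes:.1f}")
    ```

    The much smaller turbulent exponent means far fewer small pipes are needed when the water annulus is turbulent, for the same radius ratio.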

  15. 48 CFR 219.502-3 - Partial set-asides.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Partial set-asides. (c)(1) If the North American Industry Classification System Industry Subsector of the... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Partial set-asides. 219.502-3 Section 219.502-3 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM...

  16. 48 CFR 219.502-3 - Partial set-asides.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Partial set-asides. (c)(1) If the North American Industry Classification System Industry Subsector of the... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Partial set-asides. 219.502-3 Section 219.502-3 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM...

  17. 48 CFR 219.502-3 - Partial set-asides.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Partial set-asides. (c)(1) If the North American Industry Classification System Industry Subsector of the... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Partial set-asides. 219.502-3 Section 219.502-3 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM...

  18. 48 CFR 219.502-3 - Partial set-asides.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Partial set-asides. (c)(1) If the North American Industry Classification System Industry Subsector of the... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Partial set-asides. 219.502-3 Section 219.502-3 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM...

  19. 48 CFR 219.502-3 - Partial set-asides.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Partial set-asides. (c)(1) If the North American Industry Classification System Industry Subsector of the... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Partial set-asides. 219.502-3 Section 219.502-3 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM...

  20. 48 CFR 49.109-5 - Partial settlements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Partial settlements. 49.109-5 Section 49.109-5 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT TERMINATION OF CONTRACTS General Principles 49.109-5 Partial settlements. The TCO should attempt...

  1. 48 CFR 49.112-1 - Partial payments.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Partial payments. 49.112-1 Section 49.112-1 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT TERMINATION OF CONTRACTS General Principles 49.112-1 Partial payments. (a) General. If the contract authorizes...

  2. Second Language Acquisition: Possible Insights from Studies on How Birds Acquire Song.

    ERIC Educational Resources Information Center

    Neapolitan, Denise M.; And Others

    1988-01-01

    Reviews research that demonstrates parallels between general linguistic and cognitive processes in human language acquisition and avian acquisition of song and discusses how such research may provide new insights into the processes of second-language acquisition. (Author/CB)

  3. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage

  4. Acquisition Policy

    ERIC Educational Resources Information Center

    De Vore, Helen L.

    1970-01-01

    A policy to insure acquisition of primary international libraries' collections for a library system pertaining to the environmental sciences was prepared by a newly formed Technical Processes Section, Environmental Science Services Administration (ESSA). (Author/NH)

  5. SSC/BCD data acquisition system proposal

    SciTech Connect

    Barsotti, E.; Bowden, M.; Swoboda, C.

    1989-04-01

    The proposed new data acquisition system architecture carries event fragments from the detector over fiber optics to a parallel event-building switch. The parallel event-building switch concept, taken from the telephone communications industry, along with expected technology improvements in fiber-optic data transmission speeds over the next few years, should allow data acquisition system rates to increase dramatically and exceed the rates needed for the SSC. This report briefly describes the switch architecture and fiber optics for an SSC data acquisition system.

  6. A survey of parallel programming tools

    NASA Technical Reports Server (NTRS)

    Cheng, Doreen Y.

    1991-01-01

    This survey examines 39 parallel programming tools. Focus is placed on those tool capabilities needed for parallel scientific programming rather than for general computer science. The tools are classified with the current and future needs of the Numerical Aerodynamic Simulator (NAS) in mind: existing and anticipated NAS supercomputers and workstations; operating systems; programming languages; and applications. They are divided into four categories: suggested acquisitions; tools already brought in; tools worth tracking; and tools eliminated from further consideration at this time.

  7. INVITED TOPICAL REVIEW: Parallel magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Larkman, David J.; Nunes, Rita G.

    2007-04-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts are shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed.
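    Of the reconstruction family reviewed above, SENSE is the most compact to sketch: for Cartesian sampling at acceleration R, each pixel of the aliased image is the sensitivity-weighted sum of R object pixels spaced FOV/R apart, so unfolding reduces to a small linear solve per pixel. A minimal 1-D sketch with assumed Gaussian coil sensitivities and a random phantom:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N, n_coils, R = 64, 4, 2
    y = np.arange(N) / N
    obj = rng.random(N)                                   # 1-D phantom (assumed)
    sens = np.stack([np.exp(-((y - c / 3) ** 2) / .1) for c in range(n_coils)])

    # Sampling every R-th k-space line folds the FOV: each aliased pixel is
    # the sensitivity-weighted sum of R object pixels spaced N/R apart.
    folded = np.stack([(sens[c] * obj).reshape(R, N // R).sum(0)
                       for c in range(n_coils)])

    # SENSE unfolding: per aliased pixel, solve the n_coils-by-R linear system.
    recon = np.empty(N)
    for p in range(N // R):
        C = sens[:, p::N // R]            # coil sensitivities at aliased pixels
        recon[p::N // R] = np.linalg.lstsq(C, folded[:, p], rcond=None)[0]
    print(np.allclose(recon, obj))
    ```

    The conditioning of each small system is exactly what the review's g-factor quantifies: where the coil columns become nearly parallel, noise is amplified.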

  8. Mergers + acquisitions.

    PubMed

    Hoppszallern, Suzanna

    2002-05-01

    The hospital sector in 2001 led the health care field in mergers and acquisitions. Most deals involved a network augmenting its presence within a specific region or in a market adjacent to its primary service area. Analysts expect M&A activity to increase in 2002.

  9. Acquisition strategies

    SciTech Connect

    Zimmer, M.J.; Lynch, P.W. )

    1993-11-01

    Acquiring projects takes careful planning, research and consideration. Picking the right opportunities and avoiding the pitfalls will lead to a more valuable portfolio. This article describes the steps to take in evaluating an acquisition and what items need to be considered in an evaluation.

  10. Parallel processing of natural language

    SciTech Connect

    Chang, H.O.

    1986-01-01

    Two types of parallel natural language processing are studied in this work: (1) the parallelism between syntactic and nonsyntactic processing and (2) the parallelism within syntactic processing. It is recognized that a syntactic category can potentially be attached to more than one node in the syntactic tree of a sentence. Even if all the attachments are syntactically well-formed, nonsyntactic factors such as semantic and pragmatic considerations may require one particular attachment. Syntactic processing must synchronize and communicate with nonsyntactic processing. Two syntactic processing algorithms are proposed for use in a parallel environment: Earley's algorithm and the LR(k) algorithm. Conditions are identified to detect syntactic ambiguity and the algorithms are augmented accordingly. It is shown that by using nonsyntactic information during syntactic processing, backtracking can be reduced, and the performance of the syntactic processor is improved. For the second type of parallelism, it is recognized that one portion of a grammar can be isolated from the rest of the grammar and be processed by a separate processor. A partial grammar of a larger grammar is defined. Parallel syntactic processing is achieved by using two processors concurrently: the main processor (mp) and the auxiliary processor (ap).

  11. Parallel pivoting combined with parallel reduction

    NASA Technical Reports Server (NTRS)

    Alaghband, Gita

    1987-01-01

    Parallel algorithms for triangularization of large, sparse, and unsymmetric matrices are presented. The method combines the parallel reduction with a new parallel pivoting technique, control over generations of fill-ins and a check for numerical stability, all done in parallel with the work being distributed over the active processes. The parallel technique uses the compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz number of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds.
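    The pivot-selection heuristics described above can be sketched in a few lines: score each candidate nonzero by the Markowitz count (r_i - 1)(c_j - 1) to bound fill-in, then greedily retain a set of mutually compatible pivots (sharing no row or column), which can be eliminated in parallel. A sketch on a small assumed matrix; the paper's numerical-stability check and dynamic re-application per elimination stage are omitted:

    ```python
    import numpy as np

    A = np.array([[4., 1, 0, 0],
                  [0, 3, 0, 2],
                  [1, 0, 5, 0],
                  [0, 0, 1, 6]])
    nz = np.argwhere(A != 0)                      # candidate pivot positions
    r = (A != 0).sum(1)                           # nonzeros per row
    c = (A != 0).sum(0)                           # nonzeros per column
    # Markowitz count bounds the fill-in an elimination step can create.
    scored = sorted(nz, key=lambda ij: (r[ij[0]] - 1) * (c[ij[1]] - 1))

    pivots, used_r, used_c = [], set(), set()
    for i, j in scored:
        if i not in used_r and j not in used_c:   # compatibility: disjoint rows/cols
            pivots.append((int(i), int(j)))
            used_r.add(i); used_c.add(j)
    print(pivots)                                 # one parallel elimination set
    ```

    All pivots in the returned set touch disjoint rows and columns, so their elimination steps can proceed concurrently before the heuristic is re-applied to the reduced matrix.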

  12. Partial Tonsillectomy.

    PubMed

    Wong, Kevin; Levi, Jessica R

    2017-03-01

    Evaluate the content and readability of health information regarding partial tonsillectomy. A web search was performed using the term partial tonsillectomy in Google, Yahoo!, and Bing. The first 50 websites from each search were evaluated using HONcode standards for quality and content. Readability was assessed using the Flesch-Kincaid Grade Level (FKGL), Flesch Reading Ease, Gunning-Fog Index, Coleman-Liau Index, Automated Readability Index, and SMOG score. The Freeman-Halton extension of Fisher's exact test was used to compare categorical differences between engines. Less than half of the websites mentioned patient eligibility criteria (43.3%), referenced peer-reviewed literature (43.3%), or provided a procedure description (46.7%). Twenty-two websites (14.7%) were unrelated to partial tonsillectomy, and over half contained advertisements (52%). These findings were consistent across search engines and search terms. The mean FKGL was 11.6 ± 0.11, Gunning-Fog Index was 15.1 ± 0.13, Coleman-Liau Index was 14.6 ± 0.11, ARI was 12.9 ± 0.13, and SMOG grade was 14.0 ± 0.1. All readability levels exceeded the abilities of the average American adult. Current online information regarding partial tonsillectomy may not provide adequate information and may be written at a level too difficult for the average adult reader.
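    The readability indices used above are simple closed-form functions of word, sentence, and syllable counts; for example, Flesch-Kincaid Grade Level is 0.39*(words per sentence) + 11.8*(syllables per word) - 15.59. A sketch with a rough vowel-group syllable heuristic (real tools use dictionary-based counting, so scores will differ slightly):

    ```python
    import re

    def syllables(word):
        """Rough heuristic: count runs of vowels, at least one per word."""
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def fk_grade(text):
        """Flesch-Kincaid Grade Level from word/sentence/syllable counts."""
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z]+", text)
        syl = sum(syllables(w) for w in words)
        return 0.39 * len(words) / sentences + 11.8 * syl / len(words) - 15.59

    sample = ("Partial tonsillectomy removes most of the tonsil. "
              "The capsule is preserved.")
    print(round(fk_grade(sample), 1))
    ```

    A grade near the study's mean of 11.6 corresponds to late high-school reading level, well above the roughly 8th-grade ability of the average American adult.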

  13. 48 CFR 19.502-3 - Partial set-asides.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Partial set-asides. 19.502... SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 19.502-3 Partial set-asides. (a) The contracting officer shall set aside a portion of an acquisition, except for construction,...

  14. 48 CFR 19.502-3 - Partial set-asides.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Partial set-asides. 19.502... SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 19.502-3 Partial set-asides. (a) The contracting officer shall set aside a portion of an acquisition, except for construction,...

  15. 48 CFR 19.502-3 - Partial set-asides.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Partial set-asides. 19.502... SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 19.502-3 Partial set-asides. (a) The contracting officer shall set aside a portion of an acquisition, except for construction,...

  16. 48 CFR 19.502-3 - Partial set-asides.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Partial set-asides. 19.502... SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 19.502-3 Partial set-asides. (a) The contracting officer shall set aside a portion of an acquisition, except for construction,...

  17. 48 CFR 19.502-3 - Partial set-asides.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Partial set-asides. 19.502... SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 19.502-3 Partial set-asides. (a) The contracting officer shall set aside a portion of an acquisition, except for construction,...

  18. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  19. 48 CFR 49.208 - Equitable adjustment after partial termination.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Equitable adjustment after partial termination. 49.208 Section 49.208 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT TERMINATION OF CONTRACTS Additional Principles for Fixed-Price Contracts...

  20. 48 CFR 49.304 - Procedure for partial termination.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Procedure for partial termination. 49.304 Section 49.304 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT TERMINATION OF CONTRACTS Additional Principles for Cost-Reimbursement Contracts...

  1. Special parallel processing workshop

    SciTech Connect

    1994-12-01

This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts dealing with parallel processing.

  2. A cautionary note regarding drug and brain lesion studies that use swimming pool tasks: partial reinforcement impairs acquisition of place learning in a swimming pool but not on dry land.

    PubMed

    Gonzalez, C L; Kolb, B; Whishaw, I Q

    2000-07-01

    Spatial tasks are used widely in neurobiological studies because it is thought that they provide an unbiased assessment of the integrity of neural structures that mediate spatial learning. For example, in the Morris swimming pool place task, animals are required to locate a hidden platform in a swimming pool in relation to environmental cues. Treatments that result in an animal's failure to find the platform are assumed to reflect defects in the function of neural systems involved in spatial learning. The present study demonstrates, however, that an animal's reinforcement history can contribute to its spatial performance. Animals were trained in the Morris place task with the platform present on 100, 75 or 50% of trials. Relative to the 100% group, the 75% group was impaired in place acquisition, and the 50% group failed to learn. Even placing the 50% group animals onto the platform at the completion of an unsuccessful trial failed to improve acquisition. Animals trained to search for food on an identical dry maze problem were not affected by similar reinforcement schedules. The present findings demonstrate that the Morris swimming pool place task does not provide an unbiased assessment of spatial learning: A treatment effect may be confounded with reinforcement history. The results are discussed in relation to widespread applications of the Morris place task to neurobiological problems.

  3. Instrument Variables for Reducing Noise in Parallel MRI Reconstruction

    PubMed Central

    Lin, Hong

    2017-01-01

Generalized autocalibrating partially parallel acquisition (GRAPPA) has been a widely used parallel MRI technique. However, noise deteriorates the reconstructed image as the reduction factor increases, or even at low reduction factors for some noisy datasets. Noise, initially generated by the scanner, propagates noise-related errors during the fitting and interpolation procedures of GRAPPA and distorts the final reconstructed image quality. The basic idea we propose to improve GRAPPA is to remove noise from a system identification perspective. In this paper, we first analyze the GRAPPA noise problem from a noisy input-output system perspective; then, a new framework based on the errors-in-variables (EIV) model is developed for analyzing the noise generation mechanism in GRAPPA and designing a concrete method, instrument variables (IV) GRAPPA, to remove noise. The proposed EIV framework also opens the possibility that noise-reduced GRAPPA reconstruction could be achieved by existing methods that solve the EIV problem other than the IV method. Experimental results show that the proposed reconstruction algorithm can better remove the noise compared to conventional GRAPPA, as validated with both phantom and in vivo brain data. PMID:28197419
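The "fitting and interpolation procedures" this abstract refers to reduce, at their core, to a linear least-squares problem: finding kernel weights that predict skipped k-space samples from acquired neighboring samples across all coils. A minimal sketch on synthetic data follows; the neighborhood size, coil count, and noise level are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ACS fit equations: each row of `src` holds the flattened
# neighborhood of acquired samples (2 kx-neighbors x 3 ky-lines x 4 coils),
# and `tgt` holds the skipped samples in each coil that they should predict.
n_coils, n_fits, n_src = 4, 200, 2 * 3 * 4
true_w = (rng.standard_normal((n_src, n_coils))
          + 1j * rng.standard_normal((n_src, n_coils)))

src = (rng.standard_normal((n_fits, n_src))
       + 1j * rng.standard_normal((n_fits, n_src)))
noise = 0.01 * (rng.standard_normal((n_fits, n_coils))
                + 1j * rng.standard_normal((n_fits, n_coils)))
tgt = src @ true_w + noise          # noisy calibration data

# Calibration: least-squares solve for the GRAPPA-style kernel weights.
w, *_ = np.linalg.lstsq(src, tgt, rcond=None)

# Reconstruction step: the same weights synthesize unacquired samples.
est = src @ w
```

Because the calibration data themselves are noisy on both sides of the fit equations (the errors-in-variables situation the paper analyzes), plain least squares is biased, which is the motivation for the instrument-variables approach.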

  4. Sparsity-Promoting Calibration for GRAPPA Accelerated Parallel MRI Reconstruction

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K

    2013-01-01

    The amount of calibration data needed to produce images of adequate quality can prevent auto-calibrating parallel imaging reconstruction methods like Generalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) from achieving a high total acceleration factor. To improve the quality of calibration when the number of auto-calibration signal (ACS) lines is restricted, we propose a sparsity-promoting regularized calibration method that finds a GRAPPA kernel consistent with the ACS fit equations that yields jointly sparse reconstructed coil channel images. Several experiments evaluate the performance of the proposed method relative to un-regularized and existing regularized calibration methods for both low-quality and underdetermined fits from the ACS lines. These experiments demonstrate that the proposed method, like other regularization methods, is capable of mitigating noise amplification, and in addition, the proposed method is particularly effective at minimizing coherent aliasing artifacts caused by poor kernel calibration in real data. Using the proposed method, we can increase the total achievable acceleration while reducing degradation of the reconstructed image better than existing regularized calibration methods. PMID:23584259
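The joint-sparsity regularizer proposed in this paper is more involved, but the general idea of regularizing an ill-conditioned or underdetermined ACS fit can be illustrated with plain Tikhonov (ridge) regularization. This sketch is for intuition only and is not the paper's method; all names are hypothetical.

```python
import numpy as np

def calibrate_tikhonov(src, tgt, lam=1e-2):
    """Ridge-regularized kernel calibration:
        w = argmin ||src @ w - tgt||^2 + lam * ||w||^2
    solved via the regularized normal equations. The penalty keeps the
    kernel weights small when the ACS fit is poorly conditioned."""
    A = src.conj().T @ src + lam * np.eye(src.shape[1])
    return np.linalg.solve(A, src.conj().T @ tgt)
```

As lam → 0 this recovers the unregularized least-squares kernel; larger lam trades fit accuracy for robustness to noisy or scarce ACS lines.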

  5. 48 CFR 49.603-2 - Fixed-price contracts-partial termination.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Fixed-price contracts-partial termination. 49.603-2 Section 49.603-2 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT TERMINATION OF CONTRACTS Contract Termination Forms and Formats 49.603-2 Fixed-price contracts—partial termination. ...

  6. Iterative algorithms for large sparse linear systems on parallel computers

    NASA Technical Reports Server (NTRS)

    Adams, L. M.

    1982-01-01

Algorithms for assembling in parallel the sparse system of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering, are developed. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed and results of this model for the algorithms are given.
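Stationary iterations such as Jacobi are attractive on parallel machines because each of the n component updates reads only the previous iterate, so all updates can execute simultaneously. A sequential numpy sketch on a 1-D model problem, with vectorization standing in for the parallel update:

```python
import numpy as np

def jacobi(A, b, iters=5000):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k).
    Every component update is independent of the others within a sweep,
    which is what makes the method embarrassingly parallel."""
    D = np.diag(A)            # diagonal of A
    R = A - np.diag(D)        # off-diagonal remainder
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / D   # all n updates at once
    return x

# 1-D finite-difference Laplacian: tridiagonal, symmetric positive definite
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = jacobi(A, b)
```

For this class of matrices Jacobi converges, though slowly, which is why it is typically used as a smoother or preconditioner rather than a standalone solver.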

  7. Parallel rendering techniques for massively parallel visualization

    SciTech Connect

    Hansen, C.; Krogh, M.; Painter, J.

    1995-07-01

As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications, rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper describes recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques. It presents rendering algorithms, developed for massively parallel processors (MPPs), for polygons, spheres, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  8. Impact of Reduced k-Space Acquisition on Pathologic Detectability for Volumetric MR Spectroscopic Imaging

    PubMed Central

    Sabati, Mohammad; Zhan, Jiping; Govind, Varan; Arheart, Kristopher L.; Maudsley, Andrew A.

    2013-01-01

Purpose To assess the impact of accelerated acquisitions on the spectral quality of volumetric MR spectroscopic imaging (MRSI) and to evaluate their ability to detect metabolic changes with mild injury. Materials and Methods The implementation of a generalized autocalibrating partially parallel acquisition (GRAPPA) method for a high-resolution whole-brain echo planar SI (EPSI) sequence is first described, and the spectral accuracy of the GRAPPA-EPSI method is investigated using lobar and voxel-based analyses for normal subjects and patients with mild traumatic brain injuries (mTBI). The performance of GRAPPA was compared with that of fully-encoded EPSI for 5 datasets collected from normal subjects at the same scanning session, as well as on 45 scans (20 normal subjects and 25 mTBI patients) for which the reduced k-space sampling was simulated. For comparison, a central k-space lower-resolution 3D-EPSI acquisition was also simulated. Differences in individual metabolites and metabolite ratio distributions of the mTBI group relative to those of age-matched control subjects were statistically evaluated using analyses divided into hemispheric brain lobes and tissue types. Results GRAPPA-EPSI with a 16-min scan time yielded robust results, similar to fully sampled 3D-EPSI acquisitions in terms of MRSI quantitation, spectral fitting, and accuracy, and was more accurate than the central k-space acquisition. Primary findings included high correlations (accuracy of 92.6%) between the GRAPPA and fully sampled results. Conclusion Although the reduced encoding method is associated with lower SNR that impacts the quality of spectral analysis, the use of the parallel imaging method can lead to the same diagnostic outcomes as the fully sampled data when using sensitivity-limited volumetric MRSI. PMID:23559504

  9. 48 CFR 1319.502-3 - Partial set-asides.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Partial set-asides. 1319... PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 1319.502-3 Partial set-asides. A partial set... and one small) will respond with offers unless the set-aside is authorized by the designee set...

  10. 48 CFR 819.502-3 - Partial set-asides.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Partial set-asides. 819... SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 819.502-3 Partial set-asides. When... particular procurement will be partially set aside for small business participation, the solicitation...

  11. 48 CFR 1319.502-3 - Partial set-asides.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Partial set-asides. 1319... PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 1319.502-3 Partial set-asides. A partial set... and one small) will respond with offers unless the set-aside is authorized by the designee set...

  12. 48 CFR 819.502-3 - Partial set-asides.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Partial set-asides. 819... SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 819.502-3 Partial set-asides. When... particular procurement will be partially set aside for small business participation, the solicitation...

  13. 48 CFR 1319.502-3 - Partial set-asides.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Partial set-asides. 1319... PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 1319.502-3 Partial set-asides. A partial set... and one small) will respond with offers unless the set-aside is authorized by the designee set...

  14. 48 CFR 819.502-3 - Partial set-asides.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Partial set-asides. 819... SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 819.502-3 Partial set-asides. When... particular procurement will be partially set aside for small business participation, the solicitation...

  15. 48 CFR 819.502-3 - Partial set-asides.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Partial set-asides. 819... SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 819.502-3 Partial set-asides. When... particular procurement will be partially set aside for small business participation, the solicitation...

  16. 48 CFR 1319.502-3 - Partial set-asides.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Partial set-asides. 1319... PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 1319.502-3 Partial set-asides. A partial set... and one small) will respond with offers unless the set-aside is authorized by the designee set...

  17. 48 CFR 1319.502-3 - Partial set-asides.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Partial set-asides. 1319... PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 1319.502-3 Partial set-asides. A partial set... and one small) will respond with offers unless the set-aside is authorized by the designee set...

  18. 48 CFR 819.502-3 - Partial set-asides.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Partial set-asides. 819... SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 819.502-3 Partial set-asides. When... particular procurement will be partially set aside for small business participation, the solicitation...

  19. On the parallel solution of parabolic equations

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Youcef

    1989-01-01

    Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Pade and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
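The partial-fraction device mentioned above can be written out explicitly. A hedged sketch of the standard identity (notation assumed here, not taken from the paper): if r(z) is a Padé or Chebyshev rational approximation to e^{-z} with simple poles θ_j and residues α_j, then

```latex
e^{-\tau A}\,u \;\approx\; r(\tau A)\,u
  \;=\; \alpha_0\,u \;+\; \sum_{j=1}^{p} \alpha_j \left(\tau A - \theta_j I\right)^{-1} u ,
```

so advancing one time step reduces to p shifted linear solves (τA − θ_j I) x_j = u that are mutually independent and can therefore be assigned to different processors.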

  20. Multilist Scheduling. A New Parallel Programming Model.

    DTIC Science & Technology

    1993-07-30

Applications discussed include fluid simulation [53]; differential equation solving, such as weather prediction [24, 25]; and digital circuit simulation, such as gate-level simulation [20]. Cited references include: [53] Johnson, C., Numerical Solutions of Partial Differential Equations by the Finite Element Method, Cambridge University Press, 1987; and Ortega, J. and Voigt, R., Solution of Partial Differential Equations on Vector and Parallel Computers, SIAM Review, vol. 27 (1985), pp. 149-240.

  1. Parallel-In-Time For Moving Meshes

    SciTech Connect

    Falgout, R. D.; Manteuffel, T. A.; Southworth, B.; Schroder, J. B.

    2016-02-04

With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.

  2. An Acquisition Guide for Executives

    EPA Pesticide Factsheets

This guide covers the following subjects: What is Acquisition?; Purpose and Primary Functions of the Agency’s Acquisition System; Key Organizations in Acquisitions; Legal Framework; Key Players in Acquisitions; Acquisition Process; and Acquisition Thresholds.

  3. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi; Cheng, Yung-Sung

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  4. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  5. Non-Cartesian Parallel Imaging Reconstruction

    PubMed Central

    Wright, Katherine L.; Hamilton, Jesse I.; Griswold, Mark A.; Gulani, Vikas; Seiberlich, Nicole

    2014-01-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be employed to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the non-homogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian GRAPPA, and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. PMID:24408499

  6. Parallel processing ITS

    SciTech Connect

    Fan, W.C.; Halbleib, J.A. Sr.

    1996-09-01

This report provides a users' guide for parallel processing ITS on a UNIX workstation network, a shared-memory multiprocessor, or a massively-parallel processor. The parallelized version of ITS is based on a master/slave model with message passing. Parallel issues such as random number generation, load balancing, and communication software are briefly discussed. Timing results for example problems are presented for demonstration purposes.

  7. Introduction to parallel programming

    SciTech Connect

    Brawer, S. )

    1989-01-01

    This book describes parallel programming and all the basic concepts illustrated by examples in a simplified FORTRAN. Concepts covered include: The parallel programming model; The creation of multiple processes; Memory sharing; Scheduling; Data dependencies. In addition, a number of parallelized applications are presented, including a discrete-time, discrete-event simulator, numerical integration, Gaussian elimination, and parallelized versions of the traveling salesman problem and the exploration of a maze.
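Numerical integration, one of the parallelized applications the book lists, decomposes naturally: partial sums over disjoint subintervals are independent and only combine at the end. A sketch in Python rather than the book's simplified FORTRAN, with a thread pool standing in for the book's multiple processes:

```python
from concurrent.futures import ThreadPoolExecutor
import math

def chunk_integral(f, a, b, n):
    """Midpoint rule on one subinterval; touches no shared state,
    so chunks are independent units of work."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def parallel_integral(f, a, b, n_chunks=4, n=10_000):
    # Split [a, b] into n_chunks subintervals, integrate each concurrently,
    # then reduce the partial results with a single sum.
    edges = [a + (b - a) * k / n_chunks for k in range(n_chunks + 1)]
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        parts = pool.map(lambda ab: chunk_integral(f, *ab, n),
                         zip(edges[:-1], edges[1:]))
        return sum(parts)
```

The final `sum(parts)` is the only synchronization point, mirroring the memory-sharing and scheduling concerns the book covers.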

  8. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  9. Research in parallel computing

    NASA Technical Reports Server (NTRS)

    Ortega, James M.; Henderson, Charles

    1994-01-01

    This report summarizes work on parallel computations for NASA Grant NAG-1-1529 for the period 1 Jan. - 30 June 1994. Short summaries on highly parallel preconditioners, target-specific parallel reductions, and simulation of delta-cache protocols are provided.

  10. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77 or with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.

  11. Parallel Atomistic Simulations

    SciTech Connect

    HEFFELFINGER,GRANT S.

    2000-01-18

Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods, such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination, are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.

  12. Syntax acquisition.

    PubMed

    Crain, Stephen; Thornton, Rosalind

    2012-03-01

Every normal child acquires a language in just a few years. By age 3 or 4, children have effectively become adults in their abilities to produce and understand endlessly many sentences in a variety of conversational contexts. There are two alternative accounts of the course of children's language development. These different perspectives can be traced back to the nature versus nurture debate about how knowledge is acquired in any cognitive domain. One perspective dates back to Plato's dialog 'The Meno'. In this dialog, the protagonist, Socrates, demonstrates to Meno, an aristocrat in Ancient Greece, that a young slave knows more about geometry than he could have learned from experience. By extension, Plato's Problem refers to any gap between experience and knowledge. How children fill in the gap in the case of language continues to be the subject of much controversy in cognitive science. Any model of language acquisition must address three factors, inter alia: 1. The knowledge children accrue; 2. The input children receive (often called the primary linguistic data); 3. The nonlinguistic capacities of children to form and test generalizations based on the input. According to the famous linguist Noam Chomsky, the main task of linguistics is to explain how children bridge the gap (Chomsky calls it a 'chasm') between what they come to know about language, and what they could have learned from experience, even given optimistic assumptions about their cognitive abilities. Proponents of the alternative 'nurture' approach accuse nativists like Chomsky of overestimating the complexity of what children learn, underestimating the data children have to work with, and manifesting undue pessimism about children's abilities to extract information based on the input. The modern 'nurture' approach is often referred to as the usage-based account. We discuss the usage-based account first, and then the nativist account. After that, we report and discuss the findings of several

  13. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the

  14. Signal Acquisition Using AXIe

    NASA Astrophysics Data System (ADS)

    Narciso, Steven J.

    2011-08-01

An emerging test and measurement standard called AXIe, AdvancedTCA extensions for Instrumentation, is expected to find wide acceptance within the Physics community as it offers many benefits to applications including shock, plasma, particle and nuclear physics. It is expected that many COTS (commercial off-the-shelf) signal conditioning, acquisition and processing modules will become available from a range of different suppliers. AXIe uses AdvancedTCA® as its basis, but then leverages test and measurement industry standards such as PXI, IVI, and LXI to facilitate cooperation and plug-and-play interoperability between COTS instrument suppliers. AXIe's large board footprint and power allows high density in a 19" rack, enabling the development of high-performance signal conditioning, analog-to-digital conversion, and data processing, while offering the channel count scalability inherent in modular systems. Synchronization between modules is flexible and provided by two triggering structures: a parallel trigger bus, and radially-distributed, time-matched point-to-point trigger lines. Inter-module communication is also provided with an adjacent-module local bus allowing data transfer at up to 600 Gbits/s in each direction, for example between a front-end digitizer and a DSP. AXIe allows embedding high performance computing, and a range of COTS AdvancedTCA® computer blades are currently available that provide low cost alternatives to the development of custom signal processing modules. The availability of both LAN and PCI Express allows interconnection between modules, as well as industry-standard high-performance data paths to external host computer systems. AXIe delivers a powerful environment for custom module development. As in the case of VXIbus and PXI before it, commercial development kits are expected to be available. This paper will give an overview of the architectural elements of AXIe 1.0, the compatibility model with AdvancedTCA, and signal acquisition performance of many

  15. Parallel Acquisition of Awareness and Differential Delay Eyeblink Conditioning

    ERIC Educational Resources Information Center

    Weidemann, Gabrielle; Antees, Cassandra

    2012-01-01

    There is considerable debate about whether differential delay eyeblink conditioning can be acquired without awareness of the stimulus contingencies. Previous investigations of the relationship between differential-delay eyeblink conditioning and awareness of the stimulus contingencies have assessed awareness after the conditioning session was…

  17. Visualization and Tracking of Parallel CFD Simulations

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Kremenetsky, Mark

    1995-01-01

We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS), runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation, which is running AVS, is handled by CM/AVS. Partitioning of the visualization task, between the CM-5 and the workstation, can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered by programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate → store → visualize' post-processing approach.

  18. Reducing acquisition time in clinical MRI by data undersampling and compressed sensing reconstruction.

    PubMed

    Hollingsworth, Kieren Grant

    2015-11-07

MRI is often the most sensitive or appropriate technique for important measurements in clinical diagnosis and research, but lengthy acquisition times limit its use due to cost and considerations of patient comfort and compliance. Once an image field of view and resolution are chosen, the minimum scan acquisition time is normally fixed by the amount of raw data that must be acquired to meet the Nyquist criterion. Recently, there has been research interest in using the theory of compressed sensing (CS) in MR imaging to reduce scan acquisition times. The theory argues that if our target MR image is sparse, having signal information in only a small proportion of pixels (like an angiogram), or if the image can be mathematically transformed to be sparse, then it is possible to use that sparsity to recover a high definition image from substantially less acquired data. This review starts by considering methods of k-space undersampling which have already been incorporated into routine clinical imaging (partial Fourier imaging and parallel imaging), and then explains the basis of using compressed sensing in MRI. The practical considerations of applying CS to MRI acquisitions are discussed, such as designing k-space undersampling schemes, optimizing adjustable parameters in reconstructions and exploiting the power of combined compressed sensing and parallel imaging (CS-PI). A selection of clinical applications that have used CS and CS-PI prospectively are considered. The review concludes by signposting other imaging acceleration techniques under present development, before closing with a consideration of the potential impact and obstacles to bringing compressed sensing into routine use in clinical MRI.
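The sparsity argument above can be made concrete with a minimal, hypothetical 1-D sketch (not a protocol from this review): a sparse signal is measured at a random subset of Fourier frequencies, and iterative soft-thresholding (ISTA), the simplest sparsity-promoting reconstruction, recovers it from fewer samples than the Nyquist criterion would require.

```python
import numpy as np

def soft_threshold(x, t):
    """Complex soft-thresholding: shrink magnitudes toward zero by t."""
    mag = np.abs(x)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * x, 0)

rng = np.random.default_rng(0)
n, m, k = 128, 48, 4            # signal length, measurements, sparsity

# Sparse ground truth (like an angiogram: few bright pixels).
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.uniform(1, 2, k)

# Undersampled unitary DFT: keep m of n frequency rows (random k-space lines).
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
rows = rng.choice(n, m, replace=False)
A = F[rows]
y = A @ x_true

# ISTA: gradient step on ||Ax - y||^2, then soft-threshold (L1 prior).
# Step size 1 is safe because A has orthonormal rows.
x = np.zeros(n, dtype=complex)
lam = 0.01
for _ in range(300):
    x = soft_threshold(x + A.conj().T @ (y - A @ x), lam)

print(np.max(np.abs(x.real - x_true)))   # small reconstruction error
```

The recovered signal matches the ground truth far beyond what zero-filling the missing frequencies would give, illustrating why sparsity permits sub-Nyquist sampling.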

  19. Investigating Second Language Acquisition.

    ERIC Educational Resources Information Center

    Jordens, Peter, Ed.; Lalleman, Josine, Ed.

    Essays in second language acquisition include: "The State of the Art in Second Language Acquisition Research" (Josine Lalleman); "Crosslinguistic Influence with Special Reference to the Acquisition of Grammar" (Michael Sharwood Smith); "Second Language Acquisition by Adult Immigrants: A Multiple Case Study of Turkish and…

  1. CMMI-Acquisition Update

    DTIC Science & Technology

    2006-05-04

Acquisition Module • Air Force Space and Missile Command (SMC) developed a CMMI-A for SMC programs • General Motors and the SEI have initiated a...representatives for the Advisory Group • General Motors/SEI Joint Project: The Initial CMMI-Acquisition • Project Goal: Develop an Acquisition Model that...Acquisition Verification Core Processes *based on initial CMMI-Acquisition model developed by General Motors/SEI • Pilot Participation • Pilot initial GM

  2. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital Forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. This report documents the implementation of the parallel digital forensics (PDF) infrastructure architecture and implementation.

  3. Languages for parallel architectures

    SciTech Connect

    Bakker, J.W.

    1989-01-01

This book presents mathematical methods for modelling parallel computer architectures, based on the results of ESPRIT's project 415 on computer languages for parallel architectures. Presented are investigations incorporating a wide variety of programming styles, including functional, logic, and object-oriented paradigms. Topics covered include Philips's parallel object-oriented language POOL, lazy-functional languages, the languages IDEAL, K-LEAF, FP2, and Petri-net semantics for the AADL language.

  4. Introduction to Parallel Computing

    DTIC Science & Technology

    1992-05-01

Topology C, Ada, C++, Data-parallel FORTRAN, 2D mesh of node boards, each node FORTRAN-90 (late 1992) board has 1 application processor Development Tools ...parallel machines become the wave of the present, tools are increasingly needed to assist programmers in creating parallel tasks and coordinating...their activities. Linda was designed to be such a tool. Linda was designed with three important goals in mind: to be portable, efficient, and easy to use

  5. Parallel Wolff Cluster Algorithms

    NASA Astrophysics Data System (ADS)

    Bae, S.; Ko, S. H.; Coddington, P. D.

    The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.
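For reference, the serial single-cluster update that the parallel implementations build on can be sketched as follows. This is the textbook Wolff step for the 2-D Ising model, not either of the paper's parallel versions; lattice size, temperature, and iteration count are illustrative choices.

```python
import numpy as np

def wolff_step(spins, beta, rng):
    """One Wolff single-cluster update on a 2-D Ising lattice (periodic)."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)        # bond-activation probability
    i, j = rng.integers(L), rng.integers(L)  # random seed site
    seed = spins[i, j]
    cluster = {(i, j)}
    stack = [(i, j)]
    while stack:                             # grow cluster over like spins
        x, y = stack.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nx % L, ny % L
            if (nx, ny) not in cluster and spins[nx, ny] == seed \
                    and rng.random() < p_add:
                cluster.add((nx, ny))
                stack.append((nx, ny))
    for x, y in cluster:                     # flip the whole cluster at once
        spins[x, y] = -seed
    return len(cluster)

rng = np.random.default_rng(1)
L, beta = 16, 0.6                            # below T_c (beta_c ~ 0.4407)
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(200):
    wolff_step(spins, beta, rng)
print(abs(spins.sum()) / L**2)               # ordered phase: well above zero
```

The irregular, data-dependent cluster shape visible in the loop is exactly what makes a parallel decomposition of this algorithm nontrivial.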

  6. Application Portable Parallel Library

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collection of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  8. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C³I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge base partitions indicate that significant speed increases, including superlinear in some cases, are possible.

  9. Parallel Algorithms and Patterns

    SciTech Connect

    Robey, Robert W.

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
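One of the patterns named above, the prefix scan, can be illustrated with the classic work-efficient (Blelloch) formulation. The sketch below runs sequentially, but every iteration of each inner `for` loop is independent, which is precisely what a parallel runtime exploits.

```python
import numpy as np

def exclusive_scan(a):
    """Work-efficient (Blelloch) exclusive prefix sum.

    Each inner `for` loop touches disjoint array slots, so its
    iterations could run concurrently across threads or processes.
    """
    x = np.asarray(a, dtype=np.int64).copy()
    n = len(x)
    assert n & (n - 1) == 0, "sketch assumes power-of-two length"

    # Up-sweep (reduce): build partial sums up a binary tree.
    d = 1
    while d < n:
        for i in range(0, n, 2 * d):          # independent: parallel-safe
            x[i + 2 * d - 1] += x[i + d - 1]
        d *= 2

    # Down-sweep: push prefixes back down the tree.
    x[n - 1] = 0
    d = n // 2
    while d >= 1:
        for i in range(0, n, 2 * d):          # independent: parallel-safe
            t = x[i + d - 1]
            x[i + d - 1] = x[i + 2 * d - 1]
            x[i + 2 * d - 1] += t
        d //= 2
    return x

print(exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))  # 0 3 4 11 11 15 16 22
```

Reductions follow the same tree shape as the up-sweep alone, which is why the presentation groups the two patterns together.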

  10. Color Vision Deficits and Literacy Acquisition.

    ERIC Educational Resources Information Center

    Hurley, Sandra Rollins

    1994-01-01

    Shows that color blindness, whether partial or total, inhibits literacy acquisition. Offers a case study of a third grader with impaired color vision. Presents a review of literature on the topic. Notes that people with color vision deficits are often unaware of the handicap. (RS)

  12. Acquisition of Three Word Knowledge Aspects through Reading

    ERIC Educational Resources Information Center

    Daskalovska, Nina

    2016-01-01

    A number of studies have shown that second or foreign language learners can acquire vocabulary through reading. The aim of the study was to investigate (a) the effects of reading an authentic novel on the acquisition of 3 aspects of word knowledge: spelling, meaning, and collocation; (b) the influence of reading on the acquisition of partial and…

  14. A high speed buffer for LV data acquisition

    NASA Technical Reports Server (NTRS)

    Cavone, Angelo A.; Sterlina, Patrick S.; Clemmons, James I., Jr.; Meyers, James F.

    1987-01-01

    The laser velocimeter (autocovariance) buffer interface is a data acquisition subsystem designed specifically for the acquisition of data from a laser velocimeter. The subsystem acquires data from up to six laser velocimeter components in parallel, measures the times between successive data points for each of the components, establishes and maintains a coincident condition between any two or three components, and acquires data from other instrumentation systems simultaneously with the laser velocimeter data points. The subsystem is designed to control the entire data acquisition process based on initial setup parameters obtained from a host computer and to be independent of the computer during the acquisition. On completion of the acquisition cycle, the interface transfers the contents of its memory to the host under direction of the host via a single 16-bit parallel DMA channel.
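The coincidence condition described above amounts to pairing timestamps from two component streams that fall within a time window. A hypothetical two-pointer sketch of that matching logic (illustrative only, not the actual buffer-interface hardware design):

```python
def coincident_pairs(ts_a, ts_b, window):
    """Pair timestamps from two sorted streams when they fall within a
    coincidence window; unmatched points are skipped."""
    pairs = []
    i = j = 0
    while i < len(ts_a) and j < len(ts_b):
        dt = ts_a[i] - ts_b[j]
        if abs(dt) <= window:
            pairs.append((ts_a[i], ts_b[j]))  # coincident measurement
            i += 1
            j += 1
        elif dt > 0:
            j += 1                            # b lags: advance b
        else:
            i += 1                            # a lags: advance a
    return pairs

print(coincident_pairs([10, 25, 40, 90], [12, 43, 70], 5))
# → [(10, 12), (40, 43)]
```

Because both streams are time-ordered, a single forward pass suffices, which is what makes such matching feasible at acquisition rates.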

  16. [First language acquisition research and theories of language acquisition].

    PubMed

    Miller, S; Jungheim, M; Ptok, M

    2014-04-01

In principle, a child can seemingly easily acquire any given language. First language acquisition follows a certain pattern which to some extent is found to be language independent. Since time immemorial, it has been of interest why children are able to acquire language so easily. Different disciplinary and methodological orientations addressing this question can be identified. A selective literature search in PubMed and Scopus was carried out and relevant monographs were considered. Different, partially overlapping phases can be distinguished in language acquisition research: whereas in ancient times, deprivation experiments were carried out to discover the "original human language", the era of diary studies began in the mid-19th century. From the mid-1920s onwards, behaviouristic paradigms dominated this field of research; interests were focussed on the determination of normal, average language acquisition. The subsequent linguistic period was strongly influenced by the nativist view of Chomsky and the constructivist concepts of Piaget. Speech comprehension, the role of speech input and the relevance of genetic disposition became the centre of attention. The interactionist concept led to a revival of the convergence theory according to Stern. Each of these four major theories--behaviourism, cognitivism, interactionism and nativism--has given valuable and unique impulses, but no single theory is universally accepted to provide an explanation of all aspects of language acquisition. Moreover, it can be critically questioned whether clinicians consciously refer to one of these theories in daily routine work and whether therapies are then based on this concept. It remains to be seen whether or not new theories of grammar, such as the so-called construction grammar (CxG), will eventually change the general concept of language acquisition.

  17. Parallel preconditioning techniques for sparse CG solvers

    SciTech Connect

    Basermann, A.; Reichel, B.; Schelthoff, C.

    1996-12-31

Conjugate gradient (CG) methods to solve sparse systems of linear equations play an important role in numerical methods for solving discretized partial differential equations. The large size and the condition of many technical or physical applications in this area result in the need for efficient parallelization and preconditioning techniques of the CG method. In particular for very ill-conditioned matrices, sophisticated preconditioners are necessary to obtain both acceptable convergence and accuracy of CG. Here, we investigate variants of polynomial and incomplete Cholesky preconditioners that markedly reduce the iterations of the simply diagonally scaled CG and are shown to be well suited for massively parallel machines.
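The "simply diagonally scaled CG" used as the baseline above is CG with a Jacobi preconditioner. A minimal serial sketch on a 1-D Poisson matrix (a standard discretized-PDE test case chosen here for illustration; the paper's polynomial and incomplete Cholesky preconditioners would replace the diagonal one):

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients with a diagonal (Jacobi)
    preconditioner: z = M^{-1} r is just an elementwise scaling."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1-D Poisson matrix: symmetric positive definite, tridiagonal.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
print(np.linalg.norm(A @ x - b))   # residual is tiny
```

Both the matrix-vector product and the diagonal scaling parallelize trivially; the harder preconditioners studied in the paper trade that simplicity for far fewer iterations.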

  18. Parallel and Distributed Computing.

    DTIC Science & Technology

    1986-12-12

program was devoted to parallel and distributed computing. Support for this part of the program was obtained from the present Army contract and a...Umesh Vazirani. A workshop on parallel and distributed computing was held from May 19 to May 23, 1986 and drew 141 participants. Keywords: Mathematical programming; Protocols; Randomized algorithms. (Author)

  19. Parallel Lisp simulator

    SciTech Connect

    Weening, J.S.

    1988-05-01

    CSIM is a simulator for parallel Lisp, based on a continuation passing interpreter. It models a shared-memory multiprocessor executing programs written in Common Lisp, extended with several primitives for creating and controlling processes. This paper describes the structure of the simulator, measures its performance, and gives an example of its use with a parallel Lisp program.

  20. Parallels in History.

    ERIC Educational Resources Information Center

    Mugleston, William F.

    2000-01-01

    Believes that by focusing on the recurrent situations and problems, or parallels, throughout history, students will understand the relevance of history to their own times and lives. Provides suggestions for parallels in history that may be introduced within lectures or as a means to class discussions. (CMK)

  1. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  2. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
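The scattered-decomposition idea can be illustrated with a segmented sieve: each segment of the integer range is sieved independently against a shared set of base primes, which is what maps naturally onto an ensemble machine. The sketch below runs the segments serially; assigning them to processors is the parallel step, and the segment size is an arbitrary illustrative choice.

```python
def small_primes(limit):
    """Plain Sieve of Eratosthenes for the base primes up to sqrt(n)."""
    sieve = [True] * limit
    out = []
    for p in range(2, limit):
        if sieve[p]:
            out.append(p)
            for m in range(p * p, limit, p):
                sieve[m] = False
    return out

def sieve_segment(lo, hi, base_primes):
    """Mark composites in [lo, hi) using the base primes. Each segment
    needs only the base primes, so segments are fully independent."""
    is_prime = [True] * (hi - lo)
    for p in base_primes:
        start = max(p * p, ((lo + p - 1) // p) * p)  # first multiple >= lo
        for m in range(start, hi, p):
            is_prime[m - lo] = False
    return [lo + i for i, v in enumerate(is_prime) if v]

def primes_upto(n, seg=64):
    base = small_primes(int(n ** 0.5) + 1)
    primes = []
    for lo in range(2, n, seg):          # each segment could be a processor
        primes.extend(sieve_segment(lo, min(lo + seg, n), base))
    return primes

print(len(primes_upto(1000)))   # 168 primes below 1000
```

Only the small base-prime table is shared, so communication stays constant as segments multiply, which is the property behind the near-linear speedups the abstract reports.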

  3. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  4. Language Acquisition without an Acquisition Device

    ERIC Educational Resources Information Center

    O'Grady, William

    2012-01-01

    Most explanatory work on first and second language learning assumes the primacy of the acquisition phenomenon itself, and a good deal of work has been devoted to the search for an "acquisition device" that is specific to humans, and perhaps even to language. I will consider the possibility that this strategy is misguided and that language…

  6. Partial breast brachytherapy

    MedlinePlus

    ... brachytherapy; Accelerated partial breast irradiation - brachytherapy; Partial breast radiation therapy - brachytherapy; Permanent breast seed implant; PBSI; Low-dose radiotherapy - breast; High-dose radiotherapy - breast; Electronic balloon ...

  7. Surface acquisition through virtual milling

    NASA Technical Reports Server (NTRS)

    Merriam, Marshal L.

    1993-01-01

    Surface acquisition deals with the reconstruction of three dimensional objects from a set of data points. The most straightforward techniques require human intervention, a time consuming proposition. It is desirable to develop a fully automated alternative. Such a method is proposed in this paper. It makes use of surface measurements obtained from a 3-D laser digitizer - an instrument which provides the (x,y,z) coordinates of surface data points from various viewpoints. These points are assembled into several partial surfaces using a visibility constraint and a 2-D triangulation technique. Reconstruction of the final object requires merging these partial surfaces. This is accomplished through a procedure that emulates milling, a standard machining operation. From a geometrical standpoint the problem reduces to constructing the intersection of two or more non-convex polyhedra.

  8. Correction for Eddy Current-Induced Echo-Shifting Effect in Partial-Fourier Diffusion Tensor Imaging.

    PubMed

    Truong, Trong-Kha; Song, Allen W; Chen, Nan-Kuei

    2015-01-01

In most diffusion tensor imaging (DTI) studies, images are acquired with either a partial-Fourier or a parallel partial-Fourier echo-planar imaging (EPI) sequence, in order to shorten the echo time and increase the signal-to-noise ratio (SNR). However, eddy currents induced by the diffusion-sensitizing gradients can often lead to a shift of the echo in k-space, resulting in three distinct types of artifacts in partial-Fourier DTI. Here, we present an improved DTI acquisition and reconstruction scheme, capable of generating high-quality and high-SNR DTI data without eddy current-induced artifacts. This new scheme consists of three components, respectively, addressing the three distinct types of artifacts. First, a k-space energy-anchored DTI sequence is designed to recover eddy current-induced signal loss (i.e., Type 1 artifact). Second, a multischeme partial-Fourier reconstruction is used to eliminate artificial signal elevation (i.e., Type 2 artifact) associated with the conventional partial-Fourier reconstruction. Third, a signal intensity correction is applied to remove artificial signal modulations due to eddy current-induced erroneous T2*-weighting (i.e., Type 3 artifact). These systematic improvements will greatly increase the consistency and accuracy of DTI measurements, expanding the utility of DTI in translational applications where quantitative robustness is much needed.
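The partial-Fourier principle these sequences exploit is that a real-valued image has conjugate-symmetric k-space, so roughly half the data can be synthesized rather than acquired. A minimal 1-D sketch of the zero-order idea (plain conjugate synthesis, not the paper's multischeme reconstruction; sizes are illustrative):

```python
import numpy as np

n = 64
img = np.zeros(n)
img[20:40] = 1.0                   # real-valued 1-D test "image"
k = np.fft.fft(img)                # full k-space, for simulation only

# Acquire a bit more than half of k-space (center overscan), then
# synthesize the missing high frequencies from Hermitian symmetry
# X[n - f] = conj(X[f]), which holds exactly for real-valued images.
overscan = 4
half = n // 2 + overscan
acquired = np.zeros(n, dtype=complex)
acquired[:half] = k[:half]                   # the measured lines
for f in range(1, n - half + 1):
    acquired[n - f] = np.conj(acquired[f])   # synthesized lines

recon = np.fft.ifft(acquired).real
print(np.max(np.abs(recon - img)))  # near machine precision here
```

Real scans violate the real-image assumption (phase from field inhomogeneity and, as above, eddy currents), which is exactly why the echo shift breaks naive partial-Fourier reconstruction and motivates the corrections the paper proposes.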

  9. Survival of the Partial Reinforcement Extinction Effect after Contextual Shifts

    ERIC Educational Resources Information Center

    Boughner, Robert L.; Papini, Mauricio R.

    2006-01-01

    The effects of contextual shifts on the partial reinforcement extinction effect (PREE) were studied in autoshaping with rats. Experiment 1 established that the two contexts used subsequently were easily discriminable and equally salient. In Experiment 2, independent groups of rats received acquisition training under partial reinforcement (PRF) or…

  11. 48 CFR 52.219-7 - Notice of Partial Small Business Set-Aside.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Business Set-Aside. 52.219-7 Section 52.219-7 Federal Acquisition Regulations System FEDERAL ACQUISITION... Clauses 52.219-7 Notice of Partial Small Business Set-Aside. As prescribed in 19.508(d), insert the following clause: Notice of Partial Small Business Set-Aside (JUN 2003) (a) Definitions. Small...

  12. EARLY SYNTACTIC ACQUISITION,

    DTIC Science & Technology

to the child. And finally, on this active view of language acquisition, it is maintained that sentences are used as data by the acquisition mechanism only when they are to some degree understood. (Author)

  13. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  14. Parallel adaptive wavelet collocation method for PDEs

    NASA Astrophysics Data System (ADS)

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.
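As background for the wavelet machinery, one level of the simplest wavelet transform (Haar) splits a signal into coarse averages and details; adaptive collocation methods refine the grid only where the detail coefficients are large. A minimal sketch, not the second-generation wavelets this paper actually uses:

```python
import numpy as np

def haar_forward(x):
    """One level of the (unnormalized) Haar transform: pairwise averages
    (coarse approximation) and half-differences (detail coefficients)."""
    a = (x[0::2] + x[1::2]) / 2.0
    d = (x[0::2] - x[1::2]) / 2.0
    return a, d

def haar_inverse(a, d):
    """Exact inverse: interleave a + d and a - d."""
    x = np.empty(2 * len(a))
    x[0::2] = a + d
    x[1::2] = a - d
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 5.0, 1.0])
a, d = haar_forward(x)
print(a)   # coarse:  [ 5. 11.  8.  3.]
print(d)   # details: [-1. -1.  0.  2.]
```

Where a detail is near zero (the third pair above), the signal is locally smooth and the fine-scale point can be dropped, which is the mechanism behind the adaptive grids in the abstract.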

  15. 48 CFR 49.603-2 - Fixed-price contracts-partial termination.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Fixed-price contracts-partial termination. 49.603-2 Section 49.603-2 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT TERMINATION OF CONTRACTS Contract Termination Forms and Formats 49.603-2 Fixed...

  16. 48 CFR 49.603-2 - Fixed-price contracts-partial termination.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Fixed-price contracts-partial termination. 49.603-2 Section 49.603-2 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT TERMINATION OF CONTRACTS Contract Termination Forms and Formats 49.603-2 Fixed...

  17. 48 CFR 49.603-2 - Fixed-price contracts-partial termination.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Fixed-price contracts-partial termination. 49.603-2 Section 49.603-2 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT TERMINATION OF CONTRACTS Contract Termination Forms and Formats 49.603-2 Fixed...

  18. Bilingual parallel programming

    SciTech Connect

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.
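
    The bilingual idea can be shown in miniature — a loose analogy, not the authors' PCN-based system: the high-level language carries the program's structure while a compiled low-level kernel does the hot-spot work (here NumPy's C routines stand in for hand-written C or Fortran):

```python
import numpy as np

def normalize_rows_highlevel(rows):
    """High-level driver: validation, control flow, composition live here."""
    out = []
    for row in rows:                       # clarity-first outer structure
        if row.size == 0:
            continue                       # skip degenerate inputs
        out.append(kernel_normalize(row))  # hot spot delegated downward
    return out

def kernel_normalize(v):
    """'Low-level component': vectorized C under the hood via NumPy."""
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

rows = [np.array([3.0, 4.0]), np.array([]), np.array([1.0, 0.0])]
print([r.tolist() for r in normalize_rows_highlevel(rows)])  # [[0.6, 0.8], [1.0, 0.0]]
```

The split mirrors the paper's claim: the expressive layer stays concise and portable, while performance comes from the small low-level components it composes.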

  19. A parallel Lanczos method for symmetric generalized eigenvalue problems

    SciTech Connect

    Wu, K.; Simon, H.D.

    1997-12-01

    The Lanczos algorithm is a very effective method for finding extreme eigenvalues of symmetric matrices. It requires fewer arithmetic operations than similar algorithms, such as the Arnoldi method. In this paper, the authors present their parallel version of the Lanczos method for the symmetric generalized eigenvalue problem, PLANSO. PLANSO is based on a sequential package called LANSO which implements the Lanczos algorithm with partial re-orthogonalization. It is portable to all parallel machines that support MPI and easy to interface with most parallel computing packages. Through numerical experiments, they demonstrate that it achieves similar parallel efficiency to PARPACK, but uses considerably less time.
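
    The Lanczos iteration at the core of LANSO/PLANSO can be sketched serially. For simplicity this toy version uses full re-orthogonalization (LANSO uses the cheaper partial variant), and the test matrix and step count are illustrative; note that each step needs only one matrix-vector product, which is where the parallelism lives:

```python
import numpy as np

def lanczos_extreme(A, k=60, seed=0):
    """Approximate extreme eigenvalues of symmetric A with k Lanczos steps."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    for j in range(k):
        Q[:, j] = q
        w = A @ q                                 # the only matrix-vector work
        alpha[j] = q @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # full re-orthogonalization
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:                       # invariant subspace found
            k = j + 1
            break
        q = w / beta[j]
    # Eigenvalues of the small tridiagonal matrix approximate extremes of A.
    T = np.diag(alpha[:k]) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    return np.linalg.eigvalsh(T)

A = np.diag(np.arange(1.0, 101.0))   # symmetric test matrix, eigenvalues 1..100
ritz = lanczos_extreme(A)
print(ritz[0], ritz[-1])             # close to the true extremes, 1 and 100
```

In a parallel setting only the product `A @ q` and the inner products need communication, which is why the method distributes well.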

  20. Some fast elliptic solvers on parallel architectures and their complexities

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Youcef

    1989-01-01

    The discretization of separable elliptic partial differential equations leads to linear systems with special block tridiagonal matrices. Several methods are known to solve these systems, the most general of which is the Block Cyclic Reduction (BCR) algorithm which handles equations with nonconstant coefficients. A method was recently proposed to parallelize and vectorize BCR. Here, the mapping of BCR on distributed memory architectures is discussed, and its complexity is compared with that of other approaches, including the Alternating-Direction method. A fast parallel solver is also described, based on an explicit formula for the solution, which has parallel computational complexity lower than that of parallel BCR.

  1. Some fast elliptic solvers on parallel architectures and their complexities

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Y.

    1989-01-01

    The discretization of separable elliptic partial differential equations leads to linear systems with special block tridiagonal matrices. Several methods are known to solve these systems, the most general of which is the Block Cyclic Reduction (BCR) algorithm which handles equations with nonconstant coefficients. A method was recently proposed to parallelize and vectorize BCR. In this paper, the mapping of BCR on distributed memory architectures is discussed, and its complexity is compared with that of other approaches including the Alternating-Direction method. A fast parallel solver is also described, based on an explicit formula for the solution, which has parallel computational complexity lower than that of parallel BCR.
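
    The cyclic reduction idea behind BCR can be illustrated on a scalar tridiagonal system (a sketch of classical odd-even reduction, not the block variant the abstract discusses): each step eliminates the odd-indexed unknowns, halving the system, and the row combinations within a step are independent, which is what parallelizes:

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system (sub-diagonal a, diagonal b, super-diagonal c,
    right-hand side d, with a[0] = c[-1] = 0) for n = 2**k - 1 unknowns."""
    n = len(b)
    if n == 1:
        return np.array([d[0] / b[0]])
    i = np.arange(1, n, 2)                   # odd rows survive the reduction
    alpha, gamma = a[i] / b[i - 1], c[i] / b[i + 1]
    a2 = -alpha * a[i - 1]                   # combine each odd row with its
    b2 = b[i] - alpha * c[i - 1] - gamma * a[i + 1]  # two even neighbours
    c2 = -gamma * c[i + 1]
    d2 = d[i] - alpha * d[i - 1] - gamma * d[i + 1]
    x = np.zeros(n)
    x[i] = cyclic_reduction(a2, b2, c2, d2)  # recurse on the half-size system
    xe = np.concatenate(([0.0], x, [0.0]))   # padded copy for boundary rows
    j = np.arange(0, n, 2)                   # back-substitute the even rows
    x[j] = (d[j] - a[j] * xe[j] - c[j] * xe[j + 2]) / b[j]
    return x

rng = np.random.default_rng(1)
n = 7
b = 4.0 + rng.random(n)                      # diagonally dominant test system
a = np.concatenate(([0.0], rng.random(n - 1)))
c = np.concatenate((rng.random(n - 1), [0.0]))
d = rng.random(n)
x = cyclic_reduction(a, b, c, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(A @ x, d))  # True
```

BCR applies the same recursion with matrix blocks in place of scalars, which is why the reduction step dominates its parallel complexity.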

  2. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  3. EARLY SYNTACTIC ACQUISITION.

    ERIC Educational Resources Information Center

    KELLEY, K.L.

    THIS PAPER IS A STUDY OF A CHILD'S EARLIEST PRETRANSFORMATIONAL LANGUAGE ACQUISITION PROCESSES. A MODEL IS CONSTRUCTED BASED ON THE ASSUMPTIONS (1) THAT SYNTACTIC ACQUISITION OCCURS THROUGH THE TESTING OF HYPOTHESES REFLECTING THE INITIAL STRUCTURE OF THE ACQUISITION MECHANISM AND THE LANGUAGE DATA TO WHICH THE CHILD IS EXPOSED, AND (2) THAT…

  4. Analysis of an antijam FH acquisition scheme

    NASA Astrophysics Data System (ADS)

    Miller, Leonard E.; Lee, Jhong S.; French, Robert H.; Torrieri, Don J.

    1992-01-01

    An easily implemented matched filter scheme for acquiring hopping code synchronization of incoming frequency-hopping (FH) signals is analyzed, and its performance is evaluated for two types of jamming: partial-band noise jamming and partial-band multitone jamming. The system is designed to reduce jammer-induced false alarms. The system's matched-filter output is compared to an adaptive threshold that is derived from a measurement of the number of acquisition channels being jammed. Example performance calculations are given for jamming whose frequency coverage is either fixed over the entire acquisition period or hopped, that is, changed for each acquisition pulse. It is shown that the jammer's optimum strategy (the worst case) is to maximize the false-alarm probability without regard for the effect on detection probability, for both partial-band noise and multitone jamming. It is also shown that a significantly lower probability of false acquisition results from using an adaptive matched-filter threshold, demonstrating that the strategy studied here is superior to conventional nonadaptive threshold schemes.
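
    The adaptive-threshold idea can be sketched in a few lines. The linear scaling rule and every number below are invented for illustration; the paper derives its threshold from the jammed-channel measurement via detection and false-alarm analysis:

```python
def adaptive_threshold(base, n_channels, n_jammed, penalty=2.0):
    """Raise the detection threshold in proportion to the jammed fraction."""
    frac = n_jammed / n_channels
    return base * (1.0 + penalty * frac)

def declare_acquisition(mf_output, base, n_channels, n_jammed):
    """Compare the matched-filter output against the adapted threshold."""
    return mf_output >= adaptive_threshold(base, n_channels, n_jammed)

# With half the channels measured as jammed, the same matched-filter level
# no longer triggers a (likely false) acquisition.
print(declare_acquisition(1.5, 1.0, 100, 0))   # True
print(declare_acquisition(1.5, 1.0, 100, 50))  # False
```

This captures the mechanism that suppresses jammer-induced false alarms: the more channels the jammer covers, the harder it becomes for jamming energy alone to cross the threshold.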

  5. Software Acquisition: Evolution, Total Quality Management, and Applications to the Army Tactical Missile System.

    DTIC Science & Technology

    presents the concept of software Total Quality Management (TQM) which focuses on the entire process of software acquisition, as a partial solution to...software TQM can be applied to software acquisition. Software Development, Software Acquisition, Total Quality management (TQM), Army Tactical Missile

  6. 78 FR 61113 - Acquisition Process: Task and Delivery Order Contracts, Bundling, Consolidation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-02

    ... business becomes other than small (e.g., due to a merger or acquisition), it must be ``off ramped.'' With... partial set-aside multiple award contract becomes other than small as a result of a merger or acquisition... a long term contract (i.e., the contract exceeds five years) or there is a merger, acquisition, or...

  7. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincaré's model for a non-Euclidean geometry is defined and analyzed. (LS)

  8. Parallels with nature

    NASA Astrophysics Data System (ADS)

    2014-10-01

    Adam Nelson and Stuart Warriner, from the University of Leeds, talk with Nature Chemistry about their work to develop viable synthetic strategies for preparing new chemical structures in parallel with the identification of desirable biological activity.

  9. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (c.f. Appendix A).

  10. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-09-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, a set of tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory at info.mcs.anl.gov.

  11. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincaré's model for a non-Euclidean geometry is defined and analyzed. (LS)

  12. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth

  13. Revisiting and parallelizing SHAKE

    NASA Astrophysics Data System (ADS)

    Weinbach, Yael; Elber, Ron

    2005-10-01

    An algorithm is presented for running SHAKE in parallel. SHAKE is a widely used approach to compute molecular dynamics trajectories with constraints. An essential step in SHAKE is the solution of a sparse linear problem of the type Ax = b, where x is a vector of unknowns. Conjugate gradient minimization (that can be done in parallel) replaces the widely used iteration process that is inherently serial. Numerical examples present good load balancing and are limited only by communication time.
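
    The parallel-friendly linear step is conjugate gradient minimization applied to the sparse system Ax = b. A serial sketch follows; the dense, well-conditioned test matrix merely stands in for SHAKE's sparse constraint matrix, and the point is that every operation (matrix-vector product, dot product, vector update) parallelizes naturally, unlike the inherently serial iteration it replaces:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve Ax = b for symmetric positive-definite A by CG iteration."""
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p                # the dominant, parallelizable work
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p # conjugate update of the direction
        rs = rs_new
    return x

# SPD test system (a stand-in for the constraint matrix arising in SHAKE).
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))  # True
```

In a distributed run, each process would hold a slice of the vectors and rows of A, with global reductions only for the two dot products per iteration.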

  14. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
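
    The vector quantization component can be sketched as nearest-codeword encoding. The toy codebook below is just drawn from the data itself; the MPP implementation and the on-line learning strategies are not reproduced:

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Map each image block to the index of its nearest codeword."""
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct each block as its chosen codeword (lossy)."""
    return codebook[indices]

rng = np.random.default_rng(0)
blocks = rng.random((1000, 16))   # 1000 flattened 4x4 image blocks
codebook = blocks[rng.choice(1000, 32, replace=False)]  # toy 32-word codebook
idx = vq_encode(blocks, codebook)
recon = vq_decode(idx, codebook)
# Lossy: 1000 blocks are represented by 32 codewords (5 bits per block).
print(idx.shape, recon.shape)
```

The encoding of each block is independent of every other block, which is what makes the scheme a natural fit for a massively parallel machine like the MPP.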

  15. Low cost instrumentation: Parallel port analog to digital converter

    NASA Astrophysics Data System (ADS)

    Dierking, Matthew P.

    1993-02-01

    The personal computer (PC) has become a powerful and cost effective computing platform for use in the laboratory and industry. This Technical Memorandum presents the use of the PC parallel port adapter to implement a low cost analog to digital converter for general purpose instrumentation and automated data acquisition.

  16. Detecting opportunities for parallel observations on the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Lucks, Michael

    1992-01-01

    The presence of multiple scientific instruments aboard the Hubble Space Telescope provides opportunities for parallel science, i.e., the simultaneous use of different instruments for different observations. Determining whether candidate observations are suitable for parallel execution depends on numerous criteria (some involving quantitative tradeoffs) that may change frequently. A knowledge-based approach is presented for constructing a scoring function to rank candidate pairs of observations for parallel science. In the Parallel Observation Matching System (POMS), spacecraft knowledge and schedulers' preferences are represented using a uniform set of mappings, or knowledge functions. Assessment of parallel science opportunities is achieved via composition of the knowledge functions in a prescribed manner. The knowledge acquisition and explanation facilities of the system are presented. The methodology is applicable to many other multiple criteria assessment problems.
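
    The composition-of-knowledge-functions idea can be sketched as follows. The criteria, field names, and the multiplicative combination are invented illustrations, not POMS's actual mappings:

```python
# Each "knowledge function" maps a candidate observation pair to a [0, 1]
# suitability factor; the overall score composes them (here as a product,
# so a hard veto from any single criterion zeroes the pair).
def same_pointing(pair):
    """Hypothetical hard criterion: targets must be close enough on the sky."""
    return 1.0 if pair["separation_arcmin"] < 5 else 0.0

def exposure_overlap(pair):
    """Hypothetical soft criterion: reward well-matched exposure times."""
    shorter, longer = sorted((pair["exp1_s"], pair["exp2_s"]))
    return shorter / longer

def score(pair, knowledge_functions):
    s = 1.0
    for f in knowledge_functions:
        s *= f(pair)
    return s

pair = {"separation_arcmin": 2.0, "exp1_s": 900, "exp2_s": 1200}
print(score(pair, [same_pointing, exposure_overlap]))  # 0.75
```

Because each criterion is a separate function, schedulers can add, drop, or retune criteria without touching the combination logic — the flexibility the abstract attributes to the uniform mapping representation.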

  17. Parallel State Estimation Assessment with Practical Data

    SciTech Connect

    Chen, Yousu; Jin, Shuangshuang; Rice, Mark J.; Huang, Zhenyu

    2014-10-31

    This paper presents a full-cycle parallel state estimation (PSE) implementation using a preconditioned conjugate gradient algorithm. The developed code is able to solve large-size power system state estimation within 5 seconds using real-world data, comparable to the Supervisory Control And Data Acquisition (SCADA) rate. This achievement allows the operators to know the system status much faster to help improve grid reliability. Case study results of the Bonneville Power Administration (BPA) system with real measurements are presented. The benefits of fast state estimation are also discussed.

  18. Sublattice parallel replica dynamics

    NASA Astrophysics Data System (ADS)

    Martínez, Enrique; Uberuaga, Blas P.; Voter, Arthur F.

    2014-06-01

    Exascale computing presents a challenge for the scientific community as new algorithms must be developed to take full advantage of the new computing paradigm. Atomistic simulation methods that offer full fidelity to the underlying potential, i.e., molecular dynamics (MD) and parallel replica dynamics, fail to use the whole machine speedup, leaving a region in time and sample size space that is unattainable with current algorithms. In this paper, we present an extension of the parallel replica dynamics algorithm [A. F. Voter, Phys. Rev. B 57, R13985 (1998), 10.1103/PhysRevB.57.R13985] by combining it with the synchronous sublattice approach of Shim and Amar [Y. Shim and J. G. Amar, Phys. Rev. B 71, 125432 (2005), 10.1103/PhysRevB.71.125432], thereby exploiting event locality to improve the algorithm scalability. This algorithm is based on a domain decomposition in which events happen independently in different regions in the sample. We develop an analytical expression for the speedup given by this sublattice parallel replica dynamics algorithm and compare it with parallel MD and traditional parallel replica dynamics. We demonstrate how this algorithm, which introduces a slight additional approximation of event locality, enables the study of physical systems unreachable with traditional methodologies and promises to better utilize the resources of current high performance and future exascale computers.

  19. Parallel time integration software

    SciTech Connect

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.

  20. Parallel architectures for vision

    SciTech Connect

    Maresca, M.; Lavin, M.A.; Li, H.

    1988-08-01

    Vision computing involves the execution of a large number of operations on large sets of structured data. Sequential computers cannot achieve the speed required by most of the current applications and therefore parallel architectural solutions have to be explored. In this paper the authors examine the options that drive the design of a vision oriented computer, starting with the analysis of the basic vision computation and communication requirements. They briefly review the classical taxonomy for parallel computers, based on the multiplicity of the instruction and data stream, and apply a recently proposed criterion, the degree of autonomy of each processor, to further classify fine-grain SIMD massively parallel computers. They identify three types of processor autonomy, namely operation autonomy, addressing autonomy, and connection autonomy. For each type they give the basic definitions and show some examples. They focus on the concept of connection autonomy, which they believe is a key point in the development of massively parallel architectures for vision. They show two examples of parallel computers featuring different types of connection autonomy - the Connection Machine and the Polymorphic-Torus - and compare their cost and benefit.

  1. Parallel algorithms for the spectral transform method

    SciTech Connect

    Foster, I.T.; Worley, P.H.

    1994-04-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on a sphere and is widely used in atmospheric circulation models. Recent research has identified several promising algorithms for implementing this method on massively parallel computers; however, no detailed comparison of the different algorithms has previously been attempted. In this paper, we describe these different parallel algorithms and report on computational experiments that we have conducted to evaluate their efficiency on parallel computers. The experiments used a testbed code that solves the nonlinear shallow water equations on a sphere; considerable care was taken to ensure that the experiments provide a fair comparison of the different algorithms and that the results are relevant to global models. We focus on hypercube- and mesh-connected multicomputers with cut-through routing, such as the Intel iPSC/860, DELTA, and Paragon, and the nCUBE/2, but also indicate how the results extend to other parallel computer architectures. The results of this study are relevant not only to the spectral transform method but also to multidimensional FFTs and other parallel transforms.

  2. Parallel algorithms for the spectral transform method

    SciTech Connect

    Foster, I.T.; Worley, P.H.

    1997-05-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on a sphere and is widely used in atmospheric circulation models. Recent research has identified several promising algorithms for implementing this method on massively parallel computers; however, no detailed comparison of the different algorithms has previously been attempted. In this paper, the authors describe these different parallel algorithms and report on computational experiments that they have conducted to evaluate their efficiency on parallel computers. The experiments used a testbed code that solves the nonlinear shallow water equations on a sphere; considerable care was taken to ensure that the experiments provide a fair comparison of the different algorithms and that the results are relevant to global models. The authors focus on hypercube- and mesh-connected multicomputers with cut-through routing, such as the Intel iPSC/860, DELTA, and Paragon, and the nCUBE/2, but they also indicate how the results extend to other parallel computer architectures. The results of this study are relevant not only to the spectral transform method but also to multidimensional fast Fourier transforms (FFTs) and other parallel transforms.
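
    The parallel multidimensional FFT these results extend to is typically organized as batches of local 1-D FFTs separated by a transpose; in a distributed-memory run each batch is local to a processor and the transpose is the single all-to-all communication step. A serial sketch of that decomposition:

```python
import numpy as np

def fft2_by_transpose(a):
    """2-D FFT as two batches of 1-D FFTs separated by a transpose."""
    step1 = np.fft.fft(a, axis=1)        # 1-D FFTs along locally held rows
    step2 = np.fft.fft(step1.T, axis=1)  # transpose (= all-to-all), then rows
    return step2.T                       # restore the original layout

rng = np.random.default_rng(0)
a = rng.random((8, 8))
print(np.allclose(fft2_by_transpose(a), np.fft.fft2(a)))  # True
```

The algorithmic comparison in the paper largely comes down to how (and how often) this transpose communication is carried out on the target network.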

  3. Parallel optical sampler

    DOEpatents

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder Modulator. A method of sampling the optical analog input signal is disclosed.

  4. Partial tooth gear bearings

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    2010-01-01

    A partial gear bearing including an upper half, comprising peak partial teeth, and a lower, or bottom, half, comprising valley partial teeth. The upper half also has an integrated roller section between each of the peak partial teeth with a radius equal to the gear pitch radius of the radially outwardly extending peak partial teeth. Conversely, the lower half has an integrated roller section between each of the valley half teeth with a radius also equal to the gear pitch radius of the peak partial teeth. The valley partial teeth extend radially inwardly from its roller section. The peak and valley partial teeth are exactly out of phase with each other, as are the roller sections of the upper and lower halves. Essentially, the end roller bearing of the typical gear bearing has been integrated into the normal gear tooth pattern.

  5. Partial (focal) seizure

    MedlinePlus

    ... Jacksonian seizure; Seizure - partial (focal); Temporal lobe seizure; Epilepsy - partial seizures ... Abou-Khalil BW, Gallagher MJ, Macdonald RL. Epilepsies. In: Daroff ... Practice . 7th ed. Philadelphia, PA: Elsevier; 2016:chap 101. ...

  6. Excessive Acquisition in Hoarding

    PubMed Central

    Frost, Randy O.; Tolin, David F.; Steketee, Gail; Fitch, Kristin E.; Selbo-Bruns, Alexandra

    2009-01-01

    Compulsive hoarding (the acquisition of and failure to discard large numbers of possessions) is associated with substantial health risk, impairment, and economic burden. However, little research has examined separate components of this definition, particularly excessive acquisition. The present study examined acquisition in hoarding. Participants, 878 self-identified with hoarding and 665 family informants (not matched to hoarding participants), completed an internet survey. Among hoarding participants who met criteria for clinically significant hoarding, 61% met criteria for a diagnosis of compulsive buying and approximately 85% reported excessive acquisition. Family informants indicated that nearly 95% exhibited excessive acquisition. Those who acquired excessively had more severe hoarding; their hoarding had an earlier onset and resulted in more psychiatric work impairment days; and they experienced more symptoms of obsessive-compulsive disorder, depression, and anxiety. Two forms of excessive acquisition (buying and free things) each contributed independent variance in the prediction of hoarding severity and related symptoms. PMID:19261435

  7. Excessive acquisition in hoarding.

    PubMed

    Frost, Randy O; Tolin, David F; Steketee, Gail; Fitch, Kristin E; Selbo-Bruns, Alexandra

    2009-06-01

    Compulsive hoarding (the acquisition of and failure to discard large numbers of possessions) is associated with substantial health risk, impairment, and economic burden. However, little research has examined separate components of this definition, particularly excessive acquisition. The present study examined acquisition in hoarding. Participants, 878 self-identified with hoarding and 665 family informants (not matched to hoarding participants), completed an Internet survey. Among hoarding participants who met criteria for clinically significant hoarding, 61% met criteria for a diagnosis of compulsive buying and approximately 85% reported excessive acquisition. Family informants indicated that nearly 95% exhibited excessive acquisition. Those who acquired excessively had more severe hoarding; their hoarding had an earlier onset and resulted in more psychiatric work impairment days; and they experienced more symptoms of obsessive-compulsive disorder, depression, and anxiety. Two forms of excessive acquisition (buying and free things) each contributed independent variance in the prediction of hoarding severity and related symptoms.

  8. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  9. Collisionless parallel shocks

    SciTech Connect

    Khabibrakhmanov, I.K.; Galeev, A.A.; Galinsky, V.L.

    1993-02-01

    A collisionless parallel shock model is presented which is based on solitary-type solutions of the modified derivative nonlinear Schrödinger equation (MDNLS) for parallel Alfvén waves. We generalize the standard derivative nonlinear Schrödinger equation in order to include the possible anisotropy of the plasma distribution function and higher-order Korteweg-de Vries type dispersion. Stationary solutions of MDNLS are discussed. The new mechanism, which can be called "adiabatic," of ion reflection from the magnetic mirror of the parallel shock structure is a natural and essential feature of the parallel shock that introduces irreversible properties into the nonlinear wave structure and may significantly contribute to the plasma heating upstream as well as downstream of the shock. The anisotropic nature of "adiabatic" reflections leads to an asymmetric particle distribution in the upstream as well as in the downstream regions of the shock. As a result, a nonzero heat flux appears near the front of the shock. It is shown that this causes stochastic behavior of the nonlinear waves, which can significantly contribute to the shock thermalization. The number of adiabatically reflected ions defines the threshold conditions of the fire-hose and mirror-type instabilities in the downstream and upstream regions and thus determines a parameter region in which the described laminar parallel shock structure can exist. 29 refs., 4 figs.

  10. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

Among the highly parallel computing architectures required for advanced scientific computation, those designated 'MIMD' and 'SIMD' have yielded the best results to date. The present evaluation of the development status of such architectures shows that neither has attained a decisive advantage in the treatment of most near-homogeneous problems; for problems involving numerous dissimilar parts, however, such currently speculative architectures as 'neural networks' or 'data flow' machines may be required. Data flow computers are the most practical form of MIMD fine-grained parallel computers yet conceived; they automatically solve the problem of assigning virtual processors to the real processors in the machine.

  11. Ion parallel closures

    NASA Astrophysics Data System (ADS)

    Ji, Jeong-Young; Lee, Hankyu Q.; Held, Eric D.

    2017-02-01

    Ion parallel closures are obtained for arbitrary atomic weights and charge numbers. For arbitrary collisionality, the heat flow and viscosity are expressed as kernel-weighted integrals of the temperature and flow-velocity gradients. Simple, fitted kernel functions are obtained from the 1600 parallel moment solution and the asymptotic behavior in the collisionless limit. The fitted kernel parameters are tabulated for various temperature ratios of ions to electrons. The closures can be used conveniently without solving the kinetic equation or higher order moment equations in closing ion fluid equations.

  12. Parallel programming with Ada

    SciTech Connect

    Kok, J.

    1988-01-01

    To the human programmer the ease of coding distributed computing is highly dependent on the suitability of the employed programming language. But with a particular language it is also important whether the possibilities of one or more parallel architectures can efficiently be addressed by available language constructs. In this paper the possibilities are discussed of the high-level language Ada and in particular of its tasking concept as a descriptional tool for the design and implementation of numerical and other algorithms that allow execution of parts in parallel. Language tools are explained and their use for common applications is shown. Conclusions are drawn about the usefulness of several Ada concepts.

  13. Speeding up parallel processing

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    In 1967 Amdahl expressed doubts about the ultimate utility of multiprocessors. The formulation, now called Amdahl's law, became part of the computing folklore and has inspired much skepticism about the ability of the current generation of massively parallel processors to efficiently deliver all their computing power to programs. The widely publicized recent results of a group at Sandia National Laboratory, which showed speedup on a 1024 node hypercube of over 500 for three fixed size problems and over 1000 for three scalable problems, have convincingly challenged this bit of folklore and have given new impetus to parallel scientific computing.
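
    The tension described above between Amdahl's fixed-size bound and the Sandia scaled-problem results can be checked with a short calculation. The sketch below (illustrative only, not from the cited work) contrasts fixed-size speedup with the scaled-speedup model usually attributed to Gustafson; with a 0.1% serial fraction, a 1024-node machine lands near the "over 500" and "over 1000" figures the record mentions.

    ```python
    def amdahl_speedup(serial_fraction, n_procs):
        """Fixed-size problem: speedup is capped by the serial fraction."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

    def gustafson_speedup(serial_fraction, n_procs):
        """Scaled problem: the parallel part grows with the machine size."""
        return n_procs - serial_fraction * (n_procs - 1)

    # With 0.1% serial work on a 1024-node hypercube:
    print(amdahl_speedup(0.001, 1024))     # fixed-size speedup, roughly 506
    print(gustafson_speedup(0.001, 1024))  # scaled speedup, close to 1023
    ```

    The serial fraction of 0.001 is a hypothetical value chosen to illustrate the two regimes, not a figure from the Sandia experiments.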

  14. CRUNCH_PARALLEL

    SciTech Connect

    Shumaker, Dana E.; Steefel, Carl I.

    2016-06-21

The code CRUNCH_PARALLEL is a parallel version of the CRUNCH code. CRUNCH code version 2.0 was previously released by LLNL (UCRL-CODE-200063). CRUNCH is a general-purpose reactive transport code developed by Carl Steefel and Yabusaki (Steefel and Yabusaki, 1996). The code handles non-isothermal transport and reaction in one, two, and three dimensions. The reaction algorithm is generic in form, handling an arbitrary number of aqueous and surface complexation reactions as well as mineral dissolution/precipitation. A standardized database is used containing thermodynamic and kinetic data. The code includes advective, dispersive, and diffusive transport.

  15. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  16. FTMP data acquisition environment

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1988-01-01

The Fault-Tolerant Multi-Processing (FTMP) test-bed data acquisition environment is described. The performance of two data acquisition devices available in the test environment is estimated and compared, and these estimated data rates are used as measures of the devices' capabilities. A new data acquisition device was developed and added to the FTMP environment. This path increases the available data rate by approximately a factor of 8, to 379 KW/s, while simplifying the experiment development process.

  17. Software Acquisition Program Dynamics

    DTIC Science & Technology

    2011-10-24

acquisition and development of software-reliant systems. Novak has more than 25 years of experience with real-time embedded software product development... Problem: poor acquisition program performance inhibits military performance by depriving the warfighter of critical systems to achieve mission... objectives. • Delayed systems withhold needed capabilities. • Wasted resources drain funding needed for new systems. Acquisitions fail for both technical

  18. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  20. Streamlined acquisition handbook

    NASA Technical Reports Server (NTRS)

    1990-01-01

NASA has always placed great emphasis on the acquisition process, recognizing it as among its most important activities. This handbook is intended to facilitate the application of streamlined acquisition procedures. The development of these procedures reflects the efforts of an action group composed of NASA Headquarters and center acquisition professionals. It is the intent to accomplish real change in the acquisition process as a result of this effort. An important part of streamlining the acquisition process is a commitment by the people involved in the process to accomplishing acquisition activities quickly and with high quality. Too often we continue to accomplish work in 'the same old way' without considering available alternatives which would require no changes to regulations, approvals from Headquarters, or waivers of required practice. Similarly, we must be sensitive to schedule opportunities throughout the acquisition cycle, not just once the purchase request arrives at the procurement office. Techniques that have been identified as ways of reducing acquisition lead time while maintaining high quality in our acquisition process are presented.

  1. The Acquisition Program

    ERIC Educational Resources Information Center

    Harris, Larry A.

    1968-01-01

    Describes sources from which the Educational Resources Information Center (ERIC) Clearinghouse on Reading obtains materials and discusses the criteria by which these materials are selected for acquisition. (MD)

  2. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  3. Parallel Total Energy

    SciTech Connect

    Wang, Lin-Wang

    2004-10-21

This is a total energy electronic structure code using the Local Density Approximation (LDA) of density functional theory. It uses plane waves as the wave function basis set. It can use both norm-conserving pseudopotentials and ultrasoft pseudopotentials. It can relax the atomic positions according to the total energy. It is a parallel code using MPI.

  4. Parallel hierarchical radiosity rendering

    SciTech Connect

    Carter, Michael

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  5. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1993-01-01

PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  6. [The parallel saw blade].

    PubMed

    Mühldorfer-Fodor, M; Hohendorff, B; Prommersberger, K-J; van Schoonhoven, J

    2011-04-01

For shortening osteotomy, two exactly parallel osteotomies are needed to assure congruent adaptation of the shortened bone after segment resection. This is required for regular bone healing. In addition, it is difficult to shorten a bone to a precise distance using an oblique segment resection. A mobile spacer between two saw blades keeps the distance of the blades exactly parallel during an osteotomy cut. The parallel saw blades from Synthes® are designed for 2, 2.5, 3, 4, and 5 mm shortening distances. Two types of blades are available (e.g., for transverse or oblique osteotomies) to assure precise shortening. Preoperatively, the desired type of osteotomy (transverse or oblique) and the shortening distance have to be determined. Then, the appropriate parallel saw blade is chosen, which is compatible with the Synthes® Colibri with an oscillating saw attachment. During the osteotomy cut, the spacer should be kept as close to the bone as possible. Excessive force that may deform the blades should be avoided. Before manipulating the bone ends, it is important to determine that the bone is completely dissected by both saw blades to prevent fracturing of the corticalis with bony spurs. The shortening osteotomy is mainly fixated by plate osteosynthesis. For compression of the bone ends, the screws should be placed eccentrically in the plate holes. For an oblique osteotomy, an additional lag screw should be used.

  7. Parallel Coordinate Axes.

    ERIC Educational Resources Information Center

    Friedlander, Alex; And Others

    1982-01-01

    Several methods of numerical mappings other than the usual cartesian coordinate system are considered. Some examples using parallel axes representation, which are seen to lead to aesthetically pleasing or interesting configurations, are presented. Exercises with alternative representations can stimulate pupil imagination and exploration in…

  8. Parallel Dislocation Simulator

    SciTech Connect

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  9. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N{sup 2}) time. The parallel time complexity estimates for our algorithms are O(N/n{sub p}) for uniform point distributions and O( (N/n{sub p}) log (N/n{sub p}) + n{sub p}log n{sub p}) for non-uniform distributions using n{sub p} CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
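
    The O(N²) direct sum that the fast Gauss transform accelerates is simple to state. The NumPy sketch below (illustrative only, not the authors' code; the function name and the form of the kernel bandwidth h are assumptions) evaluates G(x_j) = Σ_i q_i exp(-|x_j - y_i|² / h²) by brute force, which is the baseline against which the O(N/n_p)-type parallel estimates above are measured.

    ```python
    import numpy as np

    def direct_gauss_transform(targets, sources, weights, h):
        """Naive O(N*M) sum of Gaussians: G(x_j) = sum_i q_i exp(-|x_j - y_i|^2 / h^2).
        targets: (N, d), sources: (M, d), weights: (M,). Returns (N,)."""
        # Pairwise squared distances via broadcasting: (N, 1, d) - (1, M, d) -> (N, M).
        d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=-1)
        # Apply the Gaussian kernel and contract against the source weights.
        return np.exp(-d2 / h**2) @ weights

    rng = np.random.default_rng(0)
    sources = rng.random((100, 2))
    weights = rng.random(100)
    targets = rng.random((50, 2))
    G = direct_gauss_transform(targets, sources, weights, 0.5)
    print(G.shape)  # (50,)
    ```

    The quadratic cost of forming the full (N, M) distance matrix is exactly what plane-wave expansions and octree-based translation avoid at large N.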

  10. Progress in parallelizing XOOPIC

    NASA Astrophysics Data System (ADS)

    Mardahl, Peter; Verboncoeur, J. P.

    1997-11-01

XOOPIC (Object-Oriented Particle-in-Cell code for X11-based Unix workstations) is presently a serial 2-D 3v particle-in-cell plasma simulation (J.P. Verboncoeur, A.B. Langdon, and N.T. Gladd, "An object-oriented electromagnetic PIC code." Computer Physics Communications 87 (1995) 199-211). The present effort focuses on using parallel and distributed processing to optimize the simulation for large problems. The benefits include increased capacity for memory-intensive problems and improved performance for processor-intensive problems. The MPI library is used to enable the parallel version to be easily ported to massively parallel, SMP, and distributed computers. The philosophy employed here is to spatially decompose the system into computational regions separated by 'virtual boundaries', objects which contain the local data and algorithms to perform the local field solve and particle communication between regions. This implementation reduces the changes that parallelization requires in the rest of the program. Specific implementation details, such as the hiding of communication latency behind local computation, will also be discussed.

  11. Parallel hierarchical global illumination

    SciTech Connect

    Snell, Quinn O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  12. High performance parallel architectures

    SciTech Connect

    Anderson, R.E. )

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  13. Parallel Multigrid Equation Solver

    SciTech Connect

    Adams, Mark

    2001-09-07

Prometheus is a fully parallel multigrid equation solver for matrices that arise in unstructured grid finite element applications. It includes a geometric and an algebraic multigrid method and has solved problems of up to 76 million degrees of freedom, problems in linear elasticity on the ASCI Blue Pacific and ASCI Red machines.

  14. Resource-Bounded Information Acquisition and Learning

    DTIC Science & Technology

    2012-05-01

perform on similar tasks. The basic formulation of RBIE as an MDP opens up many interesting avenues of research. Use of TD q-learning is one of the first... enabling a student in any part of the world to find the right research advisor - the right 'Guru.' Soon after that project (and partially, because of it

  15. 48 CFR 1852.228-81 - Insurance-Partial Immunity From Tort Liability.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Insurance-Partial Immunity... Provisions and Clauses 1852.228-81 Insurance—Partial Immunity From Tort Liability. As prescribed in 1828.311-270(c), insert the following clause: Insurance—Partial Immunity From Tort Liability (SEP 2000)...

  16. 48 CFR 1852.228-81 - Insurance-Partial Immunity From Tort Liability.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Insurance-Partial Immunity... Provisions and Clauses 1852.228-81 Insurance—Partial Immunity From Tort Liability. As prescribed in 1828.311-270(c), insert the following clause: Insurance—Partial Immunity From Tort Liability (SEP 2000)...

  17. 48 CFR 1852.228-81 - Insurance-Partial Immunity From Tort Liability.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Insurance-Partial Immunity... Provisions and Clauses 1852.228-81 Insurance—Partial Immunity From Tort Liability. As prescribed in 1828.311-270(c), insert the following clause: Insurance—Partial Immunity From Tort Liability (SEP 2000)...

  18. 48 CFR 1852.228-81 - Insurance-Partial Immunity From Tort Liability.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false Insurance-Partial Immunity... Provisions and Clauses 1852.228-81 Insurance—Partial Immunity From Tort Liability. As prescribed in 1828.311-270(c), insert the following clause: Insurance—Partial Immunity From Tort Liability (SEP 2000)...

  19. 48 CFR 1852.228-81 - Insurance-Partial Immunity From Tort Liability.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Insurance-Partial Immunity... Provisions and Clauses 1852.228-81 Insurance—Partial Immunity From Tort Liability. As prescribed in 1828.311-270(c), insert the following clause: Insurance—Partial Immunity From Tort Liability (SEP 2000)...

  20. 48 CFR 552.219-70 - Allocation of Orders-Partially Set-Aside Items.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...-Partially Set-Aside Items. 552.219-70 Section 552.219-70 Federal Acquisition Regulations System GENERAL... and Clauses 552.219-70 Allocation of Orders—Partially Set-Aside Items. As prescribed in 519.508, insert the following clause: Allocation of Orders—Partially Set-Aside Items (SEP 1999) Where the...

  1. Multiple channel data acquisition system

    DOEpatents

    Crawley, H. Bert; Rosenberg, Eli I.; Meyer, W. Thomas; Gorbics, Mark S.; Thomas, William D.; McKay, Roy L.; Homer, Jr., John F.

    1990-05-22

    A multiple channel data acquisition system for the transfer of large amounts of data from a multiplicity of data channels has a plurality of modules which operate in parallel to convert analog signals to digital data and transfer that data to a communications host via a FASTBUS. Each module has a plurality of submodules which include a front end buffer (FEB) connected to input circuitry having an analog to digital converter with cache memory for each of a plurality of channels. The submodules are interfaced with the FASTBUS via a FASTBUS coupler which controls a module bus and a module memory. The system is triggered to effect rapid parallel data samplings which are stored to the cache memories. The cache memories are uploaded to the FEBs during which zero suppression occurs. The data in the FEBs is reformatted and compressed by a local processor during transfer to the module memory. The FASTBUS coupler is used by the communications host to upload the compressed and formatted data from the module memory. The local processor executes programs which are downloaded to the module memory through the FASTBUS coupler.
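
    The zero suppression performed during the FEB upload described above can be sketched in a few lines: samples at or below a noise threshold are dropped and only (channel index, value) pairs survive, which is what makes the subsequent compression effective. This is a minimal illustrative sketch, not the patented hardware logic; the threshold value is hypothetical.

    ```python
    def zero_suppress(samples, threshold):
        """Keep only (index, value) pairs whose magnitude exceeds the threshold,
        so mostly-quiet channels reduce to a short list of hits."""
        return [(i, v) for i, v in enumerate(samples) if abs(v) > threshold]

    # A mostly-quiet channel: only the two real hits survive suppression.
    print(zero_suppress([0, 0, 3, 0, 0, -7, 0], 1))  # [(2, 3), (5, -7)]
    ```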

  2. Multiple channel data acquisition system

    DOEpatents

    Crawley, H.B.; Rosenberg, E.I.; Meyer, W.T.; Gorbics, M.S.; Thomas, W.D.; McKay, R.L.; Homer, J.F. Jr.

    1990-05-22

    A multiple channel data acquisition system for the transfer of large amounts of data from a multiplicity of data channels has a plurality of modules which operate in parallel to convert analog signals to digital data and transfer that data to a communications host via a FASTBUS. Each module has a plurality of submodules which include a front end buffer (FEB) connected to input circuitry having an analog to digital converter with cache memory for each of a plurality of channels. The submodules are interfaced with the FASTBUS via a FASTBUS coupler which controls a module bus and a module memory. The system is triggered to effect rapid parallel data samplings which are stored to the cache memories. The cache memories are uploaded to the FEBs during which zero suppression occurs. The data in the FEBs is reformatted and compressed by a local processor during transfer to the module memory. The FASTBUS coupler is used by the communications host to upload the compressed and formatted data from the module memory. The local processor executes programs which are downloaded to the module memory through the FASTBUS coupler. 25 figs.

  3. Switch for serial or parallel communication networks

    DOEpatents

    Crosette, Dario B.

    1994-01-01

A communication switch apparatus and a method for use in a geographically extensive serial, parallel or hybrid communication network linking a multi-processor or parallel processing system has a very low software processing overhead in order to accommodate random bursts of high-density data. Associated with each processor is a communication switch. A data source and a data destination, a sensor suite or robot for example, may also be associated with a switch. The configuration of the switches in the network is coordinated through a master processor node and depends on the operational phase of the multi-processor network: data acquisition, data processing, and data exchange. The master processor node passes information on the state to be assumed by each switch to the processor node associated with the switch. The processor node then operates a series of multi-state switches internal to each communication switch. The communication switch does not parse and interpret communication protocol and message routing information. During a data acquisition phase, the communication switch couples sensors producing data to the processor node associated with the switch, to a downlink destination on the communications network, or to both. It also may couple an uplink data source to its processor node. During the data exchange phase, the switch couples its processor node or an uplink data source to a downlink destination (which may include a processor node or a robot), or couples an uplink source to its processor node and its processor node to a downlink destination.

  4. Switch for serial or parallel communication networks

    DOEpatents

    Crosette, D.B.

    1994-07-19

A communication switch apparatus and a method for use in a geographically extensive serial, parallel or hybrid communication network linking a multi-processor or parallel processing system has a very low software processing overhead in order to accommodate random bursts of high-density data. Associated with each processor is a communication switch. A data source and a data destination, a sensor suite or robot for example, may also be associated with a switch. The configuration of the switches in the network is coordinated through a master processor node and depends on the operational phase of the multi-processor network: data acquisition, data processing, and data exchange. The master processor node passes information on the state to be assumed by each switch to the processor node associated with the switch. The processor node then operates a series of multi-state switches internal to each communication switch. The communication switch does not parse and interpret communication protocol and message routing information. During a data acquisition phase, the communication switch couples sensors producing data to the processor node associated with the switch, to a downlink destination on the communications network, or to both. It also may couple an uplink data source to its processor node. During the data exchange phase, the switch couples its processor node or an uplink data source to a downlink destination (which may include a processor node or a robot), or couples an uplink source to its processor node and its processor node to a downlink destination. 9 figs.

  5. Coring Sample Acquisition Tool

    NASA Technical Reports Server (NTRS)

    Haddad, Nicolas E.; Murray, Saben D.; Walkemeyer, Phillip E.; Badescu, Mircea; Sherrit, Stewart; Bao, Xiaoqi; Kriechbaum, Kristopher L.; Richardson, Megan; Klein, Kerry J.

    2012-01-01

A sample acquisition tool (SAT) has been developed that can be used autonomously to drill and capture rock cores. The tool is designed to accommodate core transfer using a sample tube to the IMSAH (integrated Mars sample acquisition and handling) SHEC (sample handling, encapsulation, and containerization) without ever touching the pristine core sample in the transfer process.

  6. Acquisitions in 1971

    ERIC Educational Resources Information Center

    Fristoe, Ashby J.; Myers, Rose E.

    1972-01-01

    To acquisitions librarians and book dealers 1971 will be remembered as the year of the budget cut. The recession and decreased funding had an adverse effect on libraries of all types. Acquisitions articles, other than notices of fund cuts, were in short supply in the 1971 library journal literature. (15 references) (Author)

  7. Parallel on-axis holographic phase microscopy of biological cells and unicellular microorganism dynamics.

    PubMed

    Shaked, Natan T; Newpher, Thomas M; Ehlers, Michael D; Wax, Adam

    2010-05-20

    We apply a wide-field quantitative phase microscopy technique based on parallel two-step phase-shifting on-axis interferometry to visualize live biological cells and microorganism dynamics. The parallel on-axis holographic approach makes more efficient use of the camera's spatial bandwidth than previous off-axis approaches and thus can capture finer sample spatial details, given the limited spatial bandwidth of a specific digital camera. Additionally, due to the parallel acquisition mechanism, the approach is suitable for visualizing rapid dynamic processes, permitting an interferometric acquisition rate equal to the camera frame rate. The method is demonstrated experimentally through phase microscopy of neurons and unicellular microorganisms.
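
    The two-step phase-shifting arithmetic at the heart of such interferometry can be illustrated for a single pixel. A minimal sketch, assuming the DC (background) term is known; the published method recovers it from the recorded frames:

```python
import math

def two_step_phase(i1, i2, dc):
    """Recover the object phase from two pi/2-shifted interferograms,
    assuming the DC term `dc` is known (a simplification; the paper's
    method estimates it from the recorded frames themselves)."""
    # i1 = A + B*cos(phi), i2 = A - B*sin(phi)  =>  phi = atan2(A - i2, i1 - A)
    return math.atan2(dc - i2, i1 - dc)

# synthetic pixel: DC level A = 2.0, fringe amplitude B = 0.7, true phase 0.9 rad
A, B, phi = 2.0, 0.7, 0.9
i1 = A + B * math.cos(phi)                    # reference interferogram
i2 = A + B * math.cos(phi + math.pi / 2)      # pi/2-shifted copy
print(round(two_step_phase(i1, i2, A), 6))    # recovers ~0.9
```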

  8. Virtual environment application with partial gravity simulation

    NASA Technical Reports Server (NTRS)

    Ray, David M.; Vanchau, Michael N.

    1994-01-01

    To support manned missions to the surface of Mars and missions requiring manipulation of payloads and locomotion in space, a training facility is required to simulate the conditions of both partial and microgravity. A partial gravity simulator (Pogo) which uses pneumatic suspension is being studied for use in virtual reality training. Pogo maintains a constant partial gravity simulation with a variation of simulated body force between 2.2 and 10 percent, depending on the type of locomotion inputs. This paper is based on the concept and application of a virtual environment system with Pogo, including a head-mounted display and glove. The reality engine consists of a high-end SGI workstation and PCs which drive Pogo's sensors and the data acquisition hardware used for tracking and control. The tracking system is a hybrid of magnetic and optical trackers integrated for this application.

  9. Parallel multilevel preconditioners

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1989-01-01

    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.
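
    As a concrete instance of preconditioned iteration, a conjugate gradient solver with the simplest (Jacobi, diagonal) preconditioner can be sketched; the multilevel preconditioners of the paper would replace the diagonal solve with a multilevel one:

```python
def pcg(A, b, n, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradients with a Jacobi (diagonal)
    preconditioner. A is a dense SPD matrix as a list of rows -- a
    minimal sketch, not the paper's multilevel construction."""
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = b[:]
    z = [r[i] / A[i][i] for i in range(n)]          # preconditioner solve M z = r
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# 1D discrete Laplacian; the exact solution of A x = [1, 0, 1] is [1, 1, 1]
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
x = pcg(A, [1.0, 0.0, 1.0], 3)
```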

  10. Homology, convergence and parallelism.

    PubMed

    Ghiselin, Michael T

    2016-01-05

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. © 2015 The Author(s).

  11. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing, including a Torus, a collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. A DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
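
    The torus network's nearest-neighbor structure is easy to illustrate. A sketch of wraparound neighbor addressing on a 3D torus with illustrative dimensions; it does not reflect the machine's actual routing tables:

```python
def torus_neighbors(coord, dims):
    """Nearest neighbors of a node on a 3D torus: one hop in each
    direction along each axis, with wraparound links at the edges.
    `dims` are assumed torus dimensions, not the real machine's."""
    x, y, z = coord
    nbrs = []
    for axis, size in enumerate(dims):
        for step in (-1, 1):
            n = [x, y, z]
            n[axis] = (n[axis] + step) % size       # wrap at the boundary
            nbrs.append(tuple(n))
    return nbrs

# a corner node still has six neighbors thanks to the wraparound links
print(torus_neighbors((0, 0, 0), (4, 4, 4)))
```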

  12. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.
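
    The equidistribution idea behind error-driven adaptation can be shown in 1D: place nodes so that every cell carries an equal share of an error-density (monitor) integral. A sketch with an assumed Gaussian monitor standing in for the interpolation-error estimate:

```python
import math

def equidistribute(monitor, a, b, n_cells, samples=2000):
    """Place grid nodes so each cell carries an equal share of the
    monitor (error-density) integral -- the 1D analogue of the
    equidistribution principle used in metric-free adaptation."""
    h = (b - a) / samples
    xs = [a + i * h for i in range(samples + 1)]
    cum = [0.0]
    for i in range(samples):                 # trapezoidal cumulative integral
        cum.append(cum[-1] + 0.5 * h * (monitor(xs[i]) + monitor(xs[i + 1])))
    total = cum[-1]
    nodes, j = [a], 0
    for k in range(1, n_cells):
        target = total * k / n_cells
        while cum[j + 1] < target:
            j += 1
        # linear interpolation inside the bracketing sample interval
        t = (target - cum[j]) / (cum[j + 1] - cum[j])
        nodes.append(xs[j] + t * h)
    nodes.append(b)
    return nodes

# cluster points where the monitor (e.g. |u''|) is large, near x = 0
grid = equidistribute(lambda x: 1.0 + 50.0 * math.exp(-50 * x * x), -1.0, 1.0, 10)
```

    The resulting grid is strongly graded: cells near the monitor's peak are far smaller than those in the flat regions.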

  13. Homology, convergence and parallelism

    PubMed Central

    Ghiselin, Michael T.

    2016-01-01

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. PMID:26598721

  14. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete- Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than by the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
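
    The time-block separation of a filter into subfilters can be checked numerically: splitting h into length-L segments and summing the delayed partial convolutions reproduces the full convolution. In the reported architectures each partial product would run through a small DFT-IDFT overlap-save stage; plain convolution stands in here:

```python
def conv(x, h):
    """Direct linear convolution (reference implementation)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def subconvolution(x, h, L):
    """Split h into length-L subfilters h_k and sum the delayed partial
    convolutions: y = sum_k delay(x * h_k, k*L). Each partial product
    could be computed independently (and in parallel) by a small
    frequency-domain stage; direct convolution is used for clarity."""
    out = [0.0] * (len(x) + len(h) - 1)
    for k in range(0, len(h), L):
        part = conv(x, h[k:k + L])
        for i, v in enumerate(part):
            out[k + i] += v                      # delay by k samples
    return out

x = [1.0, 2.0, -1.0, 0.5, 3.0]
h = [0.5, 0.25, -0.125, 1.0, 0.75, -0.5]
assert subconvolution(x, h, 2) == conv(x, h)     # decomposition is exact
```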

  15. Adapting implicit methods to parallel processors

    SciTech Connect

    Reeves, L.; McMillin, B.; Okunbor, D.; Riggins, D.

    1994-12-31

    When numerically solving many types of partial differential equations, it is advantageous to use implicit methods because of their better stability and more flexible parameter choice, (e.g. larger time steps). However, since implicit methods usually require simultaneous knowledge of the entire computational domain, these methods are difficult to implement directly on distributed memory parallel processors. This leads to infrequent use of implicit methods on parallel/distributed systems. The usual implementation of implicit methods is inefficient due to the nature of parallel systems where it is common to take the computational domain and distribute the grid points over the processors so as to maintain a relatively even workload per processor. This creates a problem at the locations in the domain where adjacent points are not on the same processor. In order for the values at these points to be calculated, messages have to be exchanged between the corresponding processors. Without special adaptation, this results in idle processors during part of the computation, and the effective speedup from using a parallel processor falls as the number of idle processors grows.
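
    The message exchange at partition boundaries can be sketched with a 1D Jacobi sweep, where ghost (halo) values model the data adjacent processors must trade before each local update. Plain lists stand in for processor ranks:

```python
def jacobi_step_distributed(subdomains, left_bc, right_bc):
    """One Jacobi relaxation sweep for u'' = 0 on a 1D grid split across
    'processors' (plain lists). The halo values crossing each partition
    boundary model the messages adjacent processors must exchange."""
    # 1. halo exchange: collect the neighbour values each rank needs
    halos = []
    for r, sub in enumerate(subdomains):
        left = subdomains[r - 1][-1] if r > 0 else left_bc
        right = subdomains[r + 1][0] if r < len(subdomains) - 1 else right_bc
        halos.append((left, right))
    # 2. local update, now fully independent per rank
    new = []
    for sub, (left, right) in zip(subdomains, halos):
        padded = [left] + sub + [right]
        new.append([(padded[i - 1] + padded[i + 1]) / 2.0
                    for i in range(1, len(padded) - 1)])
    return new

parts = [[0.0, 0.0], [0.0, 0.0]]          # 4 interior points on 2 "ranks"
for _ in range(200):                       # iterate toward the linear profile
    parts = jacobi_step_distributed(parts, 0.0, 1.0)
```

    With boundary values 0 and 1 the sweep converges to the linear profile u(x) = x, so the interior points approach 0.2, 0.4, 0.6, 0.8.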

  16. Digital parallel frequency-domain spectroscopy for tissue imaging

    NASA Astrophysics Data System (ADS)

    Arnesano, Cosimo; Santoro, Ylenia; Gratton, Enrico

    2012-09-01

    Near-infrared (NIR) (650 to 1000 nm) optical properties of turbid media can be quantified accurately and noninvasively using methods based on diffuse reflectance or transmittance, such as frequency-domain photon migration (FDPM). Conventional FDPM techniques based on white-light steady-state (SS) spectral measurements in conjunction with the acquisition of frequency-domain (FD) data at selected wavelengths using laser diodes are used to measure broadband NIR scattering-corrected absorption spectra of turbid media. These techniques are limited by the number of wavelength points used to obtain FD data and by the sweeping technique used to collect FD data over a relatively large range. We have developed a method that introduces several improvements in the acquisition of optical parameters, based on the digital parallel acquisition of a comb of frequencies and on the use of a white laser as a single light source for both FD and SS measurements. The source, due to the high brightness, allows a higher penetration depth with an extremely low power on the sample. The parallel acquisition decreases the time required by standard serial systems that scan through a range of modulation frequencies. Furthermore, all-digital acquisition removes analog noise, avoids the analog mixer, and does not create radiofrequency interference or emission.
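
    The parallel acquisition of a comb of frequencies amounts to demodulating one record at several tones at once rather than sweeping them serially. A sketch with synthetic data, assuming the comb tones fall on exact DFT bins of the record:

```python
import math

def demodulate_comb(samples, fs, freqs):
    """Digital 'parallel lock-in': recover (amplitude, phase) at every
    comb frequency from a single record, instead of sweeping a source
    through the modulation frequencies one at a time. Frequencies are
    assumed to fall on DFT bins of the record."""
    n = len(samples)
    out = {}
    for f in freqs:
        re = sum(s * math.cos(2 * math.pi * f * k / fs) for k, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * f * k / fs) for k, s in enumerate(samples))
        out[f] = (2.0 * math.hypot(re, im) / n, math.atan2(im, re))
    return out

fs, n = 1000.0, 1000
comb = [50.0, 100.0, 150.0]                      # simultaneous modulation tones
sig = [0.8 * math.cos(2 * math.pi * 50 * k / fs)
       + 0.3 * math.cos(2 * math.pi * 150 * k / fs + 0.5)
       for k in range(n)]
res = demodulate_comb(sig, fs, comb)
print(round(res[50.0][0], 3), round(res[150.0][0], 3))   # amplitudes ~0.8 and ~0.3
```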

  17. Xyce parallel electronic simulator.

    SciTech Connect

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  18. Parallel Computing in Optimization.

    DTIC Science & Technology

    1984-10-01

    include: Heller [1978] and Sameh [1977] (surveys of algorithms), Duff [1983], Fong and Jordan [1977], Jordan [1979], and Rodrigue [1982] (all mainly...constrained concave function by partition of feasible domain", Mathematics of Operations Research 8, pp. A. Sameh [1977], "Numerical parallel algorithms -- a survey", in High Speed Computer and Algorithm Organization, D. Kuck, D. Lawrie, and A. Sameh, eds., Academic Press, pp. 207-228. L. J. Siegel

  19. Development of Parallel GSSHA

    DTIC Science & Technology

    2013-09-01

    Paul R. Eller, Jing-Ru C. Cheng, Aaron R. Byrd, Charles W. Downer, and Nawa Pradhan. September 2013. Approved for public release...Program ERDC TR-13-8, September 2013: Development of Parallel GSSHA, by Paul R. Eller and Jing-Ru C. Cheng, Information Technology Laboratory, US Army Engineer...AUTHOR(S): Paul Eller, Ruth Cheng, Aaron Byrd, Chuck Downer, and Nawa Pradhan

  20. Parallel unstructured grid generation

    NASA Technical Reports Server (NTRS)

    Loehner, Rainald; Camberos, Jose; Merriam, Marshal

    1991-01-01

    A parallel unstructured grid generation algorithm is presented and implemented on the Hypercube. Different processor hierarchies are discussed, and the appropriate hierarchies for mesh generation and mesh smoothing are selected. A domain-splitting algorithm for unstructured grids which tries to minimize the surface-to-volume ratio of each subdomain is described. This splitting algorithm is employed both for grid generation and grid smoothing. Results obtained on the Hypercube demonstrate the effectiveness of the algorithms developed.

  1. Implementation of Parallel Algorithms

    DTIC Science & Technology

    1993-06-30

    their social relations or to achieve some goals. For example, we define a pair-wise force law of repulsion and attraction for a group of identical...quantization based compression schemes. Photo-refractive crystals, which provide high density recording in real time, are used as our holographic media. The...of Parallel Algorithms (J. Reif, ed.). Kluwer Academic Publishers, 1993. (4) "A Dynamic Separator Algorithm", D. Armon and J. Reif. To appear in

  2. Extendability of parallel sections in vector bundles

    NASA Astrophysics Data System (ADS)

    Kirschner, Tim

    2016-01-01

    I address the following question: Given a differentiable manifold M, what are the open subsets U of M such that, for all vector bundles E over M and all linear connections ∇ on E, any ∇-parallel section in E defined on U extends to a ∇-parallel section in E defined on M? For simply connected manifolds M (among others) I describe the entirety of all such sets U which are, in addition, the complement of a C1 submanifold, boundary allowed, of M. This delivers a partial positive answer to a problem posed by Antonio J. Di Scala and Gianni Manno (2014). Furthermore, in case M is an open submanifold of Rn, n ≥ 2, I prove that the complement of U in M, not required to be a submanifold now, can have arbitrarily large n-dimensional Lebesgue measure.

  3. Resistance to extinction after schedules of partial delay or partial reinforcement in rats with hippocampal lesions.

    PubMed

    Rawlins, J N; Feldon, J; Ursin, H; Gray, J A

    1985-01-01

    Two experimental procedures were employed to establish the reason why hippocampal lesions apparently block the development of tolerance for aversive events in partial reinforcement experiments, but do not do so in partial punishment experiments. Rats were trained to run in a straight alley following hippocampal lesions (HC), cortical control lesions (CC) or sham operations (SO), and resistance to extinction was assessed following differing acquisition conditions. In Experiment 1 a 4-8 min inter-trial interval (ITI) was used. Either every acquisition trial was rewarded immediately (Continuous Reinforcement, CR), or only a randomly selected half of the trials were immediately rewarded, the reward being delayed for thirty seconds on the other trials (Partial Delay, PD). This delay procedure produced increased resistance to extinction in rats in all lesion groups. In Experiment 2 the ITI was reduced to a few seconds, and rats were trained either on a CR schedule, or on a schedule in which only half the trials were rewarded (Partial Reinforcement, PR). This form of partial reinforcement procedure also produced increased resistance to extinction in rats in all lesion groups. It thus appears that hippocampal lesions only prevent the development of resistance to aversive events when the interval between aversive and subsequent appetitive events exceeds some minimum value.

  4. Human target acquisition performance

    NASA Astrophysics Data System (ADS)

    Teaney, Brian P.; Du Bosq, Todd W.; Reynolds, Joseph P.; Thompson, Roger; Aghera, Sameer; Moyer, Steven K.; Flug, Eric; Espinola, Richard; Hixson, Jonathan

    2012-06-01

    The battlefield has shifted from armored vehicles to armed insurgents. Target acquisition (identification, recognition, and detection) range performance involving humans as targets is vital for modern warfare. The acquisition and neutralization of armed insurgents while at the same time minimizing fratricide and civilian casualties is a mounting concern. U.S. Army RDECOM CERDEC NVESD has conducted many experiments involving human targets for infrared and reflective band sensors. The target sets include human activities, hand-held objects, uniforms & armament, and other tactically relevant targets. This paper will define a set of standard task difficulty values for identification and recognition associated with human target acquisition performance.

  5. STIS target acquisition

    NASA Technical Reports Server (NTRS)

    Kraemer, Steve; Downes, Ron; Katsanis, Rocio; Crenshaw, Mike; McGrath, Melissa; Robinson, Rich

    1997-01-01

    We describe the STIS autonomous target acquisition capabilities. We also present the results of dedicated tests executed as part of Cycle 7 calibration, following post-launch improvements to the Space Telescope Imaging Spectrograph (STIS) flight software. The residual pointing error from the acquisitions is < 0.5 CCD pixels, which is better than preflight estimates. Execution of peakups shows clear improvement of target centering for slits of width 0.1 arcsec or smaller. These results may be used by Guest Observers in planning target acquisitions for their STIS programs.

  6. Interactive knowledge acquisition tools

    NASA Technical Reports Server (NTRS)

    Dudziak, Martin J.; Feinstein, Jerald L.

    1987-01-01

    The problems of designing practical tools to aid the knowledge engineer, and the general application of such tools in performing knowledge acquisition tasks, are discussed. A particular approach was developed for the class of knowledge acquisition problem characterized by situations where acquisition and transformation of domain expertise are often bottlenecks in systems development. An explanation is given of how the tool and underlying software engineering principles can be extended to provide a flexible set of tools that allow the application specialist to build highly customized knowledge-based applications.

  7. Trajectory optimization using parallel shooting method on parallel computer

    SciTech Connect

    Wirthman, D.J.; Park, S.Y.; Vadali, S.R.

    1995-03-01

    The efficiency of a parallel shooting method on a parallel computer for solving a variety of optimal control guidance problems is studied. Several examples are considered to demonstrate that a speedup of nearly 7 to 1 is achieved with the use of 16 processors. It is suggested that further improvements in performance can be achieved by parallelizing in the state domain. 10 refs.
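
    The shooting principle underlying the method can be sketched on a scalar two-point boundary value problem (the test problem y'' = 6x here is illustrative, not from the paper). The paper's parallel variant distributes trial integrations and sub-interval segments across processors; this sketch is serial:

```python
def integrate(slope, n=100):
    """Integrate y'' = 6x from x = 0 with y(0) = 0, y'(0) = slope and
    return y(1). For this polynomial right-hand side the stepping below
    is exact; a real solver would use RK4 or similar."""
    h = 1.0 / n
    x, y, v = 0.0, 0.0, slope
    for _ in range(n):
        y += h * v + 3 * x * h * h + h ** 3   # exact increment of y over the step
        v += 3 * (2 * x * h + h * h)          # exact increment of y' over the step
        x += h
    return y

def shoot(target=1.5):
    """Secant iteration on the unknown initial slope so that y(1) = target.
    Each trial integration is independent, which is what the parallel
    shooting method exploits across processors."""
    s0, s1 = 0.0, 1.0
    f0, f1 = integrate(s0) - target, integrate(s1) - target
    for _ in range(20):
        s2 = s1 - f1 * (s1 - s0) / (f1 - f0)
        s0, f0, s1 = s1, f1, s2
        f1 = integrate(s1) - target
        if abs(f1) < 1e-12:
            break
    return s1

slope = shoot()        # exact solution y = x**3 + x/2 has y'(0) = 0.5
```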

  8. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
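
    The tables' whole-number combinations follow from the reciprocal rule 1/R = 1/R1 + 1/R2 + ..., which exact rational arithmetic makes easy to check:

```python
from fractions import Fraction

def parallel(*rs):
    """Total resistance of resistors in parallel: 1/R = sum(1/R_i).
    Fractions keep the arithmetic exact, so whole-number totals
    (the point of the tables) show up exactly."""
    return 1 / sum(Fraction(1, r) for r in rs)

# combinations whose parallel total is a whole number, as in the tables
print(parallel(6, 3))        # 1/6 + 1/3 = 1/2  -> 2 ohms
print(parallel(12, 6, 4))    # 1/12 + 1/6 + 1/4 = 1/2  -> 2 ohms
```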

  9. Status of TRANSP Parallel Services

    NASA Astrophysics Data System (ADS)

    Indireshkumar, K.; Andre, Robert; McCune, Douglas; Randerson, Lewis

    2006-10-01

    The PPPL TRANSP code suite has been used successfully over many years to carry out time dependent simulations of tokamak plasmas. However, accurately modeling certain phenomena such as RF heating and fast ion behavior using TRANSP requires extensive computational power and will benefit from parallelization. Parallelizing all of TRANSP is not required: some parts will run sequentially while other parts run in parallel. To efficiently use a site's parallel services, the parallelized TRANSP modules are deployed to a shared ``parallel service'' on a separate cluster. The PPPL Monte Carlo fast ion module NUBEAM and the MIT RF module TORIC are the first TRANSP modules to be so deployed. This poster will show the performance scaling of these modules within the parallel server. Communications between the serial client and the parallel server will be described in detail, and measurements of startup and communications overhead will be shown. Physics modeling benefits for TRANSP users will be assessed.

  10. Asynchronous interpretation of parallel microprograms

    SciTech Connect

    Bandman, O.L.

    1984-03-01

    In this article, the authors demonstrate how to pass from a given synchronous interpretation of a parallel microprogram to an equivalent asynchronous interpretation, and investigate the cost associated with the rejection of external synchronization in parallel microprogram structures.

  11. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  12. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)

  13. Acquisitions for Area Programs

    ERIC Educational Resources Information Center

    Stevens, Robert D.

    1970-01-01

    Common policies, practices, and trends in acquisitions in the complex field of area studies, including the weak structure of the book trade, the lack of bibliographic control, and current cooperative efforts. (JS)

  14. Data acquisition system

    DOEpatents

    Shapiro, Stephen L.; Mani, Sudhindra; Atlas, Eugene L.; Cords, Dieter H. W.; Holbrook, Britt

    1997-01-01

    A data acquisition circuit for a particle detection system that allows for time tagging of particles detected by the system. The particle detection system screens out background noise and discriminates between hits from scattered and unscattered particles. The detection system can also be adapted to detect a wide variety of particle types. The detection system utilizes a particle detection pixel array, each pixel containing a back-biased PIN diode, and a data acquisition pixel array. Each pixel in the particle detection pixel array is in electrical contact with a pixel in the data acquisition pixel array. In response to a particle hit, the affected PIN diodes generate a current, which is detected by the corresponding data acquisition pixels. This current is integrated to produce a voltage across a capacitor, the voltage being related to the amount of energy deposited in the pixel by the particle. The current is also used to trigger a read of the pixel hit by the particle.
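
    The integration step described above is just V = Q/C, with Q the time integral of the diode current. A sketch with illustrative (not the patent's) component values:

```python
def integrate_hit(current_samples, dt, capacitance):
    """Voltage developed on the integration capacitor after a hit:
    V = Q / C with Q the integral of the PIN-diode current over the
    pulse. Sample values and capacitance are illustrative only."""
    charge = sum(i * dt for i in current_samples)     # Q = integral of i dt
    return charge / capacitance

# a 2 uA pulse lasting 100 ns into a 0.1 pF integration capacitor
v = integrate_hit([2e-6] * 100, 1e-9, 1e-13)
print(v)   # Q = 2e-13 C, so V = Q / C is about 2.0 V
```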

  15. Documentation and knowledge acquisition

    NASA Technical Reports Server (NTRS)

    Rochowiak, Daniel; Moseley, Warren

    1990-01-01

    Traditional approaches to knowledge acquisition have focused on interviews. An alternative focuses on the documentation associated with a domain. Adopting a documentation approach provides some advantages during familiarization. A knowledge management tool was constructed to gain these advantages.

  16. Evidence for parallel elongated structures in the mesosphere

    NASA Technical Reports Server (NTRS)

    Adams, G. W.; Brosnahan, J. W.; Walden, D. C.

    1983-01-01

    The physical cause of partial reflection from the mesosphere is of interest. Data are presented from an image-forming radar at Brighton, Colorado, that suggest that some of the radar scattering is caused by parallel elongated structures lying almost directly overhead. Possible physical sources for such structures include gravity waves and roll vortices.

  17. Introduction to Acquisition Management.

    DTIC Science & Technology

    1987-12-01

    thus to avoid the logistics support and inventory management nightmares that have often resulted from poor configuration management on past programs...TITLE: Introduction to Acquisition Management (UNCLAS)

  18. FOS Target Acquisition Test

    NASA Astrophysics Data System (ADS)

    Koratkar, Anuradha

    1994-01-01

    FOS onboard target acquisition software capabilities will be verified by this test -- point source binary, point source firmware, point source peak-up, WFPC2-assisted real-time, point source peak-down, TALED-assisted binary, TALED-assisted firmware, and nth-star binary modes. The primary modes are tested three times to determine repeatability. This test is the only test that will verify mode-to-mode acquisition offsets. This test has to be conducted for both the RED and BLUE detectors.

  19. Army Acquisition Lessons Learned

    DTIC Science & Technology

    2014-10-01

    analysis on the lessons learned. Acquisition Lessons Learned Portal (ALLP) and Lessons Learned Collection: CAALL has established the ALLP as the...(PEOs) and their project offices, as well as the broader acquisition community. The primary function of the portal is to allow easy input and retrieval...downloadable form that can be completed offline and then uploaded to the portal. This allows the form to be filled out and distributed through

  20. Acquisition Support Program Overview

    DTIC Science & Technology

    2016-06-30

    Feedback from direct support and community learning improves ASP practices & SEI technologies. © 2006 by Carnegie Mellon University. ASP...the acquisition infrastructure throughout the DoD, Federal Agency and other acquirer communities...Seminars • Tailored learning via Acquisition Communities of Practice • Army, Navy, Air Force, Defense and Intel Agencies • Software Collaborator's Network

  1. The Structure of Parallel Algorithms.

    DTIC Science & Technology

    1979-08-01

    parallel architectures and parallel algorithms see [Anderson and Jensen 75, Stone 75, Kung 76, Enslow 77, Kuck 77, Ramamoorthy and Li 77, Sameh 77, Heller...the Routing Time on a Parallel Computer with a Fixed Interconnection Network, in Kuck, D. J., Lawrie, D. H., and Sameh, A. H., editors, High Speed...Letters 5(4):107-112, October 1976. [Sameh 77] Sameh, A. H. Numerical Parallel Algorithms -- A Survey. In High Speed Computer and Algorithm Organization

  2. Parallel Debugging Using Graphical Views

    DTIC Science & Technology

    1988-03-01

    Voyeur, a prototype system for creating graphical views of parallel programs, provides a cost-effective way to construct such views for any parallel programming system. We illustrate Voyeur by discussing four views created for debugging Poker programs. One is a general trace facility for any Poker...Graphical views are essential for debugging parallel programs because of the large quantity of state information contained in parallel programs. Voyeur

  3. Parallel Pascal - An extended Pascal for parallel computers

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

  4. Parallel Pascal - An extended Pascal for parallel computers

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

  5. Massively Parallel Genetics.

    PubMed

    Shendure, Jay; Fields, Stanley

    2016-06-01

    Human genetics has historically depended on the identification of individuals whose natural genetic variation underlies an observable trait or disease risk. Here we argue that new technologies now augment this historical approach by allowing the use of massively parallel assays in model systems to measure the functional effects of genetic variation in many human genes. These studies will help establish the disease risk of both observed and potential genetic variants and to overcome the problem of "variants of uncertain significance." Copyright © 2016 by the Genetics Society of America.

  6. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to check out the plug-ins, this program performs the checkouts in parallel. After the feature is parsed, a checkout request is issued for each plug-in in the feature. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to check out now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. The Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code.
It can be applied to any
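
The fan-out described above (parse the feature description, then hand each plug-in to a thread pool) can be sketched as follows; the feature XML shape, plug-in ids, and the checkout stub are hypothetical stand-ins, not PEPC's actual format:

```python
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor

# Hypothetical feature description; real Eclipse feature.xml differs.
FEATURE_XML = """<feature id="example.feature">
  <plugin id="example.plugin.core"/>
  <plugin id="example.plugin.ui"/>
  <plugin id="example.plugin.net"/>
</feature>"""

def checkout(plugin_id):
    # Stand-in for a real SCM checkout; returns a status string so the
    # fan-out/fan-in pattern is visible without network access.
    return f"checked out {plugin_id}"

def parallel_checkout(feature_xml, workers=4):
    # Digest the feature, then issue one checkout request per plug-in
    # to a thread pool with a configurable number of threads.
    plugins = [p.get("id") for p in ET.fromstring(feature_xml).iter("plugin")]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(checkout, plugins))

results = parallel_checkout(FEATURE_XML)
```

Because `pool.map` preserves input order, results line up with the plug-in list even though the checkouts complete in arbitrary order.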

  7. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to scientific computation. Both multiple instruction, multiple data stream (MIMD) and single instruction, multiple data stream (SIMD) designs have produced strong results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

  8. Roo: A parallel theorem prover

    SciTech Connect

    Lusk, E.L.; McCune, W.W.; Slaney, J.K.

    1991-11-01

    We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

  9. CSM parallel structural methods research

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  10. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution in which one directly executes the application code but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  11. Benchmarking massively parallel architectures

    SciTech Connect

    Lubeck, O.; Moore, J.; Simmons, M.; Wasserman, H.

    1993-01-01

    The purpose of this paper is to summarize some initial experiences related to measuring the performance of massively parallel processors (MPPs) at Los Alamos National Laboratory (LANL). Actually, the range of MPP architectures the authors have used is rather limited, being confined mostly to the Thinking Machines Corporation (TMC) Connection Machine CM-2 and CM-5. Some very preliminary work has been carried out on the Kendall Square KSR-1, and efforts related to other machines, such as the Intel Paragon and the soon-to-be-released CRAY T3D, are planned. This paper will concentrate more on methodology than on specific architectural strengths and weaknesses; the latter is expected to be the subject of future reports. MPP benchmarking is a field in critical need of structure and definition. As the authors have stated previously, such machines have enormous potential, and there is certainly a dire need for orders-of-magnitude increases in computational power over current supercomputers. However, performance reports for MPPs must emphasize actual sustainable performance from real applications in a careful, responsible manner. Such has not always been the case. A recent paper has described in some detail the problem of potentially misleading performance reporting in the parallel scientific computing field. Thus, in this paper, the authors briefly offer a few general ideas on MPP performance analysis.

  12. Parallelizing quantum circuit synthesis

    NASA Astrophysics Data System (ADS)

    Di Matteo, Olivia; Mosca, Michele

    2016-03-01

    Quantum circuit synthesis is the process in which an arbitrary unitary operation is decomposed into a sequence of gates from a universal set, typically one which a quantum computer can implement both efficiently and fault-tolerantly. As physical implementations of quantum computers improve, the need is growing for tools that can effectively synthesize components of the circuits and algorithms they will run. Existing algorithms for exact, multi-qubit circuit synthesis scale exponentially in the number of qubits and circuit depth, leaving synthesis intractable for circuits on more than a handful of qubits. Even modest improvements in circuit synthesis procedures may lead to significant advances, pushing forward the boundaries of not only the size of solvable circuit synthesis problems, but also in what can be realized physically as a result of having more efficient circuits. We present a method for quantum circuit synthesis using deterministic walks. Also termed pseudorandom walks, these are walks in which once a starting point is chosen, its path is completely determined. We apply our method to construct a parallel framework for circuit synthesis, and implement one such version performing optimal T-count synthesis over the Clifford+T gate set. We use our software to present examples where parallelization offers a significant speedup on the runtime, as well as directly confirm that the 4-qubit 1-bit full adder has optimal T-count 7 and T-depth 3.

  13. Parallel Eigenvalue extraction

    NASA Technical Reports Server (NTRS)

    Akl, Fred A.

    1989-01-01

    A new numerical algorithm for the solution of large-order eigenproblems typically encountered in linear elastic finite element systems is presented. The architecture of parallel processing is utilized in the algorithm to achieve increased speed and efficiency of calculations. The algorithm is based on the frontal technique for the solution of linear simultaneous equations and the modified subspace eigenanalysis method for the solution of the eigenproblem. Assembly, elimination and back-substitution of degrees of freedom are performed concurrently, using a number of fronts. All fronts converge to and diverge from a predefined global front during elimination and back-substitution, respectively. In the meantime, reduction of the stiffness and mass matrices required by the modified subspace method can be completed during the convergence/divergence cycle and an estimate of the required eigenpairs obtained. Successive cycles of convergence and divergence are repeated until the desired accuracy of calculations is achieved. The advantages of this new algorithm in parallel computer architecture are discussed.

  14. Massively Parallel QCD

    SciTech Connect

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

  15. Parallel ptychographic reconstruction

    PubMed Central

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-01-01

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and the detector geometry, rather than by an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source. PMID:25607174

  16. Parallel ptychographic reconstruction

    SciTech Connect

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-12-19

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and the detector geometry, rather than by an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source.

  17. Applied Parallel Metadata Indexing

    SciTech Connect

    Jacobi, Michael R

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, the author developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, only stores records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.

  18. Partial Torus Instability

    NASA Astrophysics Data System (ADS)

    Olmedo, Oscar; Zhang, J.

    2010-05-01

    Flux ropes are now generally accepted to be the magnetic configuration of coronal mass ejections (CMEs), which may be formed prior to or during solar eruptions. In this study, we model the flux rope as a current-carrying partial torus loop with its two footpoints anchored in the photosphere, and investigate its stability in the context of the torus instability (TI). Previous studies on TI have focused on the configuration of a circular torus and revealed the existence of a critical decay index. Our study reveals that the critical index is a function of the fractional number of the partial torus, defined by the ratio between the arc length of the partial torus above the photosphere and the circumference of a circular torus of equal radius. We refer to this finding as the partial torus instability (PTI). It is found that a partial torus with a smaller fractional number has a smaller critical index, thus requiring a more gradually decreasing magnetic field to stabilize the flux rope. On the other hand, a partial torus with a larger fractional number has a larger critical index. In the limit of a circular torus, when the fractional number approaches one, the critical index goes to a maximum value that depends on the distribution of the external magnetic field. We demonstrate that the partial torus instability helps us to understand the confinement, growth, and eventual eruption of a flux-rope CME.
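
For context (standard in the torus-instability literature, though not restated in this abstract), the decay index measures how fast the external constraining field falls off with the torus major radius:

```latex
n = -\frac{d \ln B_{\mathrm{ex}}}{d \ln R},
\qquad \text{instability when } n > n_{\mathrm{cr}} .
```

Earlier circular-torus analyses place the critical value near $n_{\mathrm{cr}} \approx 3/2$ for a thin full torus; the record above generalizes $n_{\mathrm{cr}}$ to depend on the fractional number of the partial torus.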

  19. A systolic array parallelizing compiler

    SciTech Connect

Tseng, P.S.

    1990-01-01

    This book presents a completely new approach to the problem of systolic array parallelizing compilers. It describes the AL parallelizing compiler for the Warp systolic array, the first working systolic array parallelizing compiler, which can generate efficient parallel code for complete LINPACK routines. The book begins by analyzing the architectural strengths of the Warp systolic array. It proposes a model for mapping programs onto the machine and introduces the notion of data relations for optimizing the program mapping. Also presented are successful applications of the AL compiler in matrix computation and image processing. A complete listing of the source program and the compiler-generated parallel code is given to clarify the overall picture of the compiler. The book concludes that a systolic array parallelizing compiler can produce efficient parallel code, almost identical to what the user would have written by hand.

  20. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D; Williams, Mark L; Bowman, Stephen M

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  1. Twisted partially pure spinors

    NASA Astrophysics Data System (ADS)

    Herrera, Rafael; Tellez, Ivan

    2016-08-01

    Motivated by the relationship between orthogonal complex structures and pure spinors, we define twisted partially pure spinors in order to give a spinorial characterization of subspaces of Euclidean space endowed with a complex structure.

  2. Parallel acoustic delay lines for photoacoustic tomography

    PubMed Central

    Yapici, Murat Kaya; Kim, Chulhong; Chang, Cheng-Chung; Jeon, Mansik; Guo, Zijian; Cai, Xin

    2012-01-01

    Achieving real-time photoacoustic (PA) tomography typically requires multi-element ultrasound transducer arrays and their associated multiple data acquisition (DAQ) electronics to receive PA waves simultaneously. We report the first demonstration of a photoacoustic tomography (PAT) system using optical fiber-based parallel acoustic delay lines (PADLs). By employing PADLs to introduce specific time delays, the PA signals (on the order of a few microseconds) can be forced to arrive at the ultrasonic transducers at different times. As a result, time-delayed PA signals in multiple channels can ultimately be received and processed in a serial manner with a single-element transducer, followed by single-channel DAQ electronics. Our results show that an optically absorbing target in an optically scattering medium can be photoacoustically imaged using the newly developed PADL-based PAT system. Potentially, this approach could be adopted to significantly reduce the complexity and cost of ultrasonic array receiver systems. PMID:23139043
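
A toy numeric sketch of the delay-and-serialize idea (all numbers arbitrary; a real PADL delays acoustic waves in optical fiber, not arrays): each channel's pulse is shifted by a channel-specific delay, the channels are merged into one stream, and each signal is then recovered from its own time window.

```python
import numpy as np

pulse = np.ones(10)            # stand-in for a PA pulse
n_ch, delay_step = 4, 50       # delay spacing chosen longer than the pulse

# Delay channel ch by ch * delay_step samples, then merge into one stream,
# mimicking a single-element transducer receiving all channels serially.
stream = np.zeros(n_ch * delay_step + len(pulse))
for ch in range(n_ch):
    start = ch * delay_step
    stream[start:start + len(pulse)] += (ch + 1) * pulse  # amplitude tags the channel

# Single-channel readout: demultiplex by time window.
recovered = [stream[ch * delay_step: ch * delay_step + len(pulse)].max()
             for ch in range(n_ch)]
```

Because the delay spacing exceeds the pulse length, the windows do not overlap and each channel's amplitude is recovered exactly.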

  3. Parallel Polarization State Generation

    NASA Astrophysics Data System (ADS)

    She, Alan; Capasso, Federico

    2016-05-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
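
A minimal numeric sketch of the sum-of-matrices idea, under the assumption that the output state is a complex-weighted superposition of fixed basis polarization components (the basis choice and weights here are illustrative, not the authors' implementation):

```python
import numpy as np

# Fixed basis polarization components as Jones vectors (hypothetical choice).
H = np.array([1.0, 0.0], dtype=complex)              # horizontal
V = np.array([0.0, 1.0], dtype=complex)              # vertical
D = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)  # diagonal
basis = [H, V, D]

def generate_sop(weights):
    # Output state = weighted SUM of components (parallel architecture),
    # rather than a PRODUCT of transformation matrices (serial architecture).
    out = sum(w * s for w, s in zip(weights, basis))
    return out / np.linalg.norm(out)

# Equal H and V amplitudes with a 90-degree relative phase: circular light.
sop = generate_sop([1.0, 1.0j, 0.0])
```

Sweeping the complex weights, which in the experiment corresponds to modulating the spatially separated components before beam combination, traces out arbitrary states of polarization.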

  4. Parallel Polarization State Generation.

    PubMed

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

  5. Toward Parallel Document Clustering

    SciTech Connect

    Mogill, Jace A.; Haglin, David J.

    2011-09-01

    A key challenge to automated clustering of documents in large text corpora is the high cost of comparing documents in a multimillion-dimensional document space. The Anchors Hierarchy is a fast data structure and algorithm for localizing data based on a distance metric that obeys the triangle inequality; the algorithm strives to minimize the number of distance calculations needed to cluster the documents into “anchors” around reference documents called “pivots”. We extend the original algorithm to increase the amount of available parallelism and consider two implementations: a complex data structure which affords efficient searching, and a simple data structure which requires repeated sorting. The sorting implementation is integrated with a text corpora “Bag of Words” program, and initial performance results of an end-to-end document-processing workflow are reported.
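
A simplified sketch of the pivot idea (not the full Anchors Hierarchy): precomputed inter-pivot distances plus the triangle inequality let some candidate pivots be rejected without ever computing their distance to the point. All names are illustrative.

```python
import math

def dist(u, v):
    return math.dist(u, v)

def assign_to_pivots(points, pivots):
    # Inter-pivot distances, computed once and reused for pruning.
    D = [[dist(p, q) for q in pivots] for p in pivots]
    labels, skipped = [], 0
    for x in points:
        best, best_d = 0, dist(x, pivots[0])
        for j in range(1, len(pivots)):
            # Triangle inequality: d(x, j) >= D[best][j] - d(x, best),
            # so if D[best][j] >= 2 * best_d, pivot j cannot be closer.
            if D[best][j] >= 2 * best_d:
                skipped += 1
                continue
            dj = dist(x, pivots[j])
            if dj < best_d:
                best, best_d = j, dj
        labels.append(best)
    return labels, skipped
```

For document clustering the points would be high-dimensional term vectors and the savings come from the skipped distance evaluations, which this sketch counts explicitly.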

  6. Parallel tridiagonal equation solvers

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1974-01-01

    Three parallel algorithms were compared for the direct solution of tridiagonal linear systems of equations. The algorithms are suitable for computers such as ILLIAC 4 and CDC STAR. For array computers similar to ILLIAC 4, cyclic odd-even reduction has the least operation count for highly structured sets of equations, and recursive doubling has the least count for relatively unstructured sets of equations. Since the difference in operation counts for these two algorithms is not substantial, their relative running times may be more related to overhead operations, which are not measured in this paper. The third algorithm, based on Buneman's Poisson solver, has more arithmetic operations than the others, and appears to be the least favorable. For pipeline computers similar to CDC STAR, cyclic odd-even reduction appears to be the most preferable algorithm for all cases.
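
As a concrete reference for one of the compared methods, here is a serial sketch of cyclic odd-even reduction for a tridiagonal system (restricted to n = 2**k - 1 unknowns for simplicity; a parallel version evaluates all eliminations at a given stride concurrently):

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    # Solve a tridiagonal system by cyclic odd-even reduction.
    # a: sub-diagonal (a[0] ignored), b: diagonal,
    # c: super-diagonal (c[-1] ignored), d: right-hand side.
    # This simple sketch requires n = 2**k - 1 unknowns.
    a, b, c, d = (np.array(v, dtype=float) for v in (a, b, c, d))
    n = len(b)
    a[0] = c[-1] = 0.0
    s = 1
    while s < n:                      # forward reduction
        for i in range(2 * s - 1, n, 2 * s):
            al = -a[i] / b[i - s]     # eliminate x[i-s]
            be = -c[i] / b[i + s]     # eliminate x[i+s]
            b[i] += al * c[i - s] + be * a[i + s]
            d[i] += al * d[i - s] + be * d[i + s]
            a[i] = al * a[i - s]      # equation now couples x[i-2s]
            c[i] = be * c[i + s]      # and x[i+2s]
        s *= 2
    x = np.zeros(n)
    s = (n + 1) // 2
    while s >= 1:                     # back substitution
        for i in range(s - 1, n, 2 * s):
            xl = x[i - s] if i - s >= 0 else 0.0
            xr = x[i + s] if i + s < n else 0.0
            x[i] = (d[i] - a[i] * xl - c[i] * xr) / b[i]
        s //= 2
    return x
```

Every iteration of the inner loop at a given stride is independent of the others, which is exactly the parallelism the operation counts in the abstract assume for array machines.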

  7. Parallel Polarization State Generation

    PubMed Central

    She, Alan; Capasso, Federico

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security. PMID:27184813

  8. Partially coherent nonparaxial beams.

    PubMed

    Duan, Kailiang; Lü, Baida

    2004-04-15

    The concept of a partially coherent nonparaxial beam is proposed. A closed-form expression for the propagation of nonparaxial Gaussian Schell-model (GSM) beams in free space is derived and applied to study the propagation properties of nonparaxial GSM beams. It is shown that for partially coherent nonparaxial beams a new parameter f(sigma) has to be introduced, which, together with the parameter f, determines the beam nonparaxiality.

  9. Parallel imaging microfluidic cytometer.

    PubMed

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take.

  10. PARTIAL TORUS INSTABILITY

    SciTech Connect

    Olmedo, Oscar; Zhang Jie

    2010-07-20

    Flux ropes are now generally accepted to be the magnetic configuration of coronal mass ejections (CMEs), which may be formed prior to or during solar eruptions. In this study, we model the flux rope as a current-carrying partial torus loop with its two footpoints anchored in the photosphere, and investigate its stability in the context of the torus instability (TI). Previous studies on TI have focused on the configuration of a circular torus and revealed the existence of a critical decay index of the overlying constraining magnetic field. Our study reveals that the critical index is a function of the fractional number of the partial torus, defined by the ratio between the arc length of the partial torus above the photosphere and the circumference of a circular torus of equal radius. We refer to this finding as the partial torus instability (PTI). It is found that a partial torus with a smaller fractional number has a smaller critical index, thus requiring a more gradually decreasing magnetic field to stabilize the flux rope. On the other hand, a partial torus with a larger fractional number has a larger critical index. In the limit of a circular torus when the fractional number approaches 1, the critical index goes to a maximum value. We demonstrate that the PTI helps us to understand the confinement, growth, and eventual eruption of a flux-rope CME.

  11. Partial Torus Instability

    NASA Astrophysics Data System (ADS)

    Olmedo, Oscar; Zhang, Jie

    2010-07-01

    Flux ropes are now generally accepted to be the magnetic configuration of coronal mass ejections (CMEs), which may be formed prior to or during solar eruptions. In this study, we model the flux rope as a current-carrying partial torus loop with its two footpoints anchored in the photosphere, and investigate its stability in the context of the torus instability (TI). Previous studies on TI have focused on the configuration of a circular torus and revealed the existence of a critical decay index of the overlying constraining magnetic field. Our study reveals that the critical index is a function of the fractional number of the partial torus, defined by the ratio between the arc length of the partial torus above the photosphere and the circumference of a circular torus of equal radius. We refer to this finding as the partial torus instability (PTI). It is found that a partial torus with a smaller fractional number has a smaller critical index, thus requiring a more gradually decreasing magnetic field to stabilize the flux rope. On the other hand, a partial torus with a larger fractional number has a larger critical index. In the limit of a circular torus when the fractional number approaches 1, the critical index goes to a maximum value. We demonstrate that the PTI helps us to understand the confinement, growth, and eventual eruption of a flux-rope CME.

  12. A parallel programming environment supporting multiple data-parallel modules

    SciTech Connect

Seevers, B.K.; Quinn, M.J.; Hatcher, P.J.

    1992-10-01

We describe a system that allows programmers to take advantage of both control and data parallelism through multiple intercommunicating data-parallel modules. This programming environment extends C-type stream I/O to include intermodule communication channels. The programmer writes each module as a separate data-parallel program, then develops a channel linker specification describing how to connect the modules together. A channel linker we have developed loads the separate modules on the parallel machine and binds the communication channels together as specified. We present performance data demonstrating that a mixed control- and data-parallel solution can yield better performance than a strictly data-parallel solution. The system described currently runs on the Intel iWarp multicomputer.
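The module-plus-channel-linker structure can be sketched in miniature. The module and channel names below are illustrative, and queues stand in for the system's stream-I/O channels; this is not the paper's C-based environment:

```python
import threading, queue

# Sketch of the idea: each "module" is a separate routine, and a channel
# linker spec names the channels; the linker binds them with queues and
# starts the modules (threads here stand in for parallel modules).

def producer(out_ch):
    for x in range(5):
        out_ch.put(x * x)        # module 1: upstream data-parallel stage
    out_ch.put(None)             # end-of-stream marker

def consumer(in_ch, results):
    while (item := in_ch.get()) is not None:
        results.append(item + 1) # module 2: downstream stage

def link_and_run(spec):
    """Minimal 'channel linker': one queue per channel named in the spec,
    each module started with its bound channels."""
    channels = {name: queue.Queue() for name in spec["channels"]}
    results = []
    threads = [
        threading.Thread(target=producer, args=(channels["c0"],)),
        threading.Thread(target=consumer, args=(channels["c0"], results)),
    ]
    for t in threads: t.start()
    for t in threads: t.join()
    return results

print(link_and_run({"channels": ["c0"]}))  # [1, 2, 5, 10, 17]
```

Because the two stages run concurrently, the consumer overlaps with the producer, which is the source of the mixed control- and data-parallel speedup the abstract reports.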

  13. The acquisition of strategic knowledge

    SciTech Connect

    Gruber, T.R.

    1989-01-01

This research focuses on the problem of acquiring strategic knowledge: knowledge used by an agent to decide what action to perform next. Strategic knowledge is especially difficult to acquire from experts by conventional methods, and it is typically implemented with low-level primitives by a knowledge engineer. This dissertation presents a method for partially automating the acquisition of strategic knowledge from experts. The method consists of a representation for strategic knowledge, a technique for eliciting strategy from experts, and a learning procedure for transforming the information elicited from experts into operational and general form. The knowledge representation is formulated as strategy rules that associate strategic situations with equivalence classes of appropriate actions. The elicitation technique is based on a language of justifications with which the expert explains why a knowledge system should have chosen a particular action in a specific strategic situation. The learning procedure generates strategy rules from expert justifications in training cases, and generalizes newly-formed rules using syntactic induction operators. The knowledge acquisition procedure is embodied in an interactive program called ASK, which actively elicits justifications and new terms from the expert and generates operational strategy rules. ASK has been used by physicians to extend the strategic knowledge for a chest pain diagnosis application and by knowledge engineers to build a general strategy for the task of prospective diagnosis. A major conclusion is that expressive power can be traded for acquirability. By restricting the form of the representation of strategic knowledge, ASK can present a comprehensible knowledge elicitation medium to the expert and employ well-understood syntactic generalization operations to learn from the expert's explanations.
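The strategy-rule representation can be sketched as situation predicates mapped to equivalence classes of actions. The predicates and action names below are invented for illustration and are not ASK's actual rule syntax:

```python
# Hedged sketch of strategy rules in the spirit of ASK: each rule pairs a
# strategic-situation predicate with an equivalence class of appropriate
# next actions. Feature names and actions here are hypothetical.

strategy_rules = [
    (lambda s: s["pain"] == "chest" and not s["ecg_done"], {"order_ecg"}),
    (lambda s: s["ecg_done"] and s["ecg_abnormal"],        {"order_enzymes", "admit"}),
]

def next_actions(situation):
    """Union of the action classes whose situation predicate matches."""
    actions = set()
    for predicate, action_class in strategy_rules:
        if predicate(situation):
            actions |= action_class
    return actions

case = {"pain": "chest", "ecg_done": False, "ecg_abnormal": False}
print(next_actions(case))  # {'order_ecg'}
```

Restricting rules to this situation-to-action-class form is what makes the representation both elicitable from experts and amenable to syntactic generalization, per the abstract's expressiveness-for-acquirability trade-off.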

  14. Combinatorial parallel and scientific computing.

    SciTech Connect

    Pinar, Ali; Hendrickson, Bruce Alan

    2005-04-01

    Combinatorial algorithms have long played a pivotal enabling role in many applications of parallel computing. Graph algorithms in particular arise in load balancing, scheduling, mapping and many other aspects of the parallelization of irregular applications. These are still active research areas, mostly due to evolving computational techniques and rapidly changing computational platforms. But the relationship between parallel computing and discrete algorithms is much richer than the mere use of graph algorithms to support the parallelization of traditional scientific computations. Important, emerging areas of science are fundamentally discrete, and they are increasingly reliant on the power of parallel computing. Examples include computational biology, scientific data mining, and network analysis. These applications are changing the relationship between discrete algorithms and parallel computing. In addition to their traditional role as enablers of high performance, combinatorial algorithms are now customers for parallel computing. New parallelization techniques for combinatorial algorithms need to be developed to support these nontraditional scientific approaches. This chapter will describe some of the many areas of intersection between discrete algorithms and parallel scientific computing. Due to space limitations, this chapter is not a comprehensive survey, but rather an introduction to a diverse set of techniques and applications with a particular emphasis on work presented at the Eleventh SIAM Conference on Parallel Processing for Scientific Computing. Some topics highly relevant to this chapter (e.g. load balancing) are addressed elsewhere in this book, and so we will not discuss them here.

  15. Reconfigurable Embedded System for Electrocardiogram Acquisition.

    PubMed

    Kay, Marcel Seiji; Iaione, Fábio

    2015-01-01

Smartphones include features that offer the chance to develop mobile systems in the medical field, resulting in an area called mobile-health. One of the most common medical examinations is the electrocardiogram (ECG), which allows the diagnosis of various heart diseases, leading to preventative measures and preventing more serious problems. The objective of this study was to develop a wireless reconfigurable embedded system using an FPAA (Field Programmable Analog Array) for the acquisition of ECG signals, and an application showing and storing these signals on Android smartphones. The application also performs the partial FPAA reconfiguration in real time (adjustable gain). Previous studies using FPAAs usually use the development boards provided by the manufacturer (high cost), do not allow reconfiguration in real time, use no smartphone, and communicate via cables. The parameters tested in the acquisition circuit and the quality of the ECGs registered in an individual were satisfactory.

  16. On Shaft Data Acquisition System (OSDAS)

    NASA Technical Reports Server (NTRS)

    Pedings, Marc; DeHart, Shawn; Formby, Jason; Naumann, Charles

    2012-01-01

    On Shaft Data Acquisition System (OSDAS) is a rugged, compact, multiple-channel data acquisition computer system that is designed to record data from instrumentation while operating under extreme rotational centrifugal or gravitational acceleration forces. This system, which was developed for the Heritage Fuel Air Turbine Test (HFATT) program, addresses the problem of recording multiple channels of high-sample-rate data on most any rotating test article by mounting the entire acquisition computer onboard with the turbine test article. With the limited availability of slip ring wires for power and communication, OSDAS utilizes its own resources to provide independent power and amplification for each instrument. Since OSDAS utilizes standard PC technology as well as shared code interfaces with the next-generation, real-time health monitoring system (SPARTAA Scalable Parallel Architecture for Real Time Analysis and Acquisition), this system could be expanded beyond its current capabilities, such as providing advanced health monitoring capabilities for the test article. High-conductor-count slip rings are expensive to purchase and maintain, yet only provide a limited number of conductors for routing instrumentation off the article and to a stationary data acquisition system. In addition to being limited to a small number of instruments, slip rings are prone to wear quickly, and introduce noise and other undesirable characteristics to the signal data. This led to the development of a system capable of recording high-density instrumentation, at high sample rates, on the test article itself, all while under extreme rotational stress. OSDAS is a fully functional PC-based system with 48 channels of 24-bit, high-sample-rate input channels, phase synchronized, with an onboard storage capacity of over 1/2-terabyte of solid-state storage. This recording system takes a novel approach to the problem of recording multiple channels of instrumentation, integrated with the test

  17. A simple noniterative principal component technique for rapid noise reduction in parallel MR images.

    PubMed

    Patel, Anand S; Duan, Qi; Robson, Philip M; McKenzie, Charles A; Sodickson, Daniel K

    2012-01-01

    The utilization of parallel imaging permits increased MR acquisition speed and efficiency; however, parallel MRI usually leads to a deterioration in the signal-to-noise ratio when compared with otherwise equivalent unaccelerated acquisitions. At high accelerations, the parallel image reconstruction matrix tends to become dominated by one principal component. This has been utilized to enable substantial reductions in g-factor-related noise. A previously published technique achieved noise reductions via a computationally intensive search for multiples of the dominant singular vector which, when subtracted from the image, minimized joint entropy between the accelerated image and a reference image. We describe a simple algorithm that can accomplish similar results without a time-consuming search. Significant reductions in g-factor-related noise were achieved using this new algorithm with in vivo acquisitions at 1.5 T with an eight-element array. Copyright © 2011 John Wiley & Sons, Ltd.
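The key step the abstract describes, removing the single dominant principal component, can be illustrated with a toy matrix (random data, not MR images). The noise amplitude and sizes here are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration of the idea: when reconstruction noise is dominated by
# one principal component, subtracting the image's projection onto the
# dominant singular vector removes most of it.
clean = rng.normal(size=(64, 64))                        # stand-in "true" image
v = rng.normal(size=64); v /= np.linalg.norm(v)          # dominant noise direction
noisy = clean + 30.0 * np.outer(rng.normal(size=64), v)  # one strong noise component

U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
denoised = noisy - s[0] * np.outer(U[:, 0], Vt[0])       # drop the top component

err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(denoised - clean)
print(err_after < 0.5 * err_before)  # True: most structured noise removed
```

The abstract's contribution is precisely that this direct subtraction works without the earlier technique's expensive joint-entropy search for the right multiple of the dominant singular vector.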

  18. 2000-fold parallelized dual-color STED fluorescence nanoscopy.

    PubMed

    Bergermann, Fabian; Alber, Lucas; Sahl, Steffen J; Engelhardt, Johann; Hell, Stefan W

    2015-01-12

    Stimulated Emission Depletion (STED) nanoscopy enables multi-color fluorescence imaging at the nanometer scale. Its typical single-point scanning implementation can lead to long acquisition times. In order to unleash the full spatiotemporal resolution potential of STED nanoscopy, parallelized scanning is mandatory. Here we present a dual-color STED nanoscope utilizing two orthogonally crossed standing light waves as a fluorescence switch-off pattern, and providing a resolving power down to 30 nm. We demonstrate the imaging capabilities in a biological context for immunostained vimentin fibers in a circular field of view of 20 µm diameter at 2000-fold parallelization (i.e. 2000 "intensity minima"). The technical feasibility of massively parallelizing STED without significant compromises in resolution heralds video-rate STED nanoscopy of large fields of view, pending the availability of suitable high-speed detectors.

  19. Rapid code acquisition algorithms employing PN matched filters

    NASA Technical Reports Server (NTRS)

    Su, Yu T.

    1988-01-01

The performance of four algorithms using pseudonoise matched filters (PNMFs), for direct-sequence spread-spectrum systems, is analyzed. They are: parallel search with fixed-dwell detector (PL-FDD), parallel search with sequential detector (PL-SD), parallel-serial search with fixed-dwell detector (PS-FDD), and parallel-serial search with sequential detector (PS-SD). The operating characteristic for each detector and the mean acquisition time for each algorithm are derived. All the algorithms are studied in conjunction with the noncoherent integration technique, which enables the system to operate in the presence of data modulation. Several previous proposals using PNMF are seen as special cases of the present algorithms.
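The matched-filter front end common to all four algorithms can be sketched as a parallel correlation over every code phase. This is an illustrative toy (a random bipolar sequence standing in for a PN code, arbitrary noise level), not the paper's detector structures:

```python
import numpy as np

rng = np.random.default_rng(1)

# A PN matched filter correlates the received signal against the known
# spreading code at every code phase in parallel; acquisition declares the
# phase with the peak output.
N = 127
code = rng.choice([-1.0, 1.0], size=N)                       # stand-in PN sequence
true_offset = 42
rx = np.roll(code, true_offset) + 0.5 * rng.normal(size=N)   # delayed code + noise

# Circular correlation via FFT = a bank of matched filters, one per phase.
corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code))).real
detected = int(np.argmax(corr))
print(detected == true_offset)  # peak occurs at the true code phase
```

The fixed-dwell versus sequential detectors in the paper differ in how this correlator output is thresholded over time, not in the correlation itself.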

  20. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  1. Parallel processor engine model program

    NASA Technical Reports Server (NTRS)

    Mclaughlin, P.

    1984-01-01

    The Parallel Processor Engine Model Program is a generalized engineering tool intended to aid in the design of parallel processing real-time simulations of turbofan engines. It is written in the FORTRAN programming language and executes as a subset of the SOAPP simulation system. Input/output and execution control are provided by SOAPP; however, the analysis, emulation and simulation functions are completely self-contained. A framework in which a wide variety of parallel processing architectures could be evaluated and tools with which the parallel implementation of a real-time simulation technique could be assessed are provided.

  2. Trajectories in parallel optics.

    PubMed

    Klapp, Iftach; Sochen, Nir; Mendlovic, David

    2011-10-01

    In our previous work we showed the ability to improve the optical system's matrix condition by optical design, thereby improving its robustness to noise. It was shown that by using singular value decomposition, a target point-spread function (PSF) matrix can be defined for an auxiliary optical system, which works parallel to the original system to achieve such an improvement. In this paper, after briefly introducing the all optics implementation of the auxiliary system, we show a method to decompose the target PSF matrix. This is done through a series of shifted responses of auxiliary optics (named trajectories), where a complicated hardware filter is replaced by postprocessing. This process manipulates the pixel confined PSF response of simple auxiliary optics, which in turn creates an auxiliary system with the required PSF matrix. This method is simulated on two space variant systems and reduces their system condition number from 18,598 to 197 and from 87,640 to 5.75, respectively. We perform a study of the latter result and show significant improvement in image restoration performance, in comparison to a system without auxiliary optics and to other previously suggested hybrid solutions. Image restoration results show that in a range of low signal-to-noise ratio values, the trajectories method gives a significant advantage over alternative approaches. A third space invariant study case is explored only briefly, and we present a significant improvement in the matrix condition number from 1.9160e+013 to 34,526.
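The figure of merit in the abstract is the condition number of the system's PSF matrix, with the auxiliary optics contributing a parallel response. The tiny diagonal matrices and the simple additive model below are assumptions for illustration, not the paper's systems:

```python
import numpy as np

# Ill-conditioned "main system" PSF matrix: one near-singular mode makes
# restoration noise-sensitive.
A = np.diag([1.0, 1.0, 1e-4])
print(round(np.linalg.cond(A)))        # 10000

# An auxiliary response acting in parallel (modeled here as additive)
# boosts the weak mode and collapses the condition number.
B = np.diag([0.0, 0.0, 0.5])
print(round(np.linalg.cond(A + B), 1)) # 2.0
```

The orders-of-magnitude drops reported in the abstract (e.g. 18,598 to 197) are of exactly this kind: the auxiliary system strengthens the modes that dominate the condition number.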

  3. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

    Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, that can collect observations at hundreds of bands, have been operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension Reduction is a spectral transformation, aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is the Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configuration; one with fast Ethernet switch and the other with a Myrinet interconnection. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high- performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.
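A serial reference for the PCA dimension reduction described above can be sketched as follows; on the Beowulf clusters in the report, the covariance accumulation would be distributed across nodes, whereas this toy runs in one process with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Rows are pixels, columns are spectral bands (toy data: variance is
# deliberately concentrated in the first k bands).
pixels, bands, k = 500, 64, 3
X = rng.normal(size=(pixels, bands))
X[:, :k] *= 10.0

Xc = X - X.mean(axis=0)                 # center each band
cov = Xc.T @ Xc / (pixels - 1)          # bands x bands covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
reduced = Xc @ eigvecs[:, order[:k]]    # project onto the top-k components

var_ratio = eigvals[order[:k]].sum() / eigvals.sum()
print(reduced.shape)                    # (500, 3)
print(var_ratio > 0.7)                  # top-k components carry most variance
```

The communication-heavy step when parallelizing is the global reduction that forms `cov`, which is consistent with the report's finding that PCA scales poorly on Ethernet but well on Myrinet.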

  5. Competition in Defense Acquisitions

    DTIC Science & Technology

    2009-02-01

    responsibility increasingly required to be shared by both sectors. Operating at the nexus of public and private interests, the Center researches...RAND, Santa Monica, CA. 42. Scherer, Frederic M. 1964 The Weapons Acquisition Process, Economic Incentives. Boston: Harvard University. 43

  6. Data Acquisition Backend

    SciTech Connect

    Britton Jr., Charles L.; Ezell, N. Dianne Bull; Roberts, Michael

    2013-10-01

    This document is intended to summarize the development and testing of the data acquisition module portion of the Johnson Noise Thermometry (JNT) system developed at ORNL. The proposed system has been presented in an earlier report [1]. A more extensive project background including the project rationale is available in the initial project report [2].

  7. Telecommunications and data acquisition

    NASA Technical Reports Server (NTRS)

    Renzetti, N. A. (Editor)

    1981-01-01

    Deep Space Network progress in flight project support, tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations is reported. In addition, developments in Earth based radio technology as applied to geodynamics, astrophysics, and the radio search for extraterrestrial intelligence are reported.

  8. Acquisition of Comparison Constructions

    ERIC Educational Resources Information Center

    Hohaus, Vera; Tiemann, Sonja; Beck, Sigrid

    2014-01-01

    This article presents a study on the time course of the acquisition of comparison constructions. The order in which comparison constructions (comparatives, measure phrases, superlatives, degree questions, etc.) show up in English- and German-learning children's spontaneous speech is quite fixed. It is shown to be insufficiently determined by…

  9. Acquisition at DISA

    DTIC Science & Technology

    2008-08-08

    CoreNet Global Service Mgmt RACE SME-PED Senior Decision Authorities (7)Component Acquisition Executive Command & Control Capabilities ( C2C ) SATCOM...Martin Gross (VCAE) Brig Gen Hoene, USAF ( C2C ) Sherrie Balko (Acting) (STS) Mark Orndorff (IAN) Becky Harris (GES) Program Executive Offices (4

  10. Merger and acquisition medicine.

    PubMed

    Powell, G S

    1997-01-01

    This discussion of the ramifications of corporate mergers and acquisitions for employees recognizes that employee adaptation to the change can be a long and complex process. The author describes a role the occupational physician can take in helping to minimize the potential adverse health impact of major organizational change.

  12. Surviving mergers & acquisitions.

    PubMed

    Dixon, Diane L

    2002-01-01

    Mergers and acquisitions are never easy to implement. The health care landscape is a minefield of failed mergers and uneasy alliances generating great turmoil and pain. But some mergers have been successful, creating health systems that benefit the communities they serve. Five prominent leaders offer their advice on minimizing the difficulties of M&As.

  13. Acquisitions List No. 43.

    ERIC Educational Resources Information Center

Planned Parenthood--World Population, New York, NY. Katharine Dexter McCormick Library.

    The "Acquisitions List" of demographic books and articles is issued every two months by the Katharine Dexter McCormick Library. Divided into two parts, the first contains a list of books most recently acquired by the Library, each one annotated and also marked with the Library call number. The second part consists of a list of annotated articles,…

  14. Acquisitions List No. 42.

    ERIC Educational Resources Information Center

Planned Parenthood--World Population, New York, NY. Katharine Dexter McCormick Library.

    The "Acquisitions List" of demographic books and articles is issued every two months by the Katharine Dexter McCormick Library. Divided into two parts, the first contains a list of books most recently acquired by the Library, each one annotated and also marked with the Library call number. The second part consists of a list of annotated articles,…

  15. Image Acquisition Context

    PubMed Central

    Bidgood, W. Dean; Bray, Bruce; Brown, Nicolas; Mori, Angelo Rossi; Spackman, Kent A.; Golichowski, Alan; Jones, Robert H.; Korman, Louis; Dove, Brent; Hildebrand, Lloyd; Berg, Michael

    1999-01-01

    Objective: To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. Design: The authors introduce the notion of “image acquisition context,” the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. Methods: The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. Results: The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries. PMID:9925229

  16. [Acquisition of arithmetic knowledge].

    PubMed

    Fayol, Michel

    2008-01-01

The focus of this paper is on contemporary research on the number, counting, and arithmetic competencies that emerge during infancy, the preschool years, and elementary school. I provide a brief overview of the evolution of children's conceptual knowledge of arithmetic, the acquisition and use of counting, and how they solve simple arithmetic problems (e.g. 4 + 3).

  17. Coordinating Council. Seventh Meeting: Acquisitions

    NASA Technical Reports Server (NTRS)

    1992-01-01

The theme for this NASA Scientific and Technical Information Program Coordinating Council meeting was Acquisitions. In addition to NASA and the NASA Center for AeroSpace Information (CASI) presentations, the report contains fairly lengthy visuals about acquisitions at the Defense Technical Information Center. CASI's acquisitions program and CASI's proactive acquisitions activity were described. There was a presentation on the document evaluation process at CASI. A talk about open literature scope and coverage at the American Institute of Aeronautics and Astronautics was also given. An overview of the STI Program's Acquisitions Experts Committee was given next. Finally, acquisition initiatives of the NASA STI program were presented.

  18. Parallel Programming in the Age of Ubiquitous Parallelism

    NASA Astrophysics Data System (ADS)

    Pingali, Keshav

    2014-04-01

Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs.
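The operator formulation can be sketched with a worklist algorithm: an operator is repeatedly applied at "active nodes" until none remain. The sequential sketch below illustrates the structure only; Galois itself would execute independent activities speculatively in parallel:

```python
from collections import deque

# Tiny directed graph for illustration (adjacency lists).
graph = {0: [1, 2], 1: [3], 2: [3], 3: []}

def worklist_bfs_levels(g, source):
    """BFS levels computed in the worklist style of amorphous
    data-parallelism: apply a relaxation operator at each active node;
    nodes whose labels change become active in turn."""
    level = {n: float("inf") for n in g}
    level[source] = 0
    work = deque([source])
    while work:
        u = work.popleft()                 # pick any active node
        for v in g[u]:                     # operator: relax out-neighbors
            if level[u] + 1 < level[v]:
                level[v] = level[u] + 1
                work.append(v)             # v is now active
    return level

print(worklist_bfs_levels(graph, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```

Activities touching disjoint neighborhoods commute, which is the amorphous data-parallelism a runtime can exploit.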

  19. Methanol partial oxidation reformer

    DOEpatents

    Ahmed, Shabbir; Kumar, Romesh; Krumpelt, Michael

    1999-01-01

    A partial oxidation reformer comprising a longitudinally extending chamber having a methanol, water and an air inlet and an outlet. An igniter mechanism is near the inlets for igniting a mixture of methanol and air, while a partial oxidation catalyst in the chamber is spaced from the inlets and converts methanol and oxygen to carbon dioxide and hydrogen. Controlling the oxygen to methanol mole ratio provides continuous slightly exothermic partial oxidation reactions of methanol and air producing hydrogen gas. The liquid is preferably injected in droplets having diameters less than 100 micrometers. The reformer is useful in a propulsion system for a vehicle which supplies a hydrogen-containing gas to the negative electrode of a fuel cell.
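The mole-ratio control described in the patent abstract can be illustrated with the ideal limiting stoichiometries, CH3OH + 1/2 O2 -> CO2 + 2 H2 (partial oxidation) and CH3OH + 3/2 O2 -> CO2 + 2 H2O (full combustion). The linear blend between the two limits, with complete conversion, is a simplifying assumption made here for illustration only:

```python
# Hydrogen yield (mol H2 per mol CH3OH) versus the O2:CH3OH feed ratio,
# interpolating linearly between the partial-oxidation limit (ratio 0.5,
# 2 mol H2) and the full-combustion limit (ratio 1.5, 0 mol H2).

def h2_yield(o2_ratio):
    """H2 mol per mol CH3OH for 0.5 <= o2_ratio <= 1.5 (idealized blend)."""
    assert 0.5 <= o2_ratio <= 1.5
    frac_combustion = (o2_ratio - 0.5) / 1.0   # 0 at pure POX, 1 at combustion
    return 2.0 * (1.0 - frac_combustion)

print(h2_yield(0.5))  # 2.0: pure partial oxidation, maximum hydrogen
print(h2_yield(1.5))  # 0.0: full combustion, no hydrogen
```

Running slightly above the 0.5 ratio, as the abstract's "slightly exothermic" operation implies, sacrifices a little hydrogen to supply the heat that keeps the reactor self-sustaining.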

  20. Methanol partial oxidation reformer

    DOEpatents

    Ahmed, S.; Kumar, R.; Krumpelt, M.

    1999-08-17

    A partial oxidation reformer is described comprising a longitudinally extending chamber having a methanol, water and an air inlet and an outlet. An igniter mechanism is near the inlets for igniting a mixture of methanol and air, while a partial oxidation catalyst in the chamber is spaced from the inlets and converts methanol and oxygen to carbon dioxide and hydrogen. Controlling the oxygen to methanol mole ratio provides continuous slightly exothermic partial oxidation reactions of methanol and air producing hydrogen gas. The liquid is preferably injected in droplets having diameters less than 100 micrometers. The reformer is useful in a propulsion system for a vehicle which supplies a hydrogen-containing gas to the negative electrode of a fuel cell. 7 figs.

  1. Methanol partial oxidation reformer

    DOEpatents

    Ahmed, S.; Kumar, R.; Krumpelt, M.

    1999-08-24

    A partial oxidation reformer is described comprising a longitudinally extending chamber having a methanol, water and an air inlet and an outlet. An igniter mechanism is near the inlets for igniting a mixture of methanol and air, while a partial oxidation catalyst in the chamber is spaced from the inlets and converts methanol and oxygen to carbon dioxide and hydrogen. Controlling the oxygen to methanol mole ratio provides continuous slightly exothermic partial oxidation reactions of methanol and air producing hydrogen gas. The liquid is preferably injected in droplets having diameters less than 100 micrometers. The reformer is useful in a propulsion system for a vehicle which supplies a hydrogen-containing gas to the negative electrode of a fuel cell. 7 figs.

  2. Methanol partial oxidation reformer

    DOEpatents

    Ahmed, Shabbir; Kumar, Romesh; Krumpelt, Michael

    2001-01-01

    A partial oxidation reformer comprising a longitudinally extending chamber having a methanol, water and an air inlet and an outlet. An igniter mechanism is near the inlets for igniting a mixture of methanol and air, while a partial oxidation catalyst in the chamber is spaced from the inlets and converts methanol and oxygen to carbon dioxide and hydrogen. Controlling the oxygen to methanol mole ratio provides continuous slightly exothermic partial oxidation reactions of methanol and air producing hydrogen gas. The liquid is preferably injected in droplets having diameters less than 100 micrometers. The reformer is useful in a propulsion system for a vehicle which supplies a hydrogen-containing gas to the negative electrode of a fuel cell.

  3. Oxygen partial pressure sensor

    DOEpatents

    Dees, Dennis W.

    1994-01-01

    A method for detecting oxygen partial pressure and an oxygen partial pressure sensor are provided. The method for measuring oxygen partial pressure includes contacting oxygen to a solid oxide electrolyte and measuring the subsequent change in electrical conductivity of the solid oxide electrolyte. A solid oxide electrolyte is utilized that contacts both a porous electrode and a nonporous electrode. The electrical conductivity of the solid oxide electrolyte is affected when oxygen from an exhaust stream permeates through the porous electrode to establish an equilibrium of oxygen anions in the electrolyte, thereby displacing electrons throughout the electrolyte to form an electron gradient. By adapting the two electrodes to sense a voltage potential between them, the change in electrolyte conductivity due to oxygen presence can be measured.

  4. Oxygen partial pressure sensor

    DOEpatents

    Dees, D.W.

    1994-09-06

    A method for detecting oxygen partial pressure and an oxygen partial pressure sensor are provided. The method for measuring oxygen partial pressure includes contacting oxygen to a solid oxide electrolyte and measuring the subsequent change in electrical conductivity of the solid oxide electrolyte. A solid oxide electrolyte is utilized that contacts both a porous electrode and a nonporous electrode. The electrical conductivity of the solid oxide electrolyte is affected when oxygen from an exhaust stream permeates through the porous electrode to establish an equilibrium of oxygen anions in the electrolyte, thereby displacing electrons throughout the electrolyte to form an electron gradient. By adapting the two electrodes to sense a voltage potential between them, the change in electrolyte conductivity due to oxygen presence can be measured. 1 fig.

  5. Methanol partial oxidation reformer

    DOEpatents

    Ahmed, Shabbir; Kumar, Romesh; Krumpelt, Michael

    1999-01-01

    A partial oxidation reformer comprising a longitudinally extending chamber having a methanol, water and an air inlet and an outlet. An igniter mechanism is near the inlets for igniting a mixture of methanol and air, while a partial oxidation catalyst in the chamber is spaced from the inlets and converts methanol and oxygen to carbon dioxide and hydrogen. Controlling the oxygen to methanol mole ratio provides continuous slightly exothermic partial oxidation reactions of methanol and air producing hydrogen gas. The liquid is preferably injected in droplets having diameters less than 100 micrometers. The reformer is useful in a propulsion system for a vehicle which supplies a hydrogen-containing gas to the negative electrode of a fuel cell.

  6. 78 FR 37164 - Land Acquisitions: Appeals of Land Acquisition Decisions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-20

    ...; Docket ID: BIA-2013-0005] RIN 1076-AF15 Land Acquisitions: Appeals of Land Acquisition Decisions AGENCY... applications to acquire land in trust under 25 CFR part 151. This document makes corrections to the ADDRESSES...

  7. Partial Arc Curvilinear Direct Drive Servomotor

    NASA Technical Reports Server (NTRS)

    Sun, Xiuhong (Inventor)

    2014-01-01

A partial arc servomotor assembly having a curvilinear U-channel with two parallel rare earth permanent magnet plates facing each other and a pivoted ironless three-phase coil armature winding that moves between the plates. An encoder read head is fixed to a mounting plate above the coil armature winding and a curvilinear encoder scale is curved to be coaxial with the curvilinear U-channel permanent magnet track formed by the permanent magnet plates. Driven by a set of miniaturized power electronics devices closely looped with a positioning feedback encoder, the angular position and velocity of the pivoted payload is programmable and precisely controlled.

  8. Partially strong WW scattering

    SciTech Connect

    Cheung Kingman; Chiang Chengwei; Yuan Tzuchiang

    2008-09-01

    What if only a light Higgs boson is discovered at the CERN LHC? Conventional wisdom tells us that the scattering of longitudinal weak gauge bosons would not grow strong at high energies. However, this is generally not true. In some composite models or general two-Higgs-doublet models, the presence of a light Higgs boson does not guarantee complete unitarization of the WW scattering. After partial unitarization by the light Higgs boson, the WW scattering becomes strongly interacting until it hits one or more heavier Higgs bosons or other strong dynamics. We analyze how LHC experiments can reveal this interesting possibility of partially strong WW scattering.

  9. A transputer-based list mode parallel system for digital radiography with 2D silicon detectors

    SciTech Connect

Conti, M.; Russo, P.; Scarlatella, A. (Dipt. di Scienze Fisiche and INFN); Del Guerra, A. (Dipt. di Fisica and INFN); Mazzeo, A.; Mazzocca, N.; Russo, S. (Dipt. di Informatica e Sistemistica)

    1993-08-01

The authors believe that a dedicated parallel computer system can represent an effective and flexible approach to the problem of list mode acquisition and reconstruction of digital radiographic images obtained with a double-sided silicon microstrip detector. They present a Transputer-based implementation of a parallel system for the data acquisition and image reconstruction from a silicon crystal with 200 μm read-out pitch. They are currently developing a prototype of the system connected to a detector with a 10 mm² sensitive area.

  10. Partially orthogonal resonators for magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Chacon-Caldera, Jorge; Malzacher, Matthias; Schad, Lothar R.

    2017-02-01

    Resonators for signal reception in magnetic resonance are traditionally planar to restrict coil material and avoid coil losses. Here, we present a novel concept to model resonators partially in a plane with maximum sensitivity to the magnetic resonance signal and partially in an orthogonal plane with reduced signal sensitivity. Thus, properties of individual elements in coil arrays can be modified to optimize physical planar space and increase the sensitivity of the overall array. A particular case of the concept is implemented to decrease H-field destructive interferences in planar concentric in-phase arrays. An increase in signal to noise ratio of approximately 20% was achieved with two resonators placed over approximately the same planar area compared to common approaches at a target depth of 10 cm at 3 Tesla. Improved parallel imaging performance of this configuration is also demonstrated. The concept can be further used to increase coil density.

  11. Partially orthogonal resonators for magnetic resonance imaging

    PubMed Central

    Chacon-Caldera, Jorge; Malzacher, Matthias; Schad, Lothar R.

    2017-01-01

    Resonators for signal reception in magnetic resonance are traditionally planar to restrict coil material and avoid coil losses. Here, we present a novel concept to model resonators partially in a plane with maximum sensitivity to the magnetic resonance signal and partially in an orthogonal plane with reduced signal sensitivity. Thus, properties of individual elements in coil arrays can be modified to optimize physical planar space and increase the sensitivity of the overall array. A particular case of the concept is implemented to decrease H-field destructive interferences in planar concentric in-phase arrays. An increase in signal to noise ratio of approximately 20% was achieved with two resonators placed over approximately the same planar area compared to common approaches at a target depth of 10 cm at 3 Tesla. Improved parallel imaging performance of this configuration is also demonstrated. The concept can be further used to increase coil density. PMID:28186135

  12. Partially orthogonal resonators for magnetic resonance imaging.

    PubMed

    Chacon-Caldera, Jorge; Malzacher, Matthias; Schad, Lothar R

    2017-02-10

    Resonators for signal reception in magnetic resonance are traditionally planar to restrict coil material and avoid coil losses. Here, we present a novel concept to model resonators partially in a plane with maximum sensitivity to the magnetic resonance signal and partially in an orthogonal plane with reduced signal sensitivity. Thus, properties of individual elements in coil arrays can be modified to optimize physical planar space and increase the sensitivity of the overall array. A particular case of the concept is implemented to decrease H-field destructive interferences in planar concentric in-phase arrays. An increase in signal to noise ratio of approximately 20% was achieved with two resonators placed over approximately the same planar area compared to common approaches at a target depth of 10 cm at 3 Tesla. Improved parallel imaging performance of this configuration is also demonstrated. The concept can be further used to increase coil density.

  13. 33. Perimeter acquisition radar building room #320, perimeter acquisition radar ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

33. Perimeter acquisition radar building room #320, perimeter acquisition radar operations center (PAROC), contains the tactical command and control group equipment required to control the PAR site. Showing spacetrack monitor console - Stanley R. Mickelsen Safeguard Complex, Perimeter Acquisition Radar Building, Limited Access Area, between Limited Access Patrol Road & Service Road A, Nekoma, Cavalier County, ND

  14. 48 CFR 352.234-4 - Partial earned value management system.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... management system. 352.234-4 Section 352.234-4 Federal Acquisition Regulations System HEALTH AND HUMAN....234-4 Partial earned value management system. As prescribed in 334.203-70(d), the Contracting Officer shall insert the following clause: Partial Earned Value Management System (October 2008) (a)...

  15. Parallel Computational Protein Design

    PubMed Central

    Zhou, Yichao; Donald, Bruce R.; Zeng, Jianyang

    2016-01-01

Computational structure-based protein design (CSPD) is an important problem in computational biology, which aims to design or improve a prescribed protein function based on a protein structure template. It provides a practical tool for real-world protein engineering applications. A popular CSPD method that guarantees to find the global minimum energy solution (GMEC) is to combine both dead-end elimination (DEE) and A* tree search algorithms. However, in this framework, the A* search algorithm can run in exponential time in the worst case, which may become the computation bottleneck of the large-scale computational protein design process. To address this issue, we extend and add a new module to the OSPREY program that was previously developed in the Donald lab [1] to implement a GPU-based massively parallel A* algorithm for improving the protein design pipeline. By exploiting the modern GPU computational framework and optimizing the computation of the heuristic function for A* search, our new program, called gOSPREY, can provide speedups of up to four orders of magnitude in large protein design cases with a small memory overhead compared with the traditional A* search algorithm implementation, while still guaranteeing the optimality. In addition, gOSPREY can be configured to run in a bounded-memory mode to tackle problems in which the conformation space is too large and the global optimal solution could not previously be computed. Furthermore, the GPU-based A* algorithm implemented in the gOSPREY program can be combined with state-of-the-art rotamer pruning algorithms such as iMinDEE [2] and DEEPer [3] to also consider continuous backbone and side-chain flexibility. PMID:27914056
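
The DEE/A* framework this record builds on can be illustrated with a toy best-first search over rotamer assignments. The energies and the sum-of-minima lower bound below are invented stand-ins, not OSPREY's energy model; the sketch only shows why an admissible heuristic makes the first fully expanded conformation the GMEC:

```python
import heapq

# Toy energy model: choose one rotamer per position; total energy is the
# sum of per-position self-energies (hypothetical values, pairwise terms
# omitted for brevity). A* expands partial assignments, using the sum of
# per-position minima over unassigned positions as an admissible heuristic.
SELF_E = [
    [1.0, 0.2, 0.5],
    [0.7, 0.9, 0.1],
    [0.3, 0.6, 0.4],
]

def lower_bound(n_assigned):
    """Admissible heuristic: best possible energy of the remaining positions."""
    return sum(min(row) for row in SELF_E[n_assigned:])

def astar_gmec():
    pq = [(lower_bound(0), ())]          # entries: (g + h, partial assignment)
    while pq:
        f, assign = heapq.heappop(pq)
        if len(assign) == len(SELF_E):   # full conformation reached:
            return f, assign             # first full pop is provably optimal
        pos = len(assign)
        for r, e in enumerate(SELF_E[pos]):
            g = sum(SELF_E[i][a] for i, a in enumerate(assign)) + e
            h = lower_bound(pos + 1)
            heapq.heappush(pq, (g + h, assign + (r,)))

print(astar_gmec())
```

The GPU parallelization described in the abstract would batch the heuristic evaluations for many queue entries at once; here they run serially.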

  16. Net radiation method for enclosure systems involving partially transparent walls

    NASA Technical Reports Server (NTRS)

    Siegel, R.

    1973-01-01

    The net radiation method is developed for analyzing radiation heat transfer in enclosure systems involving partially transparent walls. One such system is an enclosure with windows in it. The conventional net radiation method was developed for enclosures having opaque walls. If a partially transparent wall is present, it will permit radiation to enter and leave the enclosure. The net radiation equations are developed here for gray and semigray enclosures with one or more windows. Another system of interest, such as in a flat plate solar collector, consists of a series of parallel transparent layers. The transmission characteristics of such window systems are obtained by the net radiation method, and the technique appears to be more convenient than the ray tracing method which has been used in the past. Relations are developed for windows consisting of any number of parallel layers having differing absorption coefficients and differing surface reflectivities, and for systems composed of parallel transmitting layers and opaque plates.
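
The opaque-wall baseline the abstract extends can be written in the standard textbook form; the transparent-wall case below is a schematic sketch in generic notation, not necessarily the paper's exact formulation:

```latex
% Net radiation balance for surface k of an N-surface gray enclosure
% (textbook form). J_k = radiosity, G_k = irradiation, F_{k-j} = view factor.
q_k = J_k - G_k, \qquad
G_k = \sum_{j=1}^{N} F_{k-j}\, J_j
% Opaque gray wall: \rho_k = 1 - \epsilon_k, so
J_k = \epsilon_k \sigma T_k^4 + (1 - \epsilon_k)\, G_k
% Partially transparent wall (window): \epsilon_k + \rho_k + \tau_k = 1,
% and externally incident radiation G_{k,\mathrm{ext}} is transmitted in:
J_k = \epsilon_k \sigma T_k^4 + \rho_k\, G_k + \tau_k\, G_{k,\mathrm{ext}}
```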

  17. Sequential and Parallel Matrix Computations.

    DTIC Science & Technology

    1985-11-01

Theory" published by the American Math Society. (C) Jointly with A. Sameh of the University of Illinois, a parallel algorithm for the single-input pole...an M.Sc. thesis at Northern Illinois University by Ava Chun, and the results were compared with the parallel Q-R algorithm of Sameh and Kuck and the

  18. Parallel pseudospectral domain decomposition techniques

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Hirsh, Richard S.

    1988-01-01

    The influence of interface boundary conditions on the ability to parallelize pseudospectral multidomain algorithms is investigated. Using the properties of spectral expansions, a novel parallel two domain procedure is generalized to an arbitrary number of domains each of which can be solved on a separate processor. This interface boundary condition considerably simplifies influence matrix techniques.

  19. Parallel pseudospectral domain decomposition techniques

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Hirsch, Richard S.

    1989-01-01

    The influence of interface boundary conditions on the ability to parallelize pseudospectral multidomain algorithms is investigated. Using the properties of spectral expansions, a novel parallel two domain procedure is generalized to an arbitrary number of domains each of which can be solved on a separate processor. This interface boundary condition considerably simplifies influence matrix techniques.

  20. A Parallel Particle Swarm Optimizer

    DTIC Science & Technology

    2003-01-01

by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based...concurrent computation. The parallelization of the Particle Swarm Optimization (PSO) algorithm is detailed and its performance and characteristics demonstrated for the biomechanical system identification problem as an example.
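
The synchronous parallel pattern the abstract describes can be sketched as follows. The swarm parameters and the sphere objective are stand-ins for the expensive biomechanical cost function; threads substitute here for the separate processors the paper would use:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def fitness(x):
    # Cheap stand-in objective (sphere function); in the paper's setting
    # each evaluation is an expensive biomechanical simulation.
    return sum(xi * xi for xi in x)

def pso(dim=2, n_particles=8, iters=60, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    with ThreadPoolExecutor() as ex:
        pbest_f = list(ex.map(fitness, pbest))
        for _ in range(iters):
            gbest = pbest[min(range(n_particles), key=lambda i: pbest_f[i])]
            for i in range(n_particles):
                for d in range(dim):
                    vel[i][d] = (0.7 * vel[i][d]
                                 + 1.4 * rng.random() * (pbest[i][d] - pos[i][d])
                                 + 1.4 * rng.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
            # Synchronization barrier: evaluate the whole swarm concurrently,
            # then update personal bests before the next generation.
            fs = list(ex.map(fitness, pos))
            for i, f in enumerate(fs):
                if f < pbest_f[i]:
                    pbest_f[i], pbest[i] = f, pos[i][:]
    return min(pbest_f)

print(pso())
```

Because every generation waits for all fitness evaluations, speedup scales with the cost of a single evaluation, which is exactly the regime the abstract targets.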

  1. The Acquisition of Serial Publications

    ERIC Educational Resources Information Center

    Huff, William H.

    1970-01-01

    Results of a questionnaire survey of large research libraries on procedures and special problems in serials acquisitions. Cooperative programs are discussed, including the Latin American Cooperative Acquisitions Program and the National Serials Data Program. (JS)

  2. Partial gravity habitat study

    NASA Technical Reports Server (NTRS)

    Capps, Stephen; Lorandos, Jason; Akhidime, Eval; Bunch, Michael; Lund, Denise; Moore, Nathan; Murakawa, Kiosuke

    1989-01-01

The purpose of this study is to investigate comprehensive design requirements associated with designing habitats for humans in a partial gravity environment, then to apply them to a lunar base design. Other potential sites for application include planetary surfaces such as Mars, variable-gravity research facilities, and a rotating spacecraft. Design requirements for partial gravity environments include locomotion changes in less than normal earth gravity; facility design issues, such as interior configuration, module diameter, and geometry; and volumetric requirements based on the previous as well as psychological issues involved in prolonged isolation. For application to a lunar base, it is necessary to study the exterior architecture and configuration to ensure optimum circulation patterns while providing dual egress; radiation protection issues are addressed to provide a safe and healthy environment for the crew; and finally, the overall site is studied to locate all associated facilities in context with the habitat. Mission planning is not the purpose of this study; therefore, a Lockheed scenario is used as an outline for the lunar base application, which is then modified to meet the project needs. The goal of this report is to formulate facts on human reactions to partial gravity environments, derive design requirements based on these facts, and apply the requirements to a partial gravity situation which, for this study, was a lunar base.

  3. Swallowed partial dentures

    PubMed Central

    Hashmi, Syed; Walter, John; Smith, Wendy; Latis, Sergios

    2004-01-01

    Swallowed or inhaled partial dentures can present a diagnostic challenge. Three new cases are described, one of them near-fatal because of vascular erosion and haemorrhage. The published work points to the importance of good design and proper maintenance. The key to early recognition is awareness of the hazard by denture-wearers, carers and clinicians. PMID:14749401

  4. Dilemmas of partial cooperation.

    PubMed

    Stark, Hans-Ulrich

    2010-08-01

    Related to the often applied cooperation models of social dilemmas, we deal with scenarios in which defection dominates cooperation, but an intermediate fraction of cooperators, that is, "partial cooperation," would maximize the overall performance of a group of individuals. Of course, such a solution comes at the expense of cooperators that do not profit from the overall maximum. However, because there are mechanisms accounting for mutual benefits after repeated interactions or through evolutionary mechanisms, such situations can constitute "dilemmas" of partial cooperation. Among the 12 ordinally distinct, symmetrical 2 x 2 games, three (barely considered) variants are correspondents of such dilemmas. Whereas some previous studies investigated particular instances of such games, we here provide the unifying framework and concisely relate it to the broad literature on cooperation in social dilemmas. Complementing our argumentation, we study the evolution of partial cooperation by deriving the respective conditions under which coexistence of cooperators and defectors, that is, partial cooperation, can be a stable outcome of evolutionary dynamics in these scenarios. Finally, we discuss the relevance of such models for research on the large biodiversity and variation in cooperative efforts both in biological and social systems.
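
The stable-coexistence outcome the abstract derives can be illustrated with replicator dynamics on a hypothetical 2 x 2 payoff matrix (the payoff values below are invented, chosen only so that defection invades cooperators and vice versa):

```python
# Replicator dynamics for a symmetric 2x2 game. A cooperator earns R
# against a cooperator and S against a defector; a defector earns T and P.
# With T > R and S > P, neither pure strategy is stable and the dynamics
# settle at an interior mixture: stable partial cooperation.
R, S, T, P = 3.0, 1.5, 5.0, 1.0   # hypothetical payoffs with T > R, S > P

def step(x, dt=0.01):
    """One Euler step of dx/dt = x (f_C - f_bar) for cooperator fraction x."""
    fc = R * x + S * (1 - x)      # cooperator payoff in the mixed population
    fd = T * x + P * (1 - x)      # defector payoff
    fbar = x * fc + (1 - x) * fd
    return x + dt * x * (fc - fbar)

x = 0.9                            # start from a mostly cooperative population
for _ in range(20_000):
    x = step(x)

x_star = (S - P) / ((S - P) + (T - R))   # analytic interior equilibrium
print(round(x, 3), round(x_star, 3))
```

The simulated trajectory converges to the analytic mixture x* = (S-P)/((S-P)+(T-R)), i.e. partial cooperation as a stable outcome.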

  5. Partial polarizer filter

    NASA Technical Reports Server (NTRS)

    Title, A. M. (Inventor)

    1978-01-01

A birefringent filter module comprises, in seriatim: (1) an entrance polarizer, (2) a first birefringent crystal responsive to optical energy exiting the entrance polarizer, (3) a partial polarizer responsive to optical energy exiting the first crystal, (4) a second birefringent crystal responsive to optical energy exiting the partial polarizer, and (5) an exit polarizer. The first and second birefringent crystals have fast axes disposed ±45° from the high-transmissivity direction of the partial polarizer. Preferably, the second crystal has a length 1/2 that of the first crystal and the transmissivity of the partial polarizer in its high-transmissivity direction is nine times that in its low-transmissivity direction. To provide tuning, the polarizations of the energy entering the first crystal and leaving the second crystal are varied by either rotating the entrance and exit polarizers, or by sandwiching the entrance and exit polarizers between pairs of half-wave plates that are rotated relative to the polarizers. A plurality of the filter modules may be cascaded.

  6. Partial wave analysis using graphics processing units

    NASA Astrophysics Data System (ADS)

    Berger, Niklaus; Beijiang, Liu; Jike, Wang

    2010-04-01

    Partial wave analysis is an important tool for determining resonance properties in hadron spectroscopy. For large data samples however, the un-binned likelihood fits employed are computationally very expensive. At the Beijing Spectrometer (BES) III experiment, an increase in statistics compared to earlier experiments of up to two orders of magnitude is expected. In order to allow for a timely analysis of these datasets, additional computing power with short turnover times has to be made available. It turns out that graphics processing units (GPUs) originally developed for 3D computer games have an architecture of massively parallel single instruction multiple data floating point units that is almost ideally suited for the algorithms employed in partial wave analysis. We have implemented a framework for tensor manipulation and partial wave fits called GPUPWA. The user writes a program in pure C++ whilst the GPUPWA classes handle computations on the GPU, memory transfers, caching and other technical details. In conjunction with a recent graphics processor, the framework provides a speed-up of the partial wave fit by more than two orders of magnitude compared to legacy FORTRAN code.
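
The data-parallel pattern the abstract exploits is that an un-binned likelihood is a sum over independent events. A vectorized NumPy sketch of that core loop is below; the complex amplitudes are random stand-ins, not a physical partial-wave model, and `nll` is a hypothetical name:

```python
import numpy as np

# Per-event intensity |sum_r c_r A_r(event)|^2 evaluated for all events at
# once -- the same single-instruction-multiple-data pattern a GPU framework
# like GPUPWA maps onto its massively parallel floating point units.
rng = np.random.default_rng(0)
n_events, n_waves = 10_000, 3
amps = (rng.normal(size=(n_events, n_waves))
        + 1j * rng.normal(size=(n_events, n_waves)))   # stand-in amplitudes

def nll(coeffs):
    """Negative log-likelihood, vectorized over every event simultaneously."""
    intensity = np.abs(amps @ coeffs) ** 2      # one fused pass, no event loop
    return -np.sum(np.log(intensity / intensity.mean()))

print(nll(np.array([1.0 + 0j, 0.5, 0.2])))
```

A fitter then minimizes `nll` over the complex coefficients; only this event-parallel inner evaluation needs to live on the GPU.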

  7. Parallel contingency statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2009-09-01

This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting optimal parallel speed-up as the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.
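
The parallel-scalability property such an engine relies on is that contingency tables merge by addition. A minimal map-reduce sketch (in Python rather than the report's C++, with invented data):

```python
from collections import Counter
from functools import reduce

# Each "processor" tabulates its shard of (x, y) observations independently;
# the per-shard contingency tables then merge by simple addition, which is
# what makes the tabulation phase embarrassingly parallel.
data = [("a", 0), ("a", 1), ("b", 0), ("a", 0), ("b", 1), ("b", 1)]
shards = [data[:3], data[3:]]                  # pretend two processors

local = [Counter(shard) for shard in shards]   # map: local contingency tables
table = reduce(lambda c1, c2: c1 + c2, local)  # reduce: pairwise merge

print(table[("a", 0)], table[("b", 1)])        # -> 2 2
```

The speed-up limit the report discusses comes from the later stages: derived statistics over the merged table grow with the number of distinct (x, y) pairs, not with the event count.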

  8. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  9. UC Merced NMR Instrumentation Acquisition

    DTIC Science & Technology

    2015-06-18

For the UC Merced NMR Instrumentation Acquisition proposal, a new 400 MHz and an upgraded 500 MHz NMR have been delivered, installed, and incorporated into research and two lab courses. While no results from these instruments have been

  10. Managing Radical Change in Acquisition

    DTIC Science & Technology

    1998-01-01

some process innovation, acquisition continues to plague the Defense System and constrain battlefield mobility, information, and speed. Following the...Mark Nissen is Assistant Professor of Information Systems and Acquisition Management at the Naval Postgraduate School (NPS) in Monterey, CA. He

  11. First Language Acquisition and Teaching

    ERIC Educational Resources Information Center

    Cruz-Ferreira, Madalena

    2011-01-01

    "First language acquisition" commonly means the acquisition of a single language in childhood, regardless of the number of languages in a child's natural environment. Language acquisition is variously viewed as predetermined, wondrous, a source of concern, and as developing through formal processes. "First language teaching" concerns schooling in…

  12. Locative Terms and Warlpiri Acquisition.

    ERIC Educational Resources Information Center

    Bavin, Edith L.

    1990-01-01

    Focuses on the influence of language specific properties in the acquisition of locative expressions. Some of the claims from literature on the acquisition of locative expressions are discussed and data from the acquisition of Warlpiri are presented and discussed in terms of these claims. (Author/CB)

  13. Microform Developments Related to Acquisitions

    ERIC Educational Resources Information Center

    Sullivan, Robert C.

    1973-01-01

    Libraries are spending an increasing amount of money on acquisitions and an expanding portion of these expenditures is for microforms. It behooves the acquisitions librarian to be aware of changes and developments in microforms and to interpret what effect they will have on acquisitions. (50 references) (Author/SJ)

  14. Automatic carrier acquisition system

    NASA Technical Reports Server (NTRS)

    Bunce, R. C. (Inventor)

    1973-01-01

    An automatic carrier acquisition system for a phase locked loop (PLL) receiver is disclosed. It includes a local oscillator, which sweeps the receiver to tune across the carrier frequency uncertainty range until the carrier crosses the receiver IF reference. Such crossing is detected by an automatic acquisition detector. It receives the IF signal from the receiver as well as the IF reference. It includes a pair of multipliers which multiply the IF signal with the IF reference in phase and in quadrature. The outputs of the multipliers are filtered through bandpass filters and power detected. The output of the power detector has a signal dc component which is optimized with respect to the noise dc level by the selection of the time constants of the filters as a function of the sweep rate of the local oscillator.
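
The quadrature detection scheme described above can be sketched numerically: multiply the IF signal by the IF reference in phase and in quadrature, lowpass (here, average over the record), and power-detect. The frequencies, sample rate, and function name below are hypothetical illustration values:

```python
import math

# When the swept carrier sits at the IF reference, the in-phase and
# quadrature products each contain a DC term (1/2 cos(phi) and -1/2 sin(phi));
# the power detector I^2 + Q^2 then reads ~1/4 regardless of the unknown
# carrier phase. Off the reference, both DC terms average toward zero.
def detect(f_if, f_ref, phase, fs=1.0e6, n=4096):
    t = [k / fs for k in range(n)]
    sig = [math.cos(2 * math.pi * f_if * tk + phase) for tk in t]
    # Multiply by reference in phase and in quadrature, then lowpass (mean):
    i_dc = sum(s * math.cos(2 * math.pi * f_ref * tk) for s, tk in zip(sig, t)) / n
    q_dc = sum(s * math.sin(2 * math.pi * f_ref * tk) for s, tk in zip(sig, t)) / n
    return i_dc ** 2 + q_dc ** 2   # power detector output

on  = detect(10_000, 10_000, 1.2)   # carrier has crossed the IF reference
off = detect(14_000, 10_000, 1.2)   # carrier still offset during the sweep
print(on > 10 * off)
```

The filter time constants (here, the record length `n`) set the trade-off against sweep rate that the abstract mentions: a longer average rejects more noise but requires a slower sweep.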

  15. Data acquisition instruments: Psychopharmacology

    SciTech Connect

    Hartley, D.S. III

    1998-01-01

    This report contains the results of a Direct Assistance Project performed by Lockheed Martin Energy Systems, Inc., for Dr. K. O. Jobson. The purpose of the project was to perform preliminary analysis of the data acquisition instruments used in the field of psychiatry, with the goal of identifying commonalities of data and strategies for handling and using the data in the most advantageous fashion. Data acquisition instruments from 12 sources were provided by Dr. Jobson. Several commonalities were identified and a potentially useful data strategy is reported here. Analysis of the information collected for utility in performing diagnoses is recommended. In addition, further work is recommended to refine the commonalities into a directly useful computer systems structure.

  16. Getting Defense Acquisition Right

    DTIC Science & Technology

    2017-01-01

Technology, and Logistics. In that position, he has been responsible to the Secretary of Defense for all matters pertaining to acquisition... position of Director of Tactical Warfare Programs in the Office of the Secretary of Defense and the position of Assistant Deputy Under Secretary of...Point and holding research and development positions. Over the course of his public-service career, Mr. Kendall was awarded the following federal

  17. Defense Acquisition Workforce Modernization

    DTIC Science & Technology

    2010-07-01

    and improve solutions to increasingly complex problems associated with the delivery of public services—a responsibility increasingly required to be...Officer Representatives (CORs). We believe these factors have directly contributed to problems with effective management of DoD acquisition programs...technical skills and experience to ensure the DoD is buying the proper systems and services, in the appropriate manner. To meet these requirements

  18. Advanced Data Acquisition Systems

    NASA Technical Reports Server (NTRS)

    Perotti, J.

    2003-01-01

Current and future requirements of the aerospace sensors and transducers field make it necessary for the design and development of new data acquisition devices and instrumentation systems. New designs are sought to incorporate self-health, self-calibration, and self-repair capabilities, allowing greater measurement reliability and extended calibration cycles. With the addition of power management schemes, state-of-the-art data acquisition systems allow data to be processed and presented to the users with increased efficiency and accuracy. The design architecture presented in this paper displays an innovative approach to data acquisition systems. The design incorporates: electronic health self-check, device/system self-calibration, electronics and function self-repair, failure detection and prediction, and power management (reduced power consumption). These requirements are driven by the aerospace industry need to reduce operations and maintenance costs, to accelerate processing time and to provide reliable hardware with minimum costs. The project's design architecture incorporates some commercially available components identified during the market research investigation, like: Field Programmable Gate Arrays (FPGA), Programmable Analog Integrated Circuits (PAC IC), and Field Programmable Analog Arrays (FPAA); Digital Signal Processing (DSP) electronic/system control; and investigation of specific characteristics found in technologies like: Electronic Component Mean Time Between Failure (MTBF) and Radiation Hardened Component Availability. There are three main sections discussed in the design architecture presented in this document. They are the following: (a) Analog Signal Module Section, (b) Digital Signal/Control Module Section, and (c) Power Management Module Section. These sections are discussed in detail in the following pages. This approach to data acquisition systems has resulted in the assignment of patent rights to Kennedy Space Center under U.S. patent # 6

  19. Data Acquisition Systems

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Technology developed during a joint research program with Langley and Kinetic Systems Corporation led to Kinetic Systems' production of a high speed Computer Automated Measurement and Control (CAMAC) data acquisition system. The study, which involved the use of CAMAC equipment applied to flight simulation, significantly improved the company's technical capability and produced new applications. With Digital Equipment Corporation, Kinetic Systems is marketing the system to government and private companies for flight simulation, fusion research, turbine testing, steelmaking, etc.

  20. Defense ADP Acquisition Study.

    DTIC Science & Technology

    1981-11-30

through its Institute of Computer Sciences and Technology. The FIPS Publication Series provides general guidelines for numerous specific functions...some of which have constrained the efficient acquisition of ADP. NBS, through its Institute for Computer Sciences and Technology, provides...process is what drives the solution orientation and the hardware focus of the DAR. Decentralizing the requirements approval process is a step in the right

  1. United States Acquisition Command

    DTIC Science & Technology

    2009-04-01

The final root cause deals with the industrial base. The "defense sector saw a raft of mergers and acquisitions during the 1990s."

  2. Frames of reference in spatial language acquisition.

    PubMed

    Shusterman, Anna; Li, Peggy

    2016-08-01

    Languages differ in how they encode spatial frames of reference. It is unknown how children acquire the particular frame-of-reference terms in their language (e.g., left/right, north/south). The present paper uses a word-learning paradigm to investigate 4-year-old English-speaking children's acquisition of such terms. In Part I, with five experiments, we contrasted children's acquisition of novel word pairs meaning left-right and north-south to examine their initial hypotheses and the relative ease of learning the meanings of these terms. Children interpreted ambiguous spatial terms as having environment-based meanings akin to north and south, and they readily learned and generalized north-south meanings. These studies provide the first direct evidence that children invoke geocentric representations in spatial language acquisition. However, the studies leave unanswered how children ultimately acquire "left" and "right." In Part II, with three more experiments, we investigated why children struggle to master body-based frame-of-reference words. Children successfully learned "left" and "right" when the novel words were systematically introduced on their own bodies and extended these words to novel (intrinsic and relative) uses; however, they had difficulty learning to talk about the left and right sides of a doll. This difficulty was paralleled in identifying the left and right sides of the doll in a non-linguistic memory task. In contrast, children had no difficulties learning to label the front and back sides of a doll. These studies begin to paint a detailed account of the acquisition of spatial terms in English, and provide insights into the origins of diverse spatial reference frames in the world's languages. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Complexity in language acquisition.

    PubMed

    Clark, Alexander; Lappin, Shalom

    2013-01-01

    Learning theory has frequently been applied to language acquisition, but discussion has largely focused on information-theoretic problems, in particular on the absence of direct negative evidence. Such arguments typically neglect the probabilistic nature of cognition and learning in general. We argue first that these arguments, and analyses based on them, suffer from a major flaw: they systematically conflate the hypothesis class and the learnable concept class. As a result, they do not allow one to draw significant conclusions about the learner. Second, we claim that the real problem for language learning is the computational complexity of constructing a hypothesis from input data. Studying this problem allows for a more direct approach to the object of study, the language acquisition device, rather than the learnable class of languages, which is epiphenomenal and possibly hard to characterize. The learnability results informed by complexity studies are much more insightful. They strongly suggest that target grammars need to be objective, in the sense that the primitive elements of these grammars are based on objectively definable properties of the language itself. These considerations support the view that language acquisition proceeds primarily through data-driven learning of some form. Copyright © 2013 Cognitive Science Society, Inc.

  4. Data acquisition with Masscomp

    SciTech Connect

    Collins, A.J.

    1988-12-31

    Applications and products for data acquisition and control are abundant. Systems and boards for Apple or IBM products collect, store, and manipulate data at rates up to the tens of thousands of samples per second. These systems may suit your application; if so, it would be good for you to obtain one of these systems. However, if you need speed in the hundreds of thousands of samples per second and you want to store, process, and display data in real time, data acquisition becomes much more complex. Operating system code has to be sufficient to handle the load. A company known as Massachusetts Computer Corporation has modified UNIX operating system code to allow real time data acquisition and control. They call this operating system Real Time Unix, or RTU. They have built a family of computer systems around this operating system with specialized hardware to handle multiple processes and quick communications, which a real time operating system needs to function. This paper covers the basics of an application using a Masscomp 5520 computer. The application is for the KYLE Project Cold Tests in SRL. KYLE is a classified weapons program. The data flow from source to Masscomp, the generic features of Masscomp systems, and the specifics of the Masscomp computer related to this application will be presented.

  5. Assessment of language acquisition.

    PubMed

    de Villiers, Peter A; de Villiers, Jill G

    2010-03-01

    This review addresses questions of what should be assessed in language acquisition, and how to do it. The design of a language assessment is crucially connected to its purpose, whether for diagnosis, development of an intervention plan, or for research. Precise profiles of language strengths and weaknesses are required for clear definitions of the phenotypes of particular language and neurodevelopmental disorders. The benefits and costs of formal tests versus language sampling assessments are reviewed. Content validity, theoretically and empirically grounded in child language acquisition, is claimed to be centrally important for appropriate assessment. Without this grounding, links between phenomena can be missed, and interpretations of underlying difficulties can be compromised. Sensitivity and specificity of assessment instruments are often assessed using a gold standard of existing tests and diagnostic practices, but problems arise if that standard is biased against particular groups or dialects. The paper addresses the issues raised by the goal of unbiased assessment of children from diverse linguistic and cultural backgrounds, especially speakers of non-mainstream dialects or bilingual children. A variety of new approaches are discussed for language assessment, including dynamic assessment, experimental tools such as intermodal preferential looking, and training studies that assess generalization. Stress is placed on the need for measures of the process of acquisition rather than just levels of achievement. For further resources related to this article, please visit the WIREs website. Copyright © 2010 John Wiley & Sons, Ltd.

  6. Second language acquisition.

    PubMed

    Juffs, Alan

    2011-05-01

    Second language acquisition (SLA) is a field that investigates child and adult SLA from a variety of theoretical perspectives. This article provides a survey of some key areas of concern including formal generative theory and emergentist theory in the areas of morpho-syntax and phonology. The review details the theoretical stance of the two different approaches to the nature of language: generative linguistics and general cognitive approaches. Some results of key acquisition studies from the two theoretical frameworks are discussed. From a generative perspective, constraints on wh-movement, feature geometry and syllable structure, and morphological development are highlighted. From a general cognitive point of view, the emergence of tense and aspect marking from a prototype account of inherent lexical aspect is reviewed. Reference is made to general cognitive learning theories and to sociocultural theory. The article also reviews individual differences research, specifically debate on the critical period in adult language acquisition, motivation, and memory. Finally, the article discusses the relationship between SLA research and second language pedagogy. Suggestions for further reading from recent handbooks on SLA are provided. WIREs Cogn Sci 2011, 2, 277-286. DOI: 10.1002/wcs.106. For further resources related to this article, please visit the WIREs website. Copyright © 2010 John Wiley & Sons, Ltd.

  7. Problem size, parallel architecture and optimal speedup

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Willard, Frank H.

    1987-01-01

    The communication and synchronization overhead inherent in parallel processing can lead to situations where adding processors to the solution method actually increases execution time. Problem type, problem size, and architecture type all affect the optimal number of processors to employ. The numerical solution of an elliptic partial differential equation is examined in order to study the relationship between problem size and architecture. The equation's domain is discretized into n^2 grid points which are divided into partitions and mapped onto the individual processor memories. The relationships between grid size, stencil type, partitioning strategy, processor execution time, and communication network type are analytically quantified. In so doing, the optimal number of processors to assign to the solution was determined, and the analysis identified (1) the smallest grid size which fully benefits from using all available processors, (2) the leverage on performance given by increasing processor speed or communication network speed, and (3) the suitability of various architectures for large numerical problems.
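    The trade-off described above, where adding processors can increase execution time, can be illustrated with a toy timing model. This is a sketch, not the paper's actual model: the constants and the linear synchronization term are assumptions chosen only to show how an optimum emerges.

    ```python
    def exec_time(p, n, t_comp=1e-6, t_sync=1e-4):
        """Modeled per-iteration time for an n x n grid split across p
        processors: computation shrinks as 1/p, while synchronization
        overhead grows linearly with p."""
        return (n * n / p) * t_comp + p * t_sync

    def optimal_processors(n, p_max, t_comp=1e-6, t_sync=1e-4):
        """Processor count in 1..p_max that minimizes the modeled time."""
        return min(range(1, p_max + 1),
                   key=lambda p: exec_time(p, n, t_comp, t_sync))
    ```

    With these constants the optimum sits near p* = n * sqrt(t_comp / t_sync): a 100 x 100 grid is best served by about 10 processors while a 20 x 20 grid peaks at 2, mirroring point (1) that small problems cannot fully benefit from all available processors.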

  8. The role of partial knowledge in statistical word learning.

    PubMed

    Yurovsky, Daniel; Fricker, Damian C; Yu, Chen; Smith, Linda B

    2014-02-01

    A critical question about the nature of human learning is whether it is an all-or-none or a gradual, accumulative process. Associative and statistical theories of word learning rely critically on the latter assumption: that the process of learning a word's meaning unfolds over time. That is, learning the correct referent for a word involves the accumulation of partial knowledge across multiple instances. Some theories also make an even stronger claim: partial knowledge of one word-object mapping can speed up the acquisition of other word-object mappings. We present three experiments that test and verify these claims by exposing learners to two consecutive blocks of cross-situational learning, in which half of the words and objects in the second block were those that participants failed to learn in Block 1. In line with an accumulative account, re-exposure to these mis-mapped items accelerated the acquisition of both previously experienced mappings and wholly new word-object mappings. But how does partial knowledge of some words speed the acquisition of others? We consider two hypotheses. First, partial knowledge of a word could reduce the amount of information required for it to reach threshold, and the supra-threshold mapping could subsequently aid in the acquisition of new mappings. Alternatively, partial knowledge of a word's meaning could be useful for disambiguating the meanings of other words even before the threshold of learning is reached. We construct and compare computational models embodying each of these hypotheses and show that the latter provides a better explanation of the empirical data.

  9. The role of partial knowledge in statistical word learning

    PubMed Central

    Fricker, Damian C.; Yu, Chen; Smith, Linda B.

    2013-01-01

    A critical question about the nature of human learning is whether it is an all-or-none or a gradual, accumulative process. Associative and statistical theories of word learning rely critically on the latter assumption: that the process of learning a word's meaning unfolds over time. That is, learning the correct referent for a word involves the accumulation of partial knowledge across multiple instances. Some theories also make an even stronger claim: partial knowledge of one word–object mapping can speed up the acquisition of other word–object mappings. We present three experiments that test and verify these claims by exposing learners to two consecutive blocks of cross-situational learning, in which half of the words and objects in the second block were those that participants failed to learn in Block 1. In line with an accumulative account, re-exposure to these mis-mapped items accelerated the acquisition of both previously experienced mappings and wholly new word–object mappings. But how does partial knowledge of some words speed the acquisition of others? We consider two hypotheses. First, partial knowledge of a word could reduce the amount of information required for it to reach threshold, and the supra-threshold mapping could subsequently aid in the acquisition of new mappings. Alternatively, partial knowledge of a word's meaning could be useful for disambiguating the meanings of other words even before the threshold of learning is reached. We construct and compare computational models embodying each of these hypotheses and show that the latter provides a better explanation of the empirical data. PMID:23702980

  10. Cloud parallel processing of tandem mass spectrometry based proteomics data.

    PubMed

    Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus

    2012-10-05

    Data analysis in mass spectrometry based proteomics struggles to keep pace with the advances in instrumentation and the increasing rate of data acquisition. Analyzing this data involves multiple steps requiring diverse software, using different algorithms and data formats. Speed and performance of the mass spectral search engines are continuously improving, although not necessarily at the pace needed to face the challenges posed by the volume of acquired data. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database as well as spectral library search using our data decomposition programs, X!Tandem and SpectraST.
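    The decomposition strategy, splitting the spectra across workers while leaving the search engine itself untouched, can be sketched as follows. This is a minimal illustration, not the authors' implementation: `search_chunk` is a hypothetical stand-in for invoking X!Tandem or SpectraST on one sub-file, and the real programs decompose mzXML files and recompose pepXML files rather than in-memory lists.

    ```python
    from multiprocessing import Pool

    def decompose(spectra, n_parts):
        """Split a list of spectra into n_parts roughly equal chunks."""
        k, r = divmod(len(spectra), n_parts)
        chunks, start = [], 0
        for i in range(n_parts):
            end = start + k + (1 if i < r else 0)
            chunks.append(spectra[start:end])
            start = end
        return chunks

    def search_chunk(chunk):
        # Stand-in for one search-engine invocation on one sub-file.
        return [(spectrum, "best_match") for spectrum in chunk]

    def parallel_search(spectra, n_workers=4):
        """Run the unchanged 'engine' on each chunk in parallel, then
        recompose the per-chunk results into one result list."""
        with Pool(n_workers) as pool:
            parts = pool.map(search_chunk, decompose(spectra, n_workers))
        return [hit for part in parts for hit in part]
    ```

    Because the engine is treated as a black box, the same wrapper works for database and spectral-library search alike; only the decompose/recompose steps are format-specific.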

  11. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  12. Parallel NPARC: Implementation and Performance

    NASA Technical Reports Server (NTRS)

    Townsend, S. E.

    1996-01-01

    Version 3 of the NPARC Navier-Stokes code includes support for large-grain (block level) parallelism using explicit message passing between a heterogeneous collection of computers. This capability has the potential for significant performance gains, depending upon the block data distribution. The parallel implementation uses a master/worker arrangement of processes. The master process assigns blocks to workers, controls worker actions, and provides remote file access for the workers. The processes communicate via explicit message passing using an interface library which provides portability to a number of message passing libraries, such as PVM (Parallel Virtual Machine). A Bourne shell script is used to simplify the task of selecting hosts, starting processes, retrieving remote files, and terminating a computation. This script also provides a simple form of fault tolerance. An analysis of the computational performance of NPARC is presented, using data sets from an F/A-18 inlet study and a Rocket Based Combined Cycle Engine analysis. Parallel speedup and overall computational efficiency were obtained for various NPARC run parameters on a cluster of IBM RS6000 workstations. The data show that although NPARC performance compares favorably with the estimated potential parallelism, typical data sets used with previous versions of NPARC will often need to be reblocked for optimum parallel performance. In one of the cases studied, reblocking increased peak parallel speedup from 3.2 to 11.8.

  13. Parallel processing for control applications

    SciTech Connect

    Telford, J. W.

    2001-01-01

    Parallel processing has been a topic of discussion in computer science circles for decades. Using more than one single computer to control a process has many advantages that compensate for the additional cost. Initially multiple computers were used to attain higher speeds. A single CPU could not perform all of the operations necessary for real time operation. As technology progressed and CPUs became faster, the speed issue became less significant. The additional processing capabilities however continue to make high speeds an attractive element of parallel processing. Another reason for multiple processors is reliability. For the purpose of this discussion, reliability and robustness will be the focal point. Most contemporary conceptions of parallel processing include visions of hundreds of single computers networked to provide 'computing power'. Indeed our own teraflop machines are built from large numbers of computers configured in a network (and thus limited by the network). There are many approaches to parallel configurations and this presentation offers something slightly different from the contemporary networked model. In the world of embedded computers, which is a pervasive force in contemporary computer controls, there are many single chip computers available. If one backs away from the PC based parallel computing model and considers the possibilities of a parallel control device based on multiple single chip computers, a new area of possibilities becomes apparent. This study will look at the use of multiple single chip computers in a parallel configuration with emphasis placed on maximum reliability.

  14. Template based parallel checkpointing in a massively parallel computer system

    DOEpatents

    Archer, Charles Jens; Inglett, Todd Alan

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
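    The rsync-like comparison against a template checkpoint can be sketched in a few lines. This is a single-node sketch under stated assumptions: the block size, the use of MD5 checksums, and the byte-string data layout are illustrative choices, not details from the patent.

    ```python
    import hashlib

    BLOCK = 4096  # assumed block size

    def block_checksums(data, block=BLOCK):
        """Checksum each fixed-size block of a node's checkpoint data."""
        return [hashlib.md5(data[i:i + block]).hexdigest()
                for i in range(0, len(data), block)]

    def delta_against_template(data, template_sums, block=BLOCK):
        """Return (block index, block bytes) only for blocks whose checksum
        differs from the stored template, so only those need transmitting."""
        delta = []
        for i, chk in enumerate(block_checksums(data, block)):
            if i >= len(template_sums) or chk != template_sums[i]:
                delta.append((i, data[i * block:(i + 1) * block]))
        return delta
    ```

    Nodes whose state mostly matches the template send almost nothing, which is the source of the claimed reduction in transmitted and stored checkpoint data.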

  15. EFFICIENT SCHEDULING OF PARALLEL JOBS ON MASSIVELY PARALLEL SYSTEMS

    SciTech Connect

    F. PETRINI; W. FENG

    1999-09-01

    We present buffered coscheduling, a new methodology to multitask parallel jobs in a message-passing environment and to develop parallel programs that can pave the way to the efficient implementation of a distributed operating system. Buffered coscheduling is based on three innovative techniques: communication buffering, strobing, and non-blocking communication. By leveraging these techniques, we can perform effective optimizations based on the global status of the parallel machine rather than on the limited knowledge available locally to each processor. The advantages of buffered coscheduling include higher resource utilization, reduced communication overhead, efficient implementation of flow-control strategies and fault-tolerant protocols, accurate performance modeling, and a simplified yet still expressive parallel programming model. Preliminary experimental results show that buffered coscheduling is very effective in increasing the overall performance in the presence of load imbalance and communication-intensive workloads.

  16. Parallel integer sorting with medium and fine-scale parallelism

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
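    The barrel-sort idea, routing each key to the processor that owns its key range and then sorting locally, can be sketched serially. This is only the data-movement logic under assumed parameters; in the actual algorithm the routing is done with message passing across the 128 processors, which is where the overhead being optimized for arises.

    ```python
    def barrel_sort(keys, n_procs, key_max):
        """Route each key to the 'processor' (bucket) owning its key range,
        sort each bucket locally, then concatenate in bucket order."""
        width = (key_max + n_procs) // n_procs   # keys per bucket
        buckets = [[] for _ in range(n_procs)]
        for k in keys:                           # routing phase
            buckets[min(k // width, n_procs - 1)].append(k)
        out = []
        for b in buckets:                        # local sort per processor
            out.extend(sorted(b))
        return out
    ```

    Because bucket ranges are contiguous and ordered, no merge step is needed after the local sorts; concatenation suffices.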

  17. VERB ACQUISITION AND REPRESENTATION IN ALZHEIMER’S DISEASE

    PubMed Central

    Grossman, Murray; Murray, Ryan; Koenig, Phyllis; Ash, Sherry; Cross, Katy; Moore, Peachie; Troiani, Vanessa

    2007-01-01

    We examined the implicit acquisition and mental representation of a novel verb in patients with probable Alzheimer’s disease (AD). Patients were exposed to the new verb in a naturalistic manner as part of a simple picture story. We probed grammatical, semantic and thematic matrix knowledge of the verb soon after presentation and again one week later. We found partial verb acquisition that was retained over one week. AD patients did not differ from controls in their acquisition and retention of a new verb’s major grammatical subcategory, although they acquired little of its semantic properties and displayed minimal acquisition of the new word’s thematic matrix. Moreover, AD patients appeared to maintain their acquired grammatical knowledge over one week. We discuss the implications of these findings from several perspectives, including the modularity of the language processing system, the relationship between episodic memory and semantic memory, and the role of the preserved implicit memory system in AD patients’ partially successful lexical acquisition. PMID:17482652

  18. Partially coherent ultrafast spectrography

    PubMed Central

    Bourassin-Bouchet, C.; Couprie, M.-E.

    2015-01-01

    Modern ultrafast metrology relies on the postulate that the pulse to be measured is fully coherent, that is, that it can be completely described by its spectrum and spectral phase. However, synthesizing fully coherent pulses is not always possible in practice, especially in the domain of emerging ultrashort X-ray sources where temporal metrology is strongly needed. Here we demonstrate how frequency-resolved optical gating (FROG), the first and one of the most widespread techniques for pulse characterization, can be adapted to measure partially coherent pulses even down to the attosecond timescale. No modification of experimental apparatuses is required; only the processing of the measurement changes. To do so, we take our inspiration from other branches of physics where partial coherence is routinely dealt with, such as quantum optics and coherent diffractive imaging. This will have important and immediate applications, such as enabling the measurement of X-ray free-electron laser pulses despite timing jitter. PMID:25744080

  19. Laparoscopic partial splenic resection.

    PubMed

    Uranüs, S; Pfeifer, J; Schauer, C; Kronberger, L; Rabl, H; Ranftl, G; Hauser, H; Bahadori, K

    1995-04-01

    Twenty domestic pigs with an average weight of 30 kg were subjected to laparoscopic partial splenic resection with the aim of determining the feasibility, reliability, and safety of this procedure. Unlike the human spleen, the pig spleen is perpendicular to the body's long axis, and it is long and slender. The parenchyma was severed through the middle third, where the organ is thickest. An 18-mm trocar with a 60-mm Endopath linear cutter was used for the resection. The tissue was removed with a 33-mm trocar. The operation was successfully concluded in all animals. No capsule tears occurred as a result of applying the stapler. Optimal hemostasis was achieved on the resected edges in all animals. Although these findings cannot be extended to human surgery without reservations, we suggest that diagnostic partial resection and minor cyst resections are ideal initial indications for this minimally invasive approach.

  20. Partially coherent ultrafast spectrography

    NASA Astrophysics Data System (ADS)

    Bourassin-Bouchet, C.; Couprie, M.-E.

    2015-03-01

    Modern ultrafast metrology relies on the postulate that the pulse to be measured is fully coherent, that is, that it can be completely described by its spectrum and spectral phase. However, synthesizing fully coherent pulses is not always possible in practice, especially in the domain of emerging ultrashort X-ray sources where temporal metrology is strongly needed. Here we demonstrate how frequency-resolved optical gating (FROG), the first and one of the most widespread techniques for pulse characterization, can be adapted to measure partially coherent pulses even down to the attosecond timescale. No modification of experimental apparatuses is required; only the processing of the measurement changes. To do so, we take our inspiration from other branches of physics where partial coherence is routinely dealt with, such as quantum optics and coherent diffractive imaging. This will have important and immediate applications, such as enabling the measurement of X-ray free-electron laser pulses despite timing jitter.

  1. Hierarchical partial order ranking.

    PubMed

    Carlsen, Lars

    2008-09-01

    Assessing the potential impact on environmental and human health from the production and use of chemicals or from polluted sites involves a multi-criteria evaluation scheme. A priori, several parameters must be addressed, e.g., production tonnage, specific release scenarios, and geographical and site-specific factors in addition to various substance dependent parameters. Further socio-economic factors may be taken into consideration. The number of parameters to be included may well appear to be prohibitive for developing a sensible model. The study introduces hierarchical partial order ranking (HPOR), which remedies this problem. In HPOR the original parameters are initially grouped based on their mutual connection, and a set of meta-descriptors is derived representing the ranking corresponding to the single groups of descriptors, respectively. A second partial order ranking is carried out based on the meta-descriptors, the final ranking being disclosed through average ranks. An illustrative example on the prioritization of polluted sites is given.
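    The two-stage scheme can be sketched as follows. This is a deliberately simplified reading: a dominance count over the partial order stands in for the true average ranks (which are defined over linear extensions of the Hasse diagram), and the descriptor groups and items are invented for illustration.

    ```python
    def dominates(a, b):
        """a dominates b if a >= b on every criterion, > on at least one."""
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    def por_levels(items):
        """Order items by how many alternatives each one dominates,
        a simple surrogate for average-rank read-off."""
        scores = {k: sum(dominates(v, w) for w in items.values())
                  for k, v in items.items()}
        return sorted(items, key=lambda k: -scores[k])

    def hpor(items, groups):
        """Hierarchical POR sketch: derive one meta-descriptor per group
        (the within-group dominance count), then rank on the meta-tuples."""
        meta = {}
        for k, v in items.items():
            meta[k] = tuple(
                sum(dominates(tuple(v[i] for i in g), tuple(w[i] for i in g))
                    for w in items.values())
                for g in groups)
        return por_levels(meta)
    ```

    Grouping first means each second-stage comparison involves only as many meta-descriptors as there are groups, which is how the scheme keeps a large parameter set tractable.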

  2. Partially integrated exhaust manifold

    SciTech Connect

    Hayman, Alan W; Baker, Rodney E

    2015-01-20

    A partially integrated manifold assembly is disclosed which improves performance, reduces cost and provides efficient packaging of engine components. The partially integrated manifold assembly includes a first leg extending from a first port and terminating at a mounting flange for an exhaust gas control valve. Multiple additional legs (depending on the total number of cylinders) are integrally formed with the cylinder head assembly and extend from the ports of the associated cylinder and terminate at an exit port flange. These additional legs are longer than the first leg such that the exit port flange is spaced apart from the mounting flange. This configuration provides increased packaging space adjacent the first leg for any valving that may be required to control the direction and destination of exhaust flow in recirculation to an EGR valve or downstream to a catalytic converter.

  3. D0 experiment: its trigger, data acquisition, and computers

    SciTech Connect

    Cutts, D.; Zeller, R.; Schamberger, D.; Van Berg, R.

    1984-05-01

    The new collider facility to be built at Fermilab's Tevatron-I D0 region is described. The data acquisition requirements are discussed, as well as the hardware and software triggers designed to meet these needs. An array of MicroVAX computers running VAXELN will filter in parallel (a complete event in each microcomputer) and transmit accepted events via Ethernet to a host. This system, together with its subsequent offline needs, is briefly presented.

  4. Activated partial thromboplastin time.

    PubMed

    Ignjatovic, Vera

    2013-01-01

    Activated partial thromboplastin time (APTT) is a commonly used coagulation assay that is easy to perform, is affordable, and is therefore performed in most coagulation laboratories, both clinical and research, worldwide. The APTT is based on the principle that in citrated plasma, the addition of a platelet substitute, factor XII activator, and CaCl2 allows for formation of a stable clot. The time required for the formation of a stable clot is recorded in seconds and represents the actual APTT result.

  5. 75 FR 51416 - Defense Federal Acquisition Regulation Supplement; Acquisition of Commercial Items

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-20

    ... Defense Acquisition Regulations System 48 CFR Parts 202, 212, and 234 Defense Federal Acquisition Regulation Supplement; Acquisition of Commercial Items AGENCY: Defense Acquisition Regulations System... interim rule that amended the Defense Federal Acquisition Regulation Supplement (DFARS) to implement...

  6. Parallel, Implicit, Finite Element Solver

    NASA Astrophysics Data System (ADS)

    Lowrie, Weston; Shumlak, Uri; Meier, Eric; Marklin, George

    2007-11-01

    A parallel, implicit, finite element solver is described for solutions to the ideal MHD equations and the Pseudo-1D Euler equations. The solver uses the conservative flux source form of the equations. This helps simplify the discretization of the finite element method by keeping the specification of the physics separate. An implicit time advance is used to allow sufficiently large time steps. The Portable Extensible Toolkit for Scientific Computation (PETSc) is implemented for parallel matrix solvers and parallel data structures. Results for several test cases are described as well as accuracy of the method.

  7. Multigrid on massively parallel architectures

    SciTech Connect

    Falgout, R D; Jones, J E

    1999-09-17

    The scalable implementation of multigrid methods for machines with several thousands of processors is investigated. Parallel performance models are presented for three different structured-grid multigrid algorithms, and a description is given of how these models can be used to guide implementation. Potential pitfalls are illustrated when moving from moderate-sized parallelism to large-scale parallelism, and results are given from existing multigrid codes to support the discussion. Finally, the use of mixed programming models is investigated for multigrid codes on clusters of SMPs.
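
As a concrete instance of a structured-grid multigrid algorithm like those modeled in the report, here is a textbook V-cycle for the 1D Poisson problem (weighted Jacobi smoothing, full-weighting restriction, linear interpolation); it is a serial sketch, not one of the parallel codes discussed:

```python
import numpy as np

def v_cycle(u, f, h, pre=2, post=2):
    """One V-cycle for -u'' = f on a uniform grid with zero end values:
    smooth, restrict the residual, recurse, interpolate the correction,
    smooth again."""
    n = len(u) - 1                                  # number of intervals

    def jacobi(v, sweeps):                          # weighted Jacobi, omega = 2/3
        for _ in range(sweeps):
            v[1:-1] += (2.0 / 3.0) * 0.5 * (v[:-2] + v[2:] + h * h * f[1:-1] - 2.0 * v[1:-1])
        return v

    if n <= 2:                                      # coarsest grid: one unknown, solve exactly
        u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])
        return u
    u = jacobi(u, pre)
    r = np.zeros_like(u)                            # residual r = f - A u
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    rc = np.zeros(n // 2 + 1)                       # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros(n // 2 + 1), rc, 2.0 * h)
    u[::2] += ec                                    # correction at coincident nodes
    u[1::2] += 0.5 * (ec[:-1] + ec[1:])             # linear interpolation in between
    return jacobi(u, post)

# Model problem: -u'' = pi^2 sin(pi x), exact solution u = sin(pi x)
n = 32
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f, 1.0 / n)
print(float(np.abs(u - np.sin(np.pi * x)).max()))   # small: discretization-level error
```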

  8. Parallel Architecture For Robotics Computation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1990-01-01

    Universal Real-Time Robotic Controller and Simulator (URRCS) is highly parallel computing architecture for control and simulation of robot motion. Result of extensive algorithmic study of different kinematic and dynamic computational problems arising in control and simulation of robot motion. Study led to development of class of efficient parallel algorithms for these problems. Represents algorithmically specialized architecture, in sense capable of exploiting common properties of this class of parallel algorithms. System with both MIMD and SIMD capabilities. Regarded as processor attached to bus of external host processor, as part of bus memory.

  9. IOPA: I/O-aware parallelism adaption for parallel programs

    PubMed Central

    Liu, Tao; Liu, Yi; Qian, Chen; Qian, Depei

    2017-01-01

    With the development of multi-/many-core processors, applications need to be written as parallel programs to improve execution efficiency. For data-intensive applications that use multiple threads to read/write files simultaneously, an I/O sub-system can easily become a bottleneck when too many of these types of threads exist; on the contrary, too few threads will cause insufficient resource utilization and hurt performance. Therefore, programmers must pay much attention to parallelism control to find the appropriate number of I/O threads for an application. This paper proposes a parallelism control mechanism named IOPA that can adjust the parallelism of applications to adapt to the I/O capability of a system and balance computing resources and I/O bandwidth. The programming interface of IOPA is also provided to programmers to simplify parallel programming. IOPA is evaluated using multiple applications with both solid state and hard disk drives. The results show that the parallel applications using IOPA can achieve higher efficiency than those with a fixed number of threads. PMID:28278236
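
IOPA's internals are described only at a high level here, but the core feedback idea, growing the thread count while measured I/O throughput still improves, can be sketched with a toy model. The saturation model and every parameter below are invented:

```python
def simulated_throughput(threads, capacity=4, per_thread=10.0):
    """Toy I/O model: throughput grows until `capacity` threads saturate the
    device; beyond that, contention makes extra threads counterproductive."""
    if threads <= capacity:
        return per_thread * threads
    return per_thread * capacity - 2.0 * (threads - capacity)

def adapt_parallelism(measure, start=1, max_threads=16):
    """Hill-climb the I/O thread count while measured throughput improves."""
    best, best_tp = start, measure(start)
    for t in range(start + 1, max_threads + 1):
        tp = measure(t)
        if tp <= best_tp:
            break                      # extra threads no longer help: stop growing
        best, best_tp = t, tp
    return best

print(adapt_parallelism(simulated_throughput))  # → 4
```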

  10. Image-based tracking of optically detunable parallel resonant circuits.

    PubMed

    Eggers, Holger; Weiss, Steffen; Boernert, Peter; Boesiger, Peter

    2003-06-01

    In this work strategies for the robust localization of parallel resonant circuits are investigated. These strategies are based on the subtraction of two images, which ideally differ in signal intensity at the positions of the devices only. To modulate their signal amplification, and thereby generate the local variations, the parallel resonant circuits are alternately detuned and retuned during the acquisition. The integration of photodiodes into the devices permits their fast optical switching. Radial and spiral imaging sequences are modified to provide the data for the two images in addition to those for a conventional image in the same acquisition time. The strategies were evaluated by phantom experiments with stationary and moving catheter-borne devices. In particular, rapid detuning and retuning during the sampling of single profiles is shown to lead to a robust localization. Moreover, this strategy eliminates most of the drawbacks usually associated with image-based tracking, such as low temporal resolution. Image-based tracking may thus become a competitive (if not superior) alternative to projection-based tracking of parallel resonant circuits.
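
The localization step reduces to finding the peak of the difference between the retuned and detuned images; a minimal sketch on synthetic data (array sizes and intensities are invented):

```python
import numpy as np

def locate_device(img_tuned, img_detuned):
    """Peak of the absolute difference image: the circuit amplifies local
    signal only while tuned, so the two images ideally differ only there."""
    diff = np.abs(img_tuned.astype(float) - img_detuned.astype(float))
    row, col = np.unravel_index(np.argmax(diff), diff.shape)
    return int(row), int(col)

rng = np.random.default_rng(0)
background = rng.normal(100.0, 1.0, size=(64, 64))   # identical anatomy and noise
tuned = background.copy()
tuned[40, 17] += 50.0                                # local signal amplification
print(locate_device(tuned, background))              # → (40, 17)
```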

  11. Conjugate Gradients Parallelized on the Hypercube

    NASA Astrophysics Data System (ADS)

    Basermann, Achim

    For the solution of discretized ordinary or partial differential equations it is necessary to solve systems of equations whose coefficient matrices have different sparsity patterns, depending on the discretization method; using the finite element (FE) method results in largely unstructured systems of equations. A frequently used iterative solver for such systems is the method of conjugate gradients (CG) with different preconditioners. On a multiprocessor system with distributed memory, the data distribution and the communication scheme, which depend on the data structure used, are of particular importance for the efficient execution of this method. Here, a data distribution and a communication scheme are presented that are based on the analysis of the column indices of the non-zero matrix elements. The performance of the developed parallel CG method was measured on the distributed-memory system INTEL iPSC/860 of the Research Centre Jülich with systems of equations from FE models. The parallel CG algorithm has been shown to be well suited for both regular and irregular discretization meshes, i.e., for coefficient matrices of very different sparsity patterns.
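
For reference, the serial CG iteration that is being parallelized looks as follows (unpreconditioned; the abstract's distributed data layout and communication scheme are not reproduced):

```python
import numpy as np

def conjugate_gradients(A, b, tol=1e-10, max_iter=200):
    """Unpreconditioned CG for a symmetric positive definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x                      # initial residual
    p = r.copy()                       # first search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p                     # the only matrix-vector product per iteration
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # conjugate direction update
        rs = rs_new
    return x

# SPD test system: 1D Laplacian stiffness matrix
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradients(A, b)
print(round(float(np.linalg.norm(A @ x - b)), 8))  # → 0.0
```

In the parallel version, the matrix-vector product and the two inner products are the points where the data distribution and communication scheme matter.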

  12. Parallel Element Agglomeration Algebraic Multigrid and Upscaling Library

    SciTech Connect

    2015-02-19

    ParFELAG is a parallel distributed-memory C++ library for numerical upscaling of finite element discretizations. It provides optimal-complexity algorithms to build multilevel hierarchies and solvers that can be used for solving a wide class of partial differential equations (elliptic, hyperbolic, saddle-point problems) on general unstructured meshes (under the assumption that the topology of the agglomerated entities is correct). Additionally, a novel multilevel solver for saddle-point problems with a divergence constraint is implemented.

  14. Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations

    NASA Technical Reports Server (NTRS)

    Chrisochoides, Nikos

    1995-01-01

    We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDEs) on multiprocessors. Multithreading is used as a means of exploiting concurrency at the processor level in order to tolerate the synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask the overheads required for the dynamic balancing of processor workloads with the computations required for the actual numerical solution of the PDEs. Multithreading can also simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data-parallel adaptive PDE computations. Unfortunately, multithreading does not always reduce program complexity, often hinders code reusability, and increases software complexity.

  15. Automating the parallel processing of fluid and structural dynamics calculations

    NASA Technical Reports Server (NTRS)

    Arpasi, Dale J.; Cole, Gary L.

    1987-01-01

    The NASA Lewis Research Center is actively involved in the development of expert system technology to assist users in applying parallel processing to computational fluid and structural dynamic analysis. The goal of this effort is to eliminate the necessity for the physical scientist to become a computer scientist in order to effectively use the computer as a research tool. Programming and operating software utilities have previously been developed to solve systems of ordinary nonlinear differential equations on parallel scalar processors. Current efforts are aimed at extending these capabilities to systems of partial differential equations that describe the complex behavior of fluids and structures within aerospace propulsion systems. This paper presents some important considerations in the redesign, in particular, the need for algorithms and software utilities that can automatically identify data flow patterns in the application program and partition and allocate calculations to the parallel processors. A library-oriented multiprocessing concept for integrating the hardware and software functions is described.

  17. Solving unstructured grid problems on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Hammond, Steven W.; Schreiber, Robert

    1990-01-01

    A highly parallel graph mapping technique that enables one to efficiently solve unstructured grid problems on massively parallel computers is presented. Many implicit and explicit methods for solving discretized partial differential equations require each point in the discretization to exchange data with its neighboring points every time step or iteration. The cost of this communication can negate the high performance promised by massively parallel computing. To eliminate this bottleneck, the graph of the irregular problem is mapped into the graph representing the interconnection topology of the computer such that the sum of the distances that the messages travel is minimized. It is shown that using the heuristic mapping algorithm significantly reduces the communication time compared to a naive assignment of processes to processors.
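
The objective being minimized, the sum of distances that messages travel, can be made concrete with a toy example: a 4x4 problem grid placed on a 2x2 processor mesh, comparing a naive round-robin assignment against a locality-preserving blocked one. Both assignments are illustrative; the paper's heuristic mapping algorithm is more general:

```python
from itertools import product

def comm_cost(assign, edges, proc_xy):
    """Sum over problem-graph edges of the Manhattan distance between the
    processors that own the two endpoints (messages travel that far)."""
    cost = 0
    for a, b in edges:
        (xa, ya), (xb, yb) = proc_xy[assign[a]], proc_xy[assign[b]]
        cost += abs(xa - xb) + abs(ya - yb)
    return cost

# 4x4 problem grid with nearest-neighbour edges
nodes = list(product(range(4), range(4)))
edges = [((i, j), (i + di, j + dj)) for i, j in nodes
         for di, dj in ((0, 1), (1, 0)) if i + di < 4 and j + dj < 4]

proc_xy = {p: divmod(p, 2) for p in range(4)}                  # 2x2 processor mesh

naive = {node: k % 4 for k, node in enumerate(nodes)}          # round-robin
blocked = {(i, j): (i // 2) * 2 + (j // 2) for i, j in nodes}  # 2x2 blocks

print(comm_cost(naive, edges, proc_xy), comm_cost(blocked, edges, proc_xy))  # → 16 8
```

Halving the total message distance is exactly the kind of gain a good graph mapping buys on a mesh-connected machine.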

  18. Appendix E: Parallel Pascal development system

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The Parallel Pascal Development System enables Parallel Pascal programs to be developed and tested on a conventional computer. It consists of several system programs, including a Parallel Pascal to standard Pascal translator, and a library of Parallel Pascal subprograms. The library includes subprograms for using Parallel Pascal on a parallel system with a fixed degree of parallelism, such as the Massively Parallel Processor, to conveniently manipulate arrays which have larger dimensions than the hardware. Programs can be conveniently tested with small-sized arrays on the conventional computer before attempting to run on a parallel system.

  20. "Feeling" Series and Parallel Resistances.

    ERIC Educational Resources Information Center

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
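
The straw analogy stands in for the standard combination rules: series resistances add, while parallel resistances combine through reciprocals. A quick check:

```python
def series(*rs):
    """Series: resistances add."""
    return sum(rs)

def parallel(*rs):
    """Parallel: reciprocals add, 1/R = sum(1/R_i)."""
    return 1.0 / sum(1.0 / r for r in rs)

print(series(100, 50))     # → 150
print(parallel(100, 100))  # → 50.0
```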

  1. Distinguishing serial and parallel parsing.

    PubMed

    Gibson, E; Pearlmutter, N J

    2000-03-01

    This paper discusses ways of determining whether the human parser is serial, maintaining at most one structural interpretation at each parse state, or parallel, maintaining more than one structural interpretation in at least some circumstances. We make four points. The first two counter claims made by Lewis (2000): (1) that the availability of alternative structures should not vary as a function of the disambiguating material in some ranked parallel models; and (2) that parallel models predict a slowdown during the ambiguous region for more syntactically ambiguous structures. Our other points concern potential methods for seeking experimental evidence relevant to the serial/parallel question. We discuss effects of the plausibility of a secondary structure in the ambiguous region (Pearlmutter & Mendelsohn, 1999) and suggest examining the distribution of reaction times in the disambiguating region.

  2. Demonstrating Forces between Parallel Wires.

    ERIC Educational Resources Information Center

    Baker, Blane

    2000-01-01

    Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)
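
The effect demonstrated follows the standard result F/L = mu0*I1*I2/(2*pi*d) for long parallel wires; a quick numeric check with invented currents:

```python
import math

MU_0 = 4e-7 * math.pi   # vacuum permeability in T*m/A

def force_per_length(i1, i2, d):
    """Magnitude of the force per unit length (N/m) between two long
    parallel wires carrying currents i1, i2 at separation d; the force is
    attractive for parallel currents and repulsive for antiparallel ones."""
    return MU_0 * i1 * i2 / (2.0 * math.pi * d)

# Two 10 A currents, 1 cm apart
print(force_per_length(10.0, 10.0, 0.01))  # ≈ 0.002 N/m
```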

  3. Parallel programming of industrial applications

    SciTech Connect

    Heroux, M; Koniges, A; Simon, H

    1998-07-21

    In the introductory material, we overview the typical MPP environment for real application computing and the special tools available such as parallel debuggers and performance analyzers. Next, we draw from a series of real applications codes and discuss the specific challenges and problems that are encountered in parallelizing these individual applications. The application areas drawn from include biomedical sciences, materials processing and design, plasma and fluid dynamics, and others. We show how it was possible to get a particular application to run efficiently and what steps were necessary. Finally we end with a summary of the lessons learned from these applications and predictions for the future of industrial parallel computing. This tutorial is based on material from a forthcoming book entitled: "Industrial Strength Parallel Computing" to be published by Morgan Kaufmann Publishers (ISBN 1-55860-54).

  4. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  6. Parallel hierarchical method in networks

    NASA Astrophysics Data System (ADS)

    Malinochka, Olha; Tymchenko, Leonid

    2007-09-01

    This method of parallel-hierarchical Q-transformation offers a new approach to the creation of a computing medium: parallel-hierarchical (PH) networks, investigated in the form of a model of a neurolike data-processing scheme [1-5]. The approach has a number of advantages compared with other methods of forming neurolike media (for example, the known methods of forming artificial neural networks). Its main advantage is the use of the multilevel parallel interaction dynamics of information signals at different hierarchy levels of computer networks, which makes it possible to exploit such known natural features of the organization of computation as the topographic nature of mapping, simultaneity (parallelism) of signal operation, the inlaid structure of the cortex, the rough hierarchy of the cortex, and a mechanism of perception and training that is spatially correlated in time [5].

  8. Address tracing for parallel machines

    NASA Technical Reports Server (NTRS)

    Stunkel, Craig B.; Janssens, Bob; Fuchs, W. Kent

    1991-01-01

    Recently implemented parallel system address-tracing methods based on several metrics are surveyed. The issues specific to collection of traces for both shared and distributed memory parallel computers are highlighted. Five general categories of address-trace collection methods are examined: hardware-captured, interrupt-based, simulation-based, altered microcode-based, and instrumented program-based traces. The problems unique to shared memory and distributed memory multiprocessors are examined separately.

  9. Debugging in a parallel environment

    SciTech Connect

    Wasserman, H.J.; Griffin, J.H.

    1985-01-01

    This paper describes the preliminary results of a project investigating approaches to dynamic debugging in parallel processing systems. Debugging programs in a multiprocessing environment is particularly difficult because of potential errors in synchronization of tasks, data dependencies, sharing of data among tasks, and irreproducibility of specific machine instruction sequences from one job to the next. The basic methodology involved in predicate-based debuggers is given as well as other desirable features of dynamic parallel debugging. 13 refs.

  10. Parallel Algorithms for Image Analysis.

    DTIC Science & Technology

    1982-06-01

    Report documentation page (OCR-garbled): Parallel Algorithms for Image Analysis; Technical report TR-1180; author Azriel Rosenfeld; grant AFOSR-77-3271. Keywords: image processing; image analysis; parallel processing; cellular computers.

  11. 48 CFR 34.004 - Acquisition strategy.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Acquisition strategy. 34.004 Section 34.004 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL CATEGORIES OF CONTRACTING MAJOR SYSTEM ACQUISITION General 34.004 Acquisition strategy. The program manager...

  12. 48 CFR 18.113 - Interagency acquisitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Interagency acquisitions. 18.113 Section 18.113 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES EMERGENCY ACQUISITIONS Available Acquisition Flexibilities...

  13. 48 CFR 18.113 - Interagency acquisitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Interagency acquisitions. 18.113 Section 18.113 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES EMERGENCY ACQUISITIONS Available Acquisition Flexibilities...

  14. 48 CFR 18.113 - Interagency acquisitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Interagency acquisitions. 18.113 Section 18.113 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES EMERGENCY ACQUISITIONS Available Acquisition Flexibilities...

  15. 48 CFR 18.113 - Interagency acquisitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Interagency acquisitions. 18.113 Section 18.113 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES EMERGENCY ACQUISITIONS Available Acquisition Flexibilities...

  16. Efficiency of parallel direct optimization

    NASA Technical Reports Server (NTRS)

    Janies, D. A.; Wheeler, W. C.

    2001-01-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. c2001 The Willi Hennig Society.
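
The efficiency figure of merit used throughout this abstract is simply speedup divided by the number of processors; the timings below are invented for illustration:

```python
def speedup(t_serial, t_parallel):
    """How many times faster the parallel run is."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, processors):
    """Speedup per processor; 1.0 means perfect scaling."""
    return speedup(t_serial, t_parallel) / processors

# e.g. a job that takes 120 s serially and 10 s on 16 slave processors
print(efficiency(120.0, 10.0, 16))  # → 0.75
```

By this measure, "no appreciable speed-up beyond 16 processors" means efficiency decays roughly as 1/p once the speedup curve flattens.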

  17. Architectures for reasoning in parallel

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.

    1989-01-01

    The research conducted has dealt with rule-based expert systems. The algorithms that may lead to effective parallelization of them were investigated. Both the forward and backward chained control paradigms were investigated in the course of this work. The best computer architecture for the developed and investigated algorithms has been researched. Two experimental vehicles were developed to facilitate this research. They are Backpac, a parallel backward chained rule-based reasoning system and Datapac, a parallel forward chained rule-based reasoning system. Both systems have been written in Multilisp, a version of Lisp which contains the parallel construct, future. Applying the future function to a function causes the function to become a task parallel to the spawning task. Additionally, Backpac and Datapac have been run on several disparate parallel processors. The machines are an Encore Multimax with 10 processors, the Concert Multiprocessor with 64 processors, and a 32 processor BBN GP1000. Both the Concert and the GP1000 are switch-based machines. The Multimax has all its processors hung off a common bus. All are shared memory machines, but have different schemes for sharing the memory and different locales for the shared memory. The main results of the investigations come from experiments on the 10 processor Encore and the Concert with partitions of 32 or less processors. Additionally, experiments have been run with a stripped down version of EMYCIN.

  19. Efficiency of parallel direct optimization.

    PubMed

    Janies, D A; Wheeler, W C

    2001-03-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size.

  20. Issues in Tool Acquisition

    DTIC Science & Technology

    1991-09-01

    [Scanned front matter garbled beyond recovery; Figure 1-5: Full IPSE Model.] ...available, the price for computing resources adequate for CASE tools declines. In addition, the de facto standardization of the X11 windowing system...

  1. Data acquisition system

    DOEpatents

    Phillips, David T.

    1979-01-01

    A data acquisition system capable of resolving transient pulses in the subnanosecond range. A pulse in an information carrying medium such as light is transmitted through means which disperse the pulse, such as a fiber optic light guide which time-stretches optical pulses by chromatic dispersion. This time-stretched pulse is used as a sampling pulse and is modulated by the signal to be recorded. The modulated pulse may be further time-stretched prior to being recorded. The recorded modulated pulse is unfolded to derive the transient signal by utilizing the relationship of the time-stretching that occurred in the original pulse.

  2. Data Acquisition Systems

    NASA Technical Reports Server (NTRS)

    1994-01-01

    In the mid-1980s, Kinetic Systems and Langley Research Center determined that high speed CAMAC (Computer Automated Measurement and Control) data acquisition systems could significantly improve Langley's ARTS (Advanced Real Time Simulation) system. The ARTS system supports flight simulation R&D, and the CAMAC equipment allowed 32 high performance simulators to be controlled by centrally located host computers. This technology broadened Kinetic Systems' capabilities and led to several commercial applications. One of them is General Atomics' fusion research program, where Kinetic Systems equipment allows tokamak data to be acquired 4 to 15 times more rapidly. Ford Motor Company uses the same technology to control and monitor transmission testing facilities.

  3. First language acquisition.

    PubMed

    Goodluck, Helen

    2011-01-01

    This article reviews current approaches to first language acquisition, arguing in favor of the theory that attributes to the child an innate knowledge of universal grammar. Such knowledge can accommodate the systematic nature of children's non-adult linguistic behaviors. The relationships between performance devices (mechanisms for comprehension and production of speech), non-linguistic aspects of cognition, and child grammars are also discussed. WIREs Cogn Sci 2011 2 47-54 DOI: 10.1002/wcs.95 For further resources related to this article, please visit the WIREs website. Copyright © 2010 John Wiley & Sons, Ltd.

  4. Late Mitochondrial Acquisition, Really?

    PubMed Central

    Degli Esposti, Mauro

    2016-01-01

    This article provides a timely critique of a recent Nature paper by Pittis and Gabaldón that has suggested a late origin of mitochondria in eukaryote evolution. It shows that the inferred ancestry of many mitochondrial proteins has been incorrectly assigned by Pittis and Gabaldón to bacteria other than the aerobic proteobacteria from which the ancestor of mitochondria originates, thereby questioning the validity of their suggestion that mitochondrial acquisition may be a late event in eukaryote evolution. The analysis and approach presented here may guide future studies to resolve the true ancestry of mitochondria. PMID:27289097

  5. Acquisition-Management Program

    NASA Technical Reports Server (NTRS)

    Avery, Don E.; Vann, A. Vernon; Jones, Richard H.; Rew, William E.

    1987-01-01

NASA Acquisition Management Subsystem (AMS) program is an integrated NASA-wide standard automated-procurement-system program developed in 1985. Designed to provide each NASA installation with a procurement data-base concept with on-line terminals for managing, tracking, reporting, and controlling contractual actions and associated procurement data. Subsystem provides control, status, and reporting for various procurement areas. Purpose of standardization is to decrease costs of procurement and of automatic-data-processing operation, increase procurement productivity, and furnish accurate, on-line management information with improved customer support. Written in ADABAS NATURAL.

  7. Implementation of parallel transmit beamforming using orthogonal frequency division multiplexing--achievable resolution and interbeam interference.

    PubMed

    Demi, Libertario; Viti, Jacopo; Kusters, Lieneke; Guidi, Francesco; Tortoli, Piero; Mischi, Massimo

    2013-11-01

    The speed of sound in the human body limits the achievable data acquisition rate of pulsed ultrasound scanners. To overcome this limitation, parallel beamforming techniques are used in ultrasound 2-D and 3-D imaging systems. Different parallel beamforming approaches have been proposed. They may be grouped into two major categories: parallel beamforming in reception and parallel beamforming in transmission. The first category is not optimal for harmonic imaging; the second category may be more easily applied to harmonic imaging. However, inter-beam interference represents an issue. To overcome these shortcomings and exploit the benefit of combining harmonic imaging and high data acquisition rate, a new approach has been recently presented which relies on orthogonal frequency division multiplexing (OFDM) to perform parallel beamforming in transmission. In this paper, parallel transmit beamforming using OFDM is implemented for the first time on an ultrasound scanner. An advanced open platform for ultrasound research is used to investigate the axial resolution and interbeam interference achievable with parallel transmit beamforming using OFDM. Both fundamental and second-harmonic imaging modalities have been considered. Results show that, for fundamental imaging, axial resolution in the order of 2 mm can be achieved in combination with interbeam interference in the order of -30 dB. For second-harmonic imaging, axial resolution in the order of 1 mm can be achieved in combination with interbeam interference in the order of -35 dB.
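The frequency-division idea can be illustrated with a toy simulation; the sample rate, carrier frequencies, and band edges below are assumptions chosen for illustration and are unrelated to the scanner settings used in the paper:

```python
import numpy as np

fs = 50e6                              # sample rate (assumed toy value)
t = np.arange(0, 4e-6, 1 / fs)

def gauss_pulse(fc, t0=2e-6, sigma=0.6e-6):
    """Gaussian-windowed tone: one beam's transmit waveform."""
    return np.cos(2 * np.pi * fc * (t - t0)) * np.exp(-((t - t0) / sigma) ** 2)

# Two beams fired simultaneously on disjoint sub-bands (the OFDM idea).
beam_a = gauss_pulse(3e6)
beam_b = gauss_pulse(8e6)
rx = beam_a + beam_b                   # fully overlapping in time at the receiver

# Recover beam A by masking its sub-band in the frequency domain.
spec = np.fft.rfft(rx)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
band_a = (freqs > 1e6) & (freqs < 5e6)
rec_a = np.fft.irfft(spec * band_a, n=len(t))

# Interbeam interference: residual error relative to the clean beam, in dB.
leak_db = 20 * np.log10(np.linalg.norm(rec_a - beam_a) / np.linalg.norm(beam_a))
print(leak_db)
```

Because the two transmit waveforms occupy disjoint sub-bands, a band-pass mask separates them even though they overlap completely in time; the residual after masking is a stand-in for the interbeam interference the paper quantifies.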

  8. Partial Southwest Elevation Mill #5 West (Part 3), Partial ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Partial Southwest Elevation - Mill #5 West (Part 3), Partial Southwest Elevation - Mill #5 West (with Section of Courtyard) (Parts 1 & 2) - Boott Cotton Mills, John Street at Merrimack River, Lowell, Middlesex County, MA

  9. Paternalism and partial autonomy.

    PubMed Central

    O'Neill, O

    1984-01-01

    A contrast is often drawn between standard adult capacities for autonomy, which allow informed consent to be given or withheld, and patients' reduced capacities, which demand paternalistic treatment. But patients may not be radically different from the rest of us, in that all human capacities for autonomous action are limited. An adequate account of paternalism and the role that consent and respect for persons can play in medical and other practice has to be developed within an ethical theory that does not impose an idealised picture of unlimited autonomy but allows for the variable and partial character of actual human autonomy. PMID:6520849

  10. 78 FR 67928 - Land Acquisitions: Appeals of Land Acquisition Decisions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ... well as ``social and financial issues'' affecting the tribal and non-tribal communities, are equally... proposed acquisition's potential impacts on regulatory jurisdiction, real property taxes, and special...

  11. Design and realization of photoelectric instrument binocular optical axis parallelism calibration system

    NASA Astrophysics Data System (ADS)

    Ying, Jia-ju; Chen, Yu-dan; Liu, Jie; Wu, Dong-sheng; Lu, Jun

    2016-10-01

Misalignment of the binocular optical axes of a photoelectric instrument directly degrades observation, so a digital calibration system for binocular optical axis parallelism is designed. Based on the principle of binocular optical axis calibration for photoelectric instruments, the system scheme is designed and realized with four modules: a multiband parallel light tube, an optical axis translation stage, an image acquisition system, and a software system. Because thermal infrared imagers and low-light-level night viewers have different imaging characteristics, different algorithms are used to localize the center of the cross reticle, enabling binocular optical axis parallelism calibration for both low-light-level night viewers and thermal infrared imagers.
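One simple way to localize the center of a cross reticle, sketched on synthetic data (the image size, cross position, and noise level are assumptions; the abstract does not specify the per-sensor algorithms):

```python
import numpy as np

# Synthetic collimator frame: a bright cross reticle on a dark background,
# an assumed stand-in for the thermal / low-light camera images.
img = np.zeros((240, 320))
row, col = 117, 203                     # true cross center
img[row, :] += 1.0                      # horizontal bar
img[:, col] += 1.0                      # vertical bar
img += 0.05 * np.random.default_rng(0).random(img.shape)  # sensor noise

# Project onto each axis: the cross bars dominate the summed profiles,
# so the center is the argmax of the row and column sums.
r_est = int(np.argmax(img.sum(axis=1)))
c_est = int(np.argmax(img.sum(axis=0)))
print(r_est, c_est)  # → 117 203
```

Comparing the estimated centers from the two channels against the collimator reference then gives the parallelism error between the optical axes.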

  12. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  13. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  14. LSI-11 MICROCOMPUTER-BASED DATA ACQUISITION SYSTEM FOR AN OPTICAL MULTICHANNEL ANALYZER

    SciTech Connect

    Chao, J.L.; Harris, C.B.

    1980-04-01

A microcomputer-based operating system for programming and data acquisition from a two-dimensional target optical multichannel analyzer used for high-speed UV/visible spectroscopy is described. The hardware and software interfacing requirements for such a system to provide dedicated real-time data acquisition are considered. It is found that a relatively simple parallel interface to an inexpensive microcomputer can be properly configured to perform adequately for high-speed image processing.

  15. Experts' Understanding of Partial Derivatives Using the Partial Derivative Machine

    ERIC Educational Resources Information Center

    Roundy, David; Weber, Eric; Dray, Tevian; Bajracharya, Rabindra R.; Dorko, Allison; Smith, Emily M.; Manogue, Corinne A.

    2015-01-01

    Partial derivatives are used in a variety of different ways within physics. Thermodynamics, in particular, uses partial derivatives in ways that students often find especially confusing. We are at the beginning of a study of the teaching of partial derivatives, with a goal of better aligning the teaching of multivariable calculus with the needs of…
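The kind of ambiguity thermodynamics students face can be stated compactly; the ideal-gas case below is our illustration, not taken from the abstract. The value of a partial derivative depends on which variable is held fixed:

```latex
% For an ideal gas, dU = T\,dS - p\,dV and U depends on T alone, so
\left(\frac{\partial U}{\partial V}\right)_{T} = 0 ,
\qquad\text{whereas}\qquad
\left(\frac{\partial U}{\partial V}\right)_{S} = -p .
```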

  17. Data-acquisition systems

    SciTech Connect

    Cyborski, D.R.; Teh, K.M.

    1995-08-01

Until now, DAPHNE, the data-acquisition system developed for ATLAS, has been used routinely for experiments at ATLAS and the Dynamitron. More recently, the Division implemented 2 MSU/DAPHNE systems. The MSU/DAPHNE system is a hybrid data-acquisition system which combines the front-end of the Michigan State University (MSU) DA system with the traditional DAPHNE back-end. The MSU front-end is based on commercially available modules. This alleviates the problems encountered with the DAPHNE front-end, which is based on custom designed electronics. The first MSU system was obtained for the APEX experiment and was used there successfully. A second MSU front-end, purchased as a backup for the APEX experiment, was installed as a fully-independent second MSU/DAPHNE system with the procurement of a DEC 3000 Alpha host computer, and was used successfully for data-taking in an experiment at ATLAS. Additional hardware for a third system was bought and will be installed. With the availability of 2 MSU/DAPHNE systems in addition to the existing APEX setup, it is planned that the existing DAPHNE front-end will be decommissioned.

  18. Unsupervised Language Acquisition

    NASA Astrophysics Data System (ADS)

    de Marcken, Carl

    1996-11-01

    This thesis presents a computational theory of unsupervised language acquisition, precisely defining procedures for learning language from ordinary spoken or written utterances, with no explicit help from a teacher. The theory is based heavily on concepts borrowed from machine learning and statistical estimation. In particular, learning takes place by fitting a stochastic, generative model of language to the evidence. Much of the thesis is devoted to explaining conditions that must hold for this general learning strategy to arrive at linguistically desirable grammars. The thesis introduces a variety of technical innovations, among them a common representation for evidence and grammars, and a learning strategy that separates the ``content'' of linguistic parameters from their representation. Algorithms based on it suffer from few of the search problems that have plagued other computational approaches to language acquisition. The theory has been tested on problems of learning vocabularies and grammars from unsegmented text and continuous speech, and mappings between sound and representations of meaning. It performs extremely well on various objective criteria, acquiring knowledge that causes it to assign almost exactly the same structure to utterances as humans do. This work has application to data compression, language modeling, speech recognition, machine translation, information retrieval, and other tasks that rely on either structural or stochastic descriptions of language.
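The "fitting a stochastic, generative model to the evidence" strategy can be caricatured with a two-part minimum-description-length comparison. This toy (the per-character cost, greedy parsing, and the corpus itself are all assumptions, far simpler than the thesis's algorithms) shows why a word lexicon beats a letter lexicon on repetitive text:

```python
import math

corpus = "thedogthecatthedog"

def desc_len(lexicon, corpus):
    """Two-part code: bits to write down the lexicon, plus bits to encode
    the corpus as a sequence of lexicon tokens (greedy longest-match parse)."""
    lex_bits = sum(len(w) for w in lexicon) * 5      # ~5 bits per letter
    tokens, i = [], 0
    while i < len(corpus):
        for w in sorted(lexicon, key=len, reverse=True):
            if corpus.startswith(w, i):
                tokens.append(w)
                i += len(w)
                break
        else:
            return float("inf")                      # lexicon cannot parse the corpus
    return lex_bits + len(tokens) * math.log2(len(lexicon))

letters = [c for c in "abcdefghijklmnopqrstuvwxyz" if c in corpus]
print(desc_len(letters, corpus))                     # letter-by-letter baseline
print(desc_len(["the", "dog", "cat"], corpus))       # word lexicon is cheaper
```

With the word lexicon the corpus compresses to fewer, cheaper tokens even after paying for the lexicon itself, which is the pressure that drives such a learner toward linguistically sensible units.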

  19. A Low Cost Correlator Structure in the Pseudo-Noise Code Acquisition System

    NASA Astrophysics Data System (ADS)

    Lu, Weijun; Li, Ying; Yu, Dunshan; Zhang, Xing

The critical problem of the pseudo-noise (PN) code acquisition system is the contradiction between acquisition performance and calculation complexity. This paper presents a low cost correlator (LCC) structure that can search for two PN code phases in a single accumulation period by eliminating redundant computation. Compared with the part-parallel structure composed of two serial correlators (PARALLEL2), the proposed LCC structure has the same performance while saving about 22% of chip area and 34% of power consumption with a carry-lookahead (CLA) adder, or 17% of chip area and 25% of power consumption with a ripple-carry (RPL) adder.
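For context, a plain serial-search correlator (the baseline the LCC improves on) can be sketched as follows; the code length, noise level, and true phase are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# A short ±1 PN code and a received block that is the code circularly
# delayed by an unknown phase, plus additive noise (toy parameters).
N = 127
code = rng.choice([-1.0, 1.0], size=N)
true_phase = 83
rx = np.roll(code, true_phase) + 0.5 * rng.standard_normal(N)

# Serial search: correlate the received block against every candidate code
# phase (one accumulation per phase) and declare acquisition at the peak.
corr = np.array([np.dot(rx, np.roll(code, k)) for k in range(N)])
est = int(np.argmax(corr))
print(est)
```

The LCC's contribution is evaluating two candidate phases per accumulation period by sharing partial sums; this sketch only shows the one-phase-per-accumulation baseline whose detection performance both structures match.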

  20. Parallel Implicit Algorithms for CFD

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1998-01-01

The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.