Sample records for parallel acquisition technique

  1. Externally Calibrated Parallel Imaging for 3D Multispectral Imaging Near Metallic Implants Using Broadband Ultrashort Echo Time Imaging

    PubMed Central

    Wiens, Curtis N.; Artz, Nathan S.; Jang, Hyungseok; McMillan, Alan B.; Reeder, Scott B.

    2017-01-01

    Purpose To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. Theory and Methods A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Results Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. Conclusion A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. PMID:27403613

  2. Externally calibrated parallel imaging for 3D multispectral imaging near metallic implants using broadband ultrashort echo time imaging.

    PubMed

    Wiens, Curtis N; Artz, Nathan S; Jang, Hyungseok; McMillan, Alan B; Reeder, Scott B

    2017-06-01

    To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. Magn Reson Med 77:2303-2309, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  3. Parallel MR imaging: a user's guide.

    PubMed

    Glockner, James F; Hu, Houchun H; Stanley, David W; Angelos, Lisa; King, Kevin

    2005-01-01

    Parallel imaging is a recently developed family of techniques that take advantage of the spatial information inherent in phased-array radiofrequency coils to reduce acquisition times in magnetic resonance imaging. In parallel imaging, the number of sampled k-space lines is reduced, often by a factor of two or greater, thereby significantly shortening the acquisition time. Parallel imaging techniques have only recently become commercially available, and the wide range of clinical applications is just beginning to be explored. The potential clinical applications primarily involve reduction in acquisition time, improved spatial resolution, or a combination of the two. Improvements in image quality can be achieved by reducing the echo train lengths of fast spin-echo and single-shot fast spin-echo sequences. Parallel imaging is particularly attractive for cardiac and vascular applications and will likely prove valuable as 3-T body and cardiovascular imaging becomes part of standard clinical practice. Limitations of parallel imaging include reduced signal-to-noise ratio and reconstruction artifacts. It is important to consider these limitations when deciding when to use these techniques. (c) RSNA, 2005.
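
    As a rough illustration of the unaliasing step that parallel imaging relies on, the sketch below (a hypothetical numpy example, not from the article; the array shapes and the R = 2 Cartesian undersampling pattern are assumptions) unfolds a two-fold aliased pixel pair by per-pixel least squares on the coil sensitivities, which is the core operation of SENSE-type reconstruction.

        import numpy as np

        def sense_unfold_r2(aliased, sens):
            """Unfold an R = 2 Cartesian SENSE acquisition (aliasing along y).
            aliased: (ncoil, ny//2, nx) complex aliased coil images
            sens:    (ncoil, ny, nx) complex coil sensitivity maps
            Returns the (ny, nx) unfolded image."""
            ncoil, ny_half, nx = aliased.shape
            ny = 2 * ny_half
            out = np.zeros((ny, nx), dtype=complex)
            for y in range(ny_half):
                for x in range(nx):
                    # the two true pixel locations that fold onto (y, x)
                    C = np.stack([sens[:, y, x], sens[:, y + ny_half, x]], axis=1)
                    rho, *_ = np.linalg.lstsq(C, aliased[:, y, x], rcond=None)
                    out[y, x], out[y + ny_half, x] = rho
            return out

    The least-squares solve also shows where the noise penalty comes from: poorly conditioned sensitivity pairs amplify noise, which is the signal-to-noise limitation mentioned above.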

  4. Rapid code acquisition algorithms employing PN matched filters

    NASA Technical Reports Server (NTRS)

    Su, Yu T.

    1988-01-01

    The performance of four algorithms using pseudonoise matched filters (PNMFs), for direct-sequence spread-spectrum systems, is analyzed. They are: parallel search with fixed dwell detector (PL-FDD), parallel search with sequential detector (PL-SD), parallel-serial search with fixed dwell detector (PS-FDD), and parallel-serial search with sequential detector (PS-SD). The operation characteristic for each detector and the mean acquisition time for each algorithm are derived. All the algorithms are studied in conjunction with the noncoherent integration technique, which enables the system to operate in the presence of data modulation. Several previous proposals using PNMF are seen as special cases of the present algorithms.
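
    The central operation behind a PN matched filter can be shown in a short numpy sketch (illustrative only, not from the report; a random ±1 code stands in for the PN sequence and the noise level is arbitrary): one full-period correlation of the received chips against the local replica localizes the code phase in a single pass, rather than by dwelling on one phase hypothesis at a time.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 127                                   # code length in chips
        code = rng.choice([-1.0, 1.0], size=N)    # stand-in for a +/-1 PN sequence

        true_phase = 37                           # unknown code phase to acquire
        rx = np.roll(code, true_phase) + 0.5 * rng.standard_normal(N)  # received chips + noise

        # Matched filtering = circular correlation over one code period (done via FFT).
        corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code))).real
        print(int(np.argmax(corr)))               # expected output: 37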

  5. Non-contact single shot elastography using line field low coherence holography

    PubMed Central

    Liu, Chih-Hao; Schill, Alexander; Wu, Chen; Singh, Manmohan; Larin, Kirill V.

    2016-01-01

    Optical elastic wave imaging is a powerful technique that can quantify local biomechanical properties of tissues. However, typically long acquisition times make this technique unfeasible for clinical use. Here, we demonstrate non-contact single shot elastographic holography using a line-field interferometer integrated with an air-pulse delivery system. The propagation of the air-pulse induced elastic wave was imaged in real time, and required a single excitation for a line-scan measurement. Results on tissue-mimicking phantoms and chicken breast muscle demonstrated the feasibility of this technique for accurate assessment of tissue biomechanical properties with an acquisition time of a few milliseconds using parallel acquisition. PMID:27570694

  6. Quantitative metrics for evaluating parallel acquisition techniques in diffusion tensor imaging at 3 Tesla.

    PubMed

    Ardekani, Siamak; Selva, Luis; Sayre, James; Sinha, Usha

    2006-11-01

    Single-shot echo-planar based diffusion tensor imaging is prone to geometric and intensity distortions. Parallel imaging is a means of reducing these distortions while preserving spatial resolution. A quantitative comparison at 3 T of parallel imaging for diffusion tensor images (DTI) using k-space (generalized auto-calibrating partially parallel acquisitions; GRAPPA) and image domain (sensitivity encoding; SENSE) reconstructions at different acceleration factors, R, is reported here. Images were evaluated using 8 human subjects with repeated scans for 2 subjects to estimate reproducibility. Mutual information (MI) was used to assess the global changes in geometric distortions. The effects of parallel imaging techniques on random noise and reconstruction artifacts were evaluated by placing 26 regions of interest and computing the standard deviation of apparent diffusion coefficient and fractional anisotropy along with the error of fitting the data to the diffusion model (residual error). The larger positive values in mutual information index with increasing R values confirmed the anticipated decrease in distortions. Further, the MI index of GRAPPA sequences for a given R factor was larger than that of the corresponding mSENSE images. The residual error was lowest in the images acquired without parallel imaging and among the parallel reconstruction methods, the R = 2 acquisitions had the least error. The standard deviation, accuracy, and reproducibility of the apparent diffusion coefficient and fractional anisotropy in homogeneous tissue regions showed that GRAPPA acquired with R = 2 had the least amount of systematic and random noise and of these, significant differences with mSENSE, R = 2 were found only for the fractional anisotropy index. Evaluation of the current implementation of parallel reconstruction algorithms identified GRAPPA acquired with R = 2 as optimal for diffusion tensor imaging.
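
    The mutual-information index used as a distortion measure above can be computed from a joint intensity histogram of two co-registered images; the following is a generic numpy sketch (not the authors' implementation; the bin count is an arbitrary choice).

        import numpy as np

        def mutual_information(img_a, img_b, bins=64):
            """Mutual information (in bits) between two equally shaped images,
            estimated from their joint intensity histogram."""
            joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)        # marginal of img_a
            py = pxy.sum(axis=0, keepdims=True)        # marginal of img_b
            nz = pxy > 0                               # avoid log(0)
            return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))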

  7. In vitro and in vivo tissue harmonic images obtained with parallel transmit beamforming by means of orthogonal frequency division multiplexing.

    PubMed

    Demi, Libertario; Ramalli, Alessandro; Giannini, Gabriele; Mischi, Massimo

    2015-01-01

    In classic pulse-echo ultrasound imaging, the data acquisition rate is limited by the speed of sound. To overcome this, parallel beamforming techniques in transmit (PBT) and in receive (PBR) mode have been proposed. In particular, PBT techniques, based on the transmission of focused beams, are more suitable for harmonic imaging because they are capable of generating stronger harmonics. Recently, orthogonal frequency division multiplexing (OFDM) has been investigated as a means to obtain parallel beamformed tissue harmonic images. To date, only numerical studies and experiments in water have been performed, hence neglecting the effect of frequency-dependent absorption. Here we present the first in vitro and in vivo tissue harmonic images obtained with PBT by means of OFDM, and we compare the results with classic B-mode tissue harmonic imaging. The resulting contrast-to-noise ratio, here used as a performance metric, is comparable. A reduction by 2 dB is observed for the case in which three parallel lines are reconstructed. In conclusion, the applicability of this technique to ultrasonography as a means to improve the data acquisition rate is confirmed.

  8. Comprehensive quantification of signal-to-noise ratio and g-factor for image-based and k-space-based parallel imaging reconstructions.

    PubMed

    Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A

    2008-10-01

    Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
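
    The pseudo multiple replica idea can be sketched generically (a hedged illustration, not the authors' code; `recon` is any linear reconstruction supplied by the user, and the k-space layout with the coil axis first is an assumption): synthetic noise-only data with the measured coil noise covariance are pushed through the reconstruction many times, and the pixel-wise standard deviation across replicas gives the noise map used for SNR and g-factor.

        import numpy as np

        def pseudo_replica_noise_map(recon, kspace_shape, noise_cov, n_rep=100, seed=0):
            """Monte Carlo noise map for a linear reconstruction `recon`.
            recon:        callable mapping (ncoil, ...) k-space to an image
            kspace_shape: shape of one multi-coil k-space data set, coil axis first
            noise_cov:    (ncoil, ncoil) receiver noise covariance from a prescan"""
            rng = np.random.default_rng(seed)
            L = np.linalg.cholesky(noise_cov)      # imposes the measured coil correlation
            ncoil, nsamp = kspace_shape[0], int(np.prod(kspace_shape[1:]))
            replicas = []
            for _ in range(n_rep):
                white = (rng.standard_normal((ncoil, nsamp))
                         + 1j * rng.standard_normal((ncoil, nsamp))) / np.sqrt(2)
                replicas.append(recon((L @ white).reshape(kspace_shape)))
            return np.stack(replicas).std(axis=0)  # pixel-wise noise standard deviation

    Dividing the reconstruction of the measured data by this noise map gives the SNR map; the g-factor then follows as the ratio of unaccelerated to accelerated SNR divided by the square root of the acceleration factor.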

  9. Accelerated T1ρ acquisition for knee cartilage quantification using compressed sensing and data-driven parallel imaging: A feasibility study.

    PubMed

    Pandit, Prachi; Rivoire, Julien; King, Kevin; Li, Xiaojuan

    2016-03-01

    Quantitative T1ρ imaging is beneficial for early detection of osteoarthritis but has seen limited clinical use due to long scan times. In this study, we evaluated the feasibility of accelerated T1ρ mapping for knee cartilage quantification using a combination of compressed sensing (CS) and data-driven parallel imaging (ARC-Autocalibrating Reconstruction for Cartesian sampling). A sequential combination of ARC and CS, both during data acquisition and reconstruction, was used to accelerate the acquisition of T1ρ maps. Phantom, ex vivo (porcine knee), and in vivo (human knee) imaging was performed on a GE 3T MR750 scanner. T1ρ quantification after CS-accelerated acquisition was compared with non-CS-accelerated acquisition for various cartilage compartments. Accelerating image acquisition using CS did not introduce major deviations in quantification. The coefficient of variation for the root mean squared error increased with increasing acceleration, but for in vivo measurements, it stayed under 5% for a net acceleration factor up to 2, where the acquisition was 25% faster than the reference (only ARC). To the best of our knowledge, this is the first implementation of CS for in vivo T1ρ quantification. These early results show that this technique holds great promise in making quantitative imaging techniques more accessible for clinical applications. © 2015 Wiley Periodicals, Inc.

  10. Localized Spatio-Temporal Constraints for Accelerated CMR Perfusion

    PubMed Central

    Akçakaya, Mehmet; Basha, Tamer A.; Pflugi, Silvio; Foppa, Murilo; Kissinger, Kraig V.; Hauser, Thomas H.; Nezafat, Reza

    2013-01-01

    Purpose To develop and evaluate an image reconstruction technique for cardiac MRI (CMR) perfusion that utilizes localized spatio-temporal constraints. Methods CMR perfusion plays an important role in detecting myocardial ischemia in patients with coronary artery disease. Breath-hold k-t based image acceleration techniques are typically used in CMR perfusion for superior spatial/temporal resolution, and improved coverage. In this study, we propose a novel compressed sensing based image reconstruction technique for CMR perfusion, with applicability to free-breathing examinations. This technique uses local spatio-temporal constraints by regularizing image patches across a small number of dynamics. The technique is compared to conventional dynamic-by-dynamic reconstruction, and sparsity regularization using a temporal principal-component (pc) basis, as well as zerofilled data in multi-slice 2D and 3D CMR perfusion. Qualitative image scores are used (1=poor, 4=excellent) to evaluate the technique in 3D perfusion in 10 patients and 5 healthy subjects. On 4 healthy subjects, the proposed technique was also compared to a breath-hold multi-slice 2D acquisition with parallel imaging in terms of signal intensity curves. Results The proposed technique results in images that are superior in terms of spatial and temporal blurring compared to the other techniques, even in free-breathing datasets. The image scores indicate a significant improvement compared to other techniques in 3D perfusion (2.8±0.5 vs. 2.3±0.5 for x-pc regularization, 1.7±0.5 for dynamic-by-dynamic, 1.1±0.2 for zerofilled). Signal intensity curves indicate similar dynamics of uptake between the proposed method with a 3D acquisition and the breath-hold multi-slice 2D acquisition with parallel imaging. Conclusion The proposed reconstruction utilizes sparsity regularization based on localized information in both spatial and temporal domains for highly-accelerated CMR perfusion with potential utility in free-breathing 3D acquisitions. PMID:24123058

  11. Self-calibrated multiple-echo acquisition with radial trajectories using the conjugate gradient method (SMART-CG).

    PubMed

    Jung, Youngkyoo; Samsonov, Alexey A; Bydder, Mark; Block, Walter F

    2011-04-01

    To remove phase inconsistencies between multiple echoes, an algorithm using a radial acquisition to provide inherent phase and magnitude information for self correction was developed. The information also allows simultaneous support for parallel imaging for multiple coil acquisitions. Without a separate field map acquisition, a phase estimate from each echo in a multiple-echo train was generated. When using a multiple channel coil, magnitude and phase estimates from each echo provide in vivo coil sensitivities. An algorithm based on the conjugate gradient method uses these estimates to simultaneously remove phase inconsistencies between echoes, and in the case of multiple coil acquisition, simultaneously provides parallel imaging benefits. The algorithm is demonstrated on single channel, multiple channel, and undersampled data. Substantial image quality improvements were demonstrated. Signal dropouts were completely removed and undersampling artifacts were well suppressed. The suggested algorithm is able to remove phase cancellation and undersampling artifacts simultaneously and to improve image quality of multiecho radial imaging, an important technique for fast three-dimensional MRI data acquisition. Copyright © 2011 Wiley-Liss, Inc.

  12. Self-calibrated Multiple-echo Acquisition with Radial Trajectories using the Conjugate Gradient Method (SMART-CG)

    PubMed Central

    Jung, Youngkyoo; Samsonov, Alexey A; Bydder, Mark; Block, Walter F.

    2011-01-01

    Purpose To remove phase inconsistencies between multiple echoes, an algorithm using a radial acquisition to provide inherent phase and magnitude information for self correction was developed. The information also allows simultaneous support for parallel imaging for multiple coil acquisitions. Materials and Methods Without a separate field map acquisition, a phase estimate from each echo in a multiple-echo train was generated. When using a multiple channel coil, magnitude and phase estimates from each echo provide in-vivo coil sensitivities. An algorithm based on the conjugate gradient method uses these estimates to simultaneously remove phase inconsistencies between echoes, and in the case of multiple coil acquisition, simultaneously provides parallel imaging benefits. The algorithm is demonstrated on single channel, multiple channel, and undersampled data. Results Substantial image quality improvements were demonstrated. Signal dropouts were completely removed and undersampling artifacts were well suppressed. Conclusion The suggested algorithm is able to remove phase cancellation and undersampling artifacts simultaneously and to improve image quality of multiecho radial imaging, an important technique for fast 3D MRI data acquisition. PMID:21448967

  13. Motion correction in periodically-rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) and turboprop MRI.

    PubMed

    Tamhane, Ashish A; Arfanakis, Konstantinos

    2009-07-01

    Periodically-rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) and Turboprop MRI are characterized by greatly reduced sensitivity to motion, compared to their predecessors, fast spin-echo (FSE) and gradient and spin-echo (GRASE), respectively. This is due to the inherent self-navigation and motion correction of PROPELLER-based techniques. However, it is unknown how various acquisition parameters that determine k-space sampling affect the accuracy of motion correction in PROPELLER and Turboprop MRI. The goal of this work was to evaluate the accuracy of motion correction in both techniques, to identify an optimal rotation correction approach, and determine acquisition strategies for optimal motion correction. It was demonstrated that blades with multiple lines allow more accurate estimation of motion than blades with fewer lines. Also, it was shown that Turboprop MRI is less sensitive to motion than PROPELLER. Furthermore, it was demonstrated that the number of blades does not significantly affect motion correction. Finally, clinically appropriate acquisition strategies that optimize motion correction are discussed for PROPELLER and Turboprop MRI. (c) 2009 Wiley-Liss, Inc.

  14. High-Resolution DCE-MRI of the Pituitary Gland Using Radial k-Space Acquisition with Compressed Sensing Reconstruction.

    PubMed

    Rossi Espagnet, M C; Bangiyev, L; Haber, M; Block, K T; Babb, J; Ruggiero, V; Boada, F; Gonen, O; Fatterpekar, G M

    2015-08-01

    The pituitary gland is located outside of the blood-brain barrier. Dynamic T1 weighted contrast enhanced sequence is considered to be the gold standard to evaluate this region. However, it does not allow assessment of intrinsic permeability properties of the gland. Our aim was to demonstrate the utility of radial volumetric interpolated brain examination with the golden-angle radial sparse parallel technique to evaluate permeability characteristics of the individual components (anterior and posterior gland and the median eminence) of the pituitary gland and areas of differential enhancement and to optimize the study acquisition time. A retrospective study was performed in 52 patients (group 1, 25 patients with normal pituitary glands; and group 2, 27 patients with a known diagnosis of microadenoma). Radial volumetric interpolated brain examination sequences with golden-angle radial sparse parallel technique were evaluated with an ROI-based method to obtain signal-time curves and permeability measures of individual normal structures within the pituitary gland and areas of differential enhancement. Statistical analyses were performed to assess differences in the permeability parameters of these individual regions and optimize the study acquisition time. Signal-time curves from the posterior pituitary gland and median eminence demonstrated a faster wash-in and time of maximum enhancement with a lower peak of enhancement compared with the anterior pituitary gland (P < .005). Time-optimization analysis demonstrated that 120 seconds is ideal for dynamic pituitary gland evaluation. In the absence of a clinical history, differences in the signal-time curves allow easy distinction between a simple cyst and a microadenoma. This retrospective study confirms the ability of the golden-angle radial sparse parallel technique to evaluate the permeability characteristics of the pituitary gland and establishes 120 seconds as the ideal acquisition time for dynamic pituitary gland imaging. © 2015 by American Journal of Neuroradiology.

  15. High-Resolution DCE-MRI of the Pituitary Gland Using Radial k-Space Acquisition with Compressed Sensing Reconstruction

    PubMed Central

    Rossi Espagnet, M.C.; Bangiyev, L.; Haber, M.; Block, K.T.; Babb, J.; Ruggiero, V.; Boada, F.; Gonen, O.; Fatterpekar, G.M.

    2015-01-01

    BACKGROUND AND PURPOSE The pituitary gland is located outside of the blood-brain barrier. Dynamic T1 weighted contrast enhanced sequence is considered to be the gold standard to evaluate this region. However, it does not allow assessment of intrinsic permeability properties of the gland. Our aim was to demonstrate the utility of radial volumetric interpolated brain examination with the golden-angle radial sparse parallel technique to evaluate permeability characteristics of the individual components (anterior and posterior gland and the median eminence) of the pituitary gland and areas of differential enhancement and to optimize the study acquisition time. MATERIALS AND METHODS A retrospective study was performed in 52 patients (group 1, 25 patients with normal pituitary glands; and group 2, 27 patients with a known diagnosis of microadenoma). Radial volumetric interpolated brain examination sequences with golden-angle radial sparse parallel technique were evaluated with an ROI-based method to obtain signal-time curves and permeability measures of individual normal structures within the pituitary gland and areas of differential enhancement. Statistical analyses were performed to assess differences in the permeability parameters of these individual regions and optimize the study acquisition time. RESULTS Signal-time curves from the posterior pituitary gland and median eminence demonstrated a faster wash-in and time of maximum enhancement with a lower peak of enhancement compared with the anterior pituitary gland (P < .005). Time-optimization analysis demonstrated that 120 seconds is ideal for dynamic pituitary gland evaluation. In the absence of a clinical history, differences in the signal-time curves allow easy distinction between a simple cyst and a microadenoma. CONCLUSIONS This retrospective study confirms the ability of the golden-angle radial sparse parallel technique to evaluate the permeability characteristics of the pituitary gland and establishes 120 seconds as the ideal acquisition time for dynamic pituitary gland imaging. PMID:25953760

  16. A Parallel Spectroscopic Method for Examining Dynamic Phenomena on the Millisecond Time Scale

    PubMed Central

    Snively, Christopher M.; Chase, D. Bruce; Rabolt, John F.

    2009-01-01

    An infrared spectroscopic technique based on planar array infrared (PAIR) spectroscopy has been developed that allows the acquisition of spectra from multiple samples simultaneously. Using this technique, it is possible to acquire spectra over a spectral range of 950–1900 cm−1 with a temporal resolution of 2.2 ms. The performance of this system was demonstrated by determining the shear-induced orientational response of several low molecular weight liquid crystals. Five different liquid crystals were examined in combination with five different alignment layers, and both primary and secondary screens were demonstrated. Implementation of this high throughput PAIR technique resulted in a reduction in acquisition time as compared to both step-scan and ultra-rapid-scanning FTIR spectroscopy. PMID:19239197

  17. 3D hyperpolarized C-13 EPI with calibrationless parallel imaging

    NASA Astrophysics Data System (ADS)

    Gordon, Jeremy W.; Hansen, Rie B.; Shin, Peter J.; Feng, Yesu; Vigneron, Daniel B.; Larson, Peder E. Z.

    2018-04-01

    With the translation of metabolic MRI with hyperpolarized 13C agents into the clinic, imaging approaches will require large volumetric FOVs to support clinical applications. Parallel imaging techniques will be crucial to increasing volumetric scan coverage while minimizing RF requirements and temporal resolution. Calibrationless parallel imaging approaches are well-suited for this application because they eliminate the need to acquire coil profile maps or auto-calibration data. In this work, we explored the utility of a calibrationless parallel imaging method (SAKE) and corresponding sampling strategies to accelerate and undersample hyperpolarized 13C data using 3D blipped EPI acquisitions and multichannel receive coils, and demonstrated its application in a human study of [1-13C]pyruvate metabolism.

  18. Reducing acquisition time in clinical MRI by data undersampling and compressed sensing reconstruction

    NASA Astrophysics Data System (ADS)

    Hollingsworth, Kieren Grant

    2015-11-01

    MRI is often the most sensitive or appropriate technique for important measurements in clinical diagnosis and research, but lengthy acquisition times limit its use due to cost and considerations of patient comfort and compliance. Once an image field of view and resolution are chosen, the minimum scan acquisition time is normally fixed by the amount of raw data that must be acquired to meet the Nyquist criteria. Recently, there has been research interest in using the theory of compressed sensing (CS) in MR imaging to reduce scan acquisition times. The theory argues that if our target MR image is sparse, having signal information in only a small proportion of pixels (like an angiogram), or if the image can be mathematically transformed to be sparse then it is possible to use that sparsity to recover a high definition image from substantially less acquired data. This review starts by considering methods of k-space undersampling which have already been incorporated into routine clinical imaging (partial Fourier imaging and parallel imaging), and then explains the basis of using compressed sensing in MRI. The practical considerations of applying CS to MRI acquisitions are discussed, such as designing k-space undersampling schemes, optimizing adjustable parameters in reconstructions and exploiting the power of combined compressed sensing and parallel imaging (CS-PI). A selection of clinical applications that have used CS and CS-PI prospectively are considered. The review concludes by signposting other imaging acceleration techniques under present development before concluding with a consideration of the potential impact and obstacles to bringing compressed sensing into routine use in clinical MRI.
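
    A toy numpy sketch of the CS principle described in this review (an illustration under simplifying assumptions, not a clinical reconstruction): an image assumed sparse in the image domain, such as an angiogram, is recovered from randomly undersampled Cartesian k-space by iterative soft-thresholding (ISTA) with a data-consistency gradient step.

        import numpy as np

        def soft_threshold(x, lam):
            """Complex soft-thresholding, the proximal operator of the l1 norm."""
            mag = np.abs(x)
            return np.where(mag > lam, x * (mag - lam) / np.maximum(mag, 1e-12), 0)

        def cs_recon_ista(kspace, mask, lam=0.01, n_iter=100):
            """kspace: (ny, nx) undersampled k-space (zeros where not sampled)
            mask:   (ny, nx) boolean sampling mask"""
            F  = lambda im: np.fft.fft2(im, norm="ortho")
            Fh = lambda ks: np.fft.ifft2(ks, norm="ortho")
            x = Fh(kspace)                        # zero-filled starting estimate
            for _ in range(n_iter):
                grad = Fh(mask * F(x) - kspace)   # data-consistency gradient
                x = soft_threshold(x - grad, lam) # enforce sparsity
            return x

    Practical CS-MRI replaces the identity sparsifying transform with wavelets or total variation and, in CS-PI, folds coil sensitivities into the data-consistency term.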

  19. Material parameter measurements at high temperatures

    NASA Technical Reports Server (NTRS)

    Dominek, A.; Park, A.; Peters, L., Jr.

    1988-01-01

    Alternate fixtures or techniques for the measurement of the constitutive material parameters at elevated temperatures are presented. The technique utilizes scattered field data from material coated cylinders between parallel plates or material coated hemispheres over a finite size groundplane. The data acquisition is centered around the HP 8510B Network Analyzer. The parameters are then found from a numerical search algorithm using the Newton-Raphson technique with the measured and calculated fields from these canonical scatterers. Numerical and experimental results are shown.
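
    For reference, a Newton-Raphson search of the kind mentioned above iterates x ← x − f(x)/f′(x) until the update is small; the one-dimensional numpy sketch below is generic (the function, derivative, and starting point are arbitrary examples, not the fixture's actual cost function).

        import numpy as np

        def newton_raphson(f, dfdx, x0, tol=1e-10, max_iter=50):
            """Find a root of f starting from x0 using Newton-Raphson iteration."""
            x = x0
            for _ in range(max_iter):
                step = f(x) / dfdx(x)
                x -= step
                if abs(step) < tol:
                    break
            return x

        # example: the positive root of x**2 - 2, i.e. sqrt(2)
        print(newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0))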

  20. Implementation of parallel transmit beamforming using orthogonal frequency division multiplexing--achievable resolution and interbeam interference.

    PubMed

    Demi, Libertario; Viti, Jacopo; Kusters, Lieneke; Guidi, Francesco; Tortoli, Piero; Mischi, Massimo

    2013-11-01

    The speed of sound in the human body limits the achievable data acquisition rate of pulsed ultrasound scanners. To overcome this limitation, parallel beamforming techniques are used in ultrasound 2-D and 3-D imaging systems. Different parallel beamforming approaches have been proposed. They may be grouped into two major categories: parallel beamforming in reception and parallel beamforming in transmission. The first category is not optimal for harmonic imaging; the second category may be more easily applied to harmonic imaging. However, inter-beam interference represents an issue. To overcome these shortcomings and exploit the benefit of combining harmonic imaging and high data acquisition rate, a new approach has been recently presented which relies on orthogonal frequency division multiplexing (OFDM) to perform parallel beamforming in transmission. In this paper, parallel transmit beamforming using OFDM is implemented for the first time on an ultrasound scanner. An advanced open platform for ultrasound research is used to investigate the axial resolution and interbeam interference achievable with parallel transmit beamforming using OFDM. Both fundamental and second-harmonic imaging modalities have been considered. Results show that, for fundamental imaging, axial resolution in the order of 2 mm can be achieved in combination with interbeam interference in the order of -30 dB. For second-harmonic imaging, axial resolution in the order of 1 mm can be achieved in combination with interbeam interference in the order of -35 dB.
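
    The multiplexing principle can be shown in a few lines of numpy (an illustrative sketch with assumed carrier frequencies, pulse length, and sampling rate, not the authors' transmit scheme): two transmit lines are driven on subcarriers spaced by an integer multiple of 1/T, so the pulses are orthogonal over the pulse duration T and the simultaneously received echoes can be separated by filtering.

        import numpy as np

        fs = 50e6                  # sampling rate [Hz] (assumed)
        T  = 10e-6                 # pulse duration [s] (assumed)
        t  = np.arange(0, T, 1 / fs)
        df = 1 / T                 # OFDM subcarrier spacing for orthogonality over T

        f1 = 3.0e6                 # band assigned to transmit line 1 (assumed)
        f2 = f1 + 5 * df           # band assigned to transmit line 2, five subcarriers away

        p1 = np.sin(2 * np.pi * f1 * t)
        p2 = np.sin(2 * np.pi * f2 * t)

        # The cross term vanishes while each pulse keeps its energy, so the two
        # simultaneously transmitted beams remain separable in the frequency domain.
        print(np.dot(p1, p2) / len(t))   # ~0
        print(np.dot(p1, p1) / len(t))   # ~0.5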

  1. Measuring signal-to-noise ratio in partially parallel imaging MRI

    PubMed Central

    Goerner, Frank L.; Clarke, Geoffrey D.

    2011-01-01

    Purpose: To assess five different methods of signal-to-noise ratio (SNR) measurement for partially parallel imaging (PPI) acquisitions. Methods: Measurements were performed on a spherical phantom and three volunteers using a multichannel head coil on a clinical 3T MRI system to produce echo planar, fast spin echo, gradient echo, and balanced steady state free precession image acquisitions. Two different PPI acquisitions, generalized autocalibrating partially parallel acquisition algorithm and modified sensitivity encoding with acceleration factors (R) of 2–4, were evaluated and compared to nonaccelerated acquisitions. Five standard SNR measurement techniques were investigated and Bland–Altman analysis was used to determine agreement between the various SNR methods. The estimated g-factor values, associated with each method of SNR calculation and PPI reconstruction method, were also subjected to assessments that considered the effects on SNR due to reconstruction method, phase encoding direction, and R-value. Results: Only two SNR measurement methods produced g-factors in agreement with theoretical expectations (g ≥ 1). Bland–Altman tests demonstrated that these two methods also gave the most similar results relative to the other three measurements. R-value was the only factor of the three we considered that showed significant influence on SNR changes. Conclusions: Non-signal methods used in SNR evaluation do not produce results consistent with expectations in the investigated PPI protocols. Two of the methods studied provided the most accurate and useful results. Of these two methods, it is recommended that, when evaluating PPI protocols, the image subtraction method be used for SNR calculations due to its relative accuracy and ease of implementation. PMID:21978049
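
    For reference, the image subtraction method recommended above is commonly computed from two identical back-to-back acquisitions, with the signal taken from their average and the noise from their difference; the numpy sketch below is a generic version (not the authors' code; the ROI is assumed to cover a uniform signal region).

        import numpy as np

        def snr_subtraction(img1, img2, roi):
            """Two-acquisition (difference method) SNR estimate.
            img1, img2: magnitude images from two identical scans
            roi:        boolean mask over a uniform signal region"""
            signal = 0.5 * (img1[roi] + img2[roi]).mean()
            noise = (img1[roi] - img2[roi]).std(ddof=1) / np.sqrt(2.0)  # sqrt(2): noise adds in the difference
            return signal / noise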

  2. Simultaneous Multislice Echo Planar Imaging With Blipped Controlled Aliasing in Parallel Imaging Results in Higher Acceleration: A Promising Technique for Accelerated Diffusion Tensor Imaging of Skeletal Muscle.

    PubMed

    Filli, Lukas; Piccirelli, Marco; Kenkel, David; Guggenberger, Roman; Andreisek, Gustav; Beck, Thomas; Runge, Val M; Boss, Andreas

    2015-07-01

    The aim of this study was to investigate the feasibility of accelerated diffusion tensor imaging (DTI) of skeletal muscle using echo planar imaging (EPI) applying simultaneous multislice excitation with a blipped controlled aliasing in parallel imaging results in higher acceleration unaliasing technique. After federal ethics board approval, the lower leg muscles of 8 healthy volunteers (mean [SD] age, 29.4 [2.9] years) were examined in a clinical 3-T magnetic resonance scanner using a 15-channel knee coil. The EPI was performed at a b value of 500 s/mm2 without slice acceleration (conventional DTI) as well as with 2-fold and 3-fold acceleration. Fractional anisotropy (FA) and mean diffusivity (MD) were measured in all 3 acquisitions. Fiber tracking performance was compared between the acquisitions regarding the number of tracks, average track length, and anatomical precision using multivariate analysis of variance and Mann-Whitney U tests. Acquisition time was 7:24 minutes for conventional DTI, 3:53 minutes for 2-fold acceleration, and 2:38 minutes for 3-fold acceleration. Overall FA and MD values ranged from 0.220 to 0.378 and 1.595 to 1.829 mm2/s, respectively. Two-fold acceleration yielded similar FA and MD values (P ≥ 0.901) and similar fiber tracking performance compared with conventional DTI. Three-fold acceleration resulted in comparable MD (P = 0.199) but higher FA values (P = 0.006) and significantly impaired fiber tracking in the soleus and tibialis anterior muscles (number of tracks, P < 0.001; anatomical precision, P ≤ 0.005). Simultaneous multislice EPI with blipped controlled aliasing in parallel imaging results in higher acceleration can remarkably reduce acquisition time in DTI of skeletal muscle with similar image quality and quantification accuracy of diffusion parameters. This may increase the clinical applicability of muscle anisotropy measurements.

  3. Phased array ghost elimination.

    PubMed

    Kellman, Peter; McVeigh, Elliot R

    2006-05-01

    Parallel imaging may be applied to cancel ghosts caused by a variety of distortion mechanisms, including distortions such as off-resonance or local flow, which are space variant. Phased array combining coefficients may be calculated that null ghost artifacts at known locations based on a constrained optimization, which optimizes SNR subject to the nulling constraint. The resultant phased array ghost elimination (PAGE) technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The phased array method for ghost elimination may result in greater flexibility in designing acquisition strategies. For example, in multi-shot EPI applications ghosts are typically mitigated by the use of an interleaved phase encode acquisition order. An alternative strategy is to use a sequential, non-interleaved phase encode order and cancel the resultant ghosts using PAGE parallel imaging. Cancellation of ghosts by means of phased array processing makes sequential, non-interleaved phase encode acquisition order practical, and permits a reduction in repetition time, TR, by eliminating the need for echo-shifting. Sequential, non-interleaved phase encode order has benefits of reduced distortion due to off-resonance, in-plane flow and EPI delay misalignment. Furthermore, the use of EPI with PAGE has inherent fat-water separation and has been used to provide off-resonance correction using a technique referred to as lipid elimination with an echo-shifting N/2-ghost acquisition (LEENA), and may be further generalized using the multi-point Dixon method. Other applications of PAGE include cancelling ghosts which arise due to amplitude or phase variation during the approach to steady state. Parallel imaging requires estimates of the complex coil sensitivities. In vivo estimates may be derived by temporally varying the phase encode ordering to obtain a full k-space dataset in a scheme similar to the autocalibrating TSENSE method. This scheme is a generalization of the UNFOLD method used for removing aliasing in undersampled acquisitions. The more general scheme may be used to modulate each EPI ghost image to a separate temporal frequency as described in this paper. Copyright (c) 2006 John Wiley & Sons, Ltd.

  4. Phased array ghost elimination

    PubMed Central

    Kellman, Peter; McVeigh, Elliot R.

    2007-01-01

    Parallel imaging may be applied to cancel ghosts caused by a variety of distortion mechanisms, including distortions such as off-resonance or local flow, which are space variant. Phased array combining coefficients may be calculated that null ghost artifacts at known locations based on a constrained optimization, which optimizes SNR subject to the nulling constraint. The resultant phased array ghost elimination (PAGE) technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The phased array method for ghost elimination may result in greater flexibility in designing acquisition strategies. For example, in multi-shot EPI applications ghosts are typically mitigated by the use of an interleaved phase encode acquisition order. An alternative strategy is to use a sequential, non-interleaved phase encode order and cancel the resultant ghosts using PAGE parallel imaging. Cancellation of ghosts by means of phased array processing makes sequential, non-interleaved phase encode acquisition order practical, and permits a reduction in repetition time, TR, by eliminating the need for echo-shifting. Sequential, non-interleaved phase encode order has benefits of reduced distortion due to off-resonance, in-plane flow and EPI delay misalignment. Furthermore, the use of EPI with PAGE has inherent fat-water separation and has been used to provide off-resonance correction using a technique referred to as lipid elimination with an echo-shifting N/2-ghost acquisition (LEENA), and may be further generalized using the multi-point Dixon method. Other applications of PAGE include cancelling ghosts which arise due to amplitude or phase variation during the approach to steady state. Parallel imaging requires estimates of the complex coil sensitivities. In vivo estimates may be derived by temporally varying the phase encode ordering to obtain a full k-space dataset in a scheme similar to the autocalibrating TSENSE method. This scheme is a generalization of the UNFOLD method used for removing aliasing in undersampled acquisitions. The more general scheme may be used to modulate each EPI ghost image to a separate temporal frequency as described in this paper. PMID:16705636

  5. Parallel Spectral Acquisition with an Ion Cyclotron Resonance Cell Array.

    PubMed

    Park, Sung-Gun; Anderson, Gordon A; Navare, Arti T; Bruce, James E

    2016-01-19

    Mass measurement accuracy is a critical analytical figure-of-merit in most areas of mass spectrometry application. However, the time required for acquisition of high-resolution, high mass accuracy data limits many applications and is an aspect under continual pressure for development. Current efforts target implementation of higher electrostatic and magnetic fields because ion oscillatory frequencies increase linearly with field strength. As such, the time required for spectral acquisition of a given resolving power and mass accuracy decreases linearly with increasing fields. Mass spectrometer developments to include multiple high-resolution detectors that can be operated in parallel could further decrease the acquisition time by a factor of n, the number of detectors. Efforts described here resulted in development of an instrument with a set of Fourier transform ion cyclotron resonance (ICR) cells as detectors that constitute the first MS array capable of parallel high-resolution spectral acquisition. ICR cell array systems consisting of three or five cells were constructed with printed circuit boards and installed within a single superconducting magnet and vacuum system. Independent ion populations were injected and trapped within each cell in the array. Upon filling the array, all ions in all cells were simultaneously excited and ICR signals from each cell were independently amplified and recorded in parallel. Presented here are the initial results of successful parallel spectral acquisition, parallel mass spectrometry (MS) and MS/MS measurements, and parallel high-resolution acquisition with the MS array system.

  6. Parallel image-acquisition in continuous-wave electron paramagnetic resonance imaging with a surface coil array: Proof-of-concept experiments

    NASA Astrophysics Data System (ADS)

    Enomoto, Ayano; Hirata, Hiroshi

    2014-02-01

    This article describes a feasibility study of parallel image-acquisition using a two-channel surface coil array in continuous-wave electron paramagnetic resonance (CW-EPR) imaging. Parallel EPR imaging was performed by multiplexing of EPR detection in the frequency domain. The parallel acquisition system consists of two surface coil resonators and radiofrequency (RF) bridges for EPR detection. To demonstrate the feasibility of this method of parallel image-acquisition with a surface coil array, three-dimensional EPR imaging was carried out using a tube phantom. Technical issues in the multiplexing method of EPR detection were also clarified. We found that degradation in the signal-to-noise ratio due to the interference of RF carriers is a key problem to be solved.

  7. Assessment of the radioanatomic positioning of the osteoarthritic knee in serial radiographs: comparison of three acquisition techniques.

    PubMed

    Le Graverand, M-P H; Mazzuca, S; Lassere, M; Guermazi, A; Pickering, E; Brandt, K; Peterfy, C; Cline, G; Nevitt, M; Woodworth, T; Conaghan, P; Vignon, E

    2006-01-01

    Recent studies using various standardized radiographic acquisition techniques have demonstrated the necessity of reproducible radioanatomic alignment of the knee to assure precise measurements of medial tibiofemoral joint space width (JSW). The objective of the present study was to characterize the longitudinal performance of several acquisition techniques with respect to long-term reproducibility of positioning of the knee, and the impact of changes in positioning on the rate and variability of joint space narrowing (JSN). Eighty subjects were randomly selected from each of three cohorts followed in recent studies of the radiographic progression of knee osteoarthritis (OA): the Health ABC study (paired fixed-flexion [FF] radiographs taken at a 36-month interval); the Glucosamine Arthritis Intervention Trial (GAIT) (paired metatarsophalangeal [MTP] radiographs obtained at a 12-month interval), and a randomized clinical trial of doxycycline (fluoroscopically assisted semiflexed anteroposterior (AP) radiographs taken at a 16-month interval). Manual measurements were obtained from each radiograph to represent markers of radioanatomic positioning of the knee (alignment of the medial tibial plateau and X-ray beam, knee rotation, femorotibial angle) and to evaluate minimum JSW (mJSW) in the medial tibiofemoral compartment. The effects on the mean annualized rate of JSN and on the variability of that rate of highly reproduced vs variable positioning of the knee in serial radiographs were evaluated. Parallel or near-parallel alignment was achieved significantly more frequently with the fluoroscopically guided positioning used in the semiflexed AP protocol than with either the non-fluoroscopic FF or MTP protocol (68% vs 14% for both FF and MTP protocols when measured at the midpoint of the medial compartment; 75% vs 26% and 34% for the FF and MTP protocols, respectively, when measured at the site of mJSW; P<0.001 for each). Knee rotation was reproduced more frequently in semiflexed AP radiographs than in FF radiographs (66% vs 45%, P<0.01). In contrast, the FF technique yielded a greater proportion of paired radiographs in which the femorotibial angle was accurately reproduced than the semiflexed AP or MTP protocol (78% vs 59% and 56%, respectively, P<0.01 for each). Notably, only paired radiographs with parallel or near-parallel alignment exhibited a mean rate of JSN (+/-SD) in the OA knee that was more rapid and less variable than that measured in all knees (0.186+/-0.274 mm/year, standardized response to mean [SRM]=0.68 vs 0.128+/-0.291 mm/year, SRM=0.44). This study confirms the importance of parallel radioanatomic alignment of the anterior and posterior margins of the medial tibial plateau in detecting JSN in subjects with knee OA. The use of radiographic methods that assure parallel alignment during serial X-ray examinations will permit the design of more efficient studies of biomarkers of OA progression and of structure modification in knee OA.

  8. Parallel Architecture, Parallel Acquisition Cross-Linguistic Evidence from Nominal and Verbal Domains

    ERIC Educational Resources Information Center

    Sutton, Brett R.

    2017-01-01

    This dissertation explores parallels between Complementizer Phrase (CP) and Determiner Phrase (DP) semantics, syntax, and morphology--including similarities in case-assignment, subject-verb and possessor-possessum agreement, subject and possessor semantics, and overall syntactic structure--in first language acquisition. Applying theoretical…

  9. Non-Cartesian Parallel Imaging Reconstruction

    PubMed Central

    Wright, Katherine L.; Hamilton, Jesse I.; Griswold, Mark A.; Gulani, Vikas; Seiberlich, Nicole

    2014-01-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be employed to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the non-homogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian GRAPPA, and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. PMID:24408499
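
    As a small companion to the trajectory discussion above, the sketch below (a generic numpy example; the normalization to ±0.5 cycles/FOV and the simple ramp weighting are assumptions) generates the sample coordinates and density-compensation weights of a 2D golden-angle radial trajectory of the kind these reconstructions operate on.

        import numpy as np

        def golden_angle_radial(n_spokes, n_readout):
            """k-space coordinates (kx, ky in cycles/FOV) and ramp density
            compensation for a 2D golden-angle radial trajectory."""
            golden = np.pi / ((1 + np.sqrt(5)) / 2)                  # ~111.25 degree increment
            angles = np.arange(n_spokes) * golden
            kr = np.linspace(-0.5, 0.5, n_readout, endpoint=False)   # readout along one spoke
            kx = np.outer(np.cos(angles), kr)
            ky = np.outer(np.sin(angles), kr)
            dcf = np.tile(np.abs(kr), (n_spokes, 1))                 # simple |k| (ramp) weighting
            return kx, ky, dcf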

  10. Comparison of quartz crystallographic preferred orientations identified with optical fabric analysis, electron backscatter and neutron diffraction techniques.

    PubMed

    Hunter, N J R; Wilson, C J L; Luzin, V

    2017-02-01

    Three techniques are used to measure crystallographic preferred orientations (CPO) in a naturally deformed quartz mylonite: transmitted light cross-polarized microscopy using an automated fabric analyser, electron backscatter diffraction (EBSD) and neutron diffraction. Pole figure densities attributable to crystal-plastic deformation are variably recognizable across the techniques, particularly between fabric analyser and diffraction instruments. Although fabric analyser techniques offer rapid acquisition with minimal sample preparation, difficulties may exist when gathering orientation data parallel with the incident beam. Overall, we have found that EBSD and fabric analyser techniques are best suited for studying CPO distributions at the grain scale, where individual orientations can be linked to their source grain or nearest neighbours. Neutron diffraction serves as the best qualitative and quantitative means of estimating the bulk CPO, due to its three-dimensional data acquisition, greater sample area coverage, and larger sample size. However, a number of sampling methods can be applied to FA and EBSD data to make similar approximations. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  11. Integrated electronics for time-resolved array of single-photon avalanche diodes

    NASA Astrophysics Data System (ADS)

    Acconcia, G.; Crotti, M.; Rech, I.; Ghioni, M.

    2013-12-01

    The Time Correlated Single Photon Counting (TCSPC) technique has reached a prominent position among analytical methods employed in a great variety of fields, from medicine and biology (fluorescence spectroscopy) to telemetry (laser ranging) and communication (quantum cryptography). Nevertheless the development of TCSPC acquisition systems featuring both a high number of parallel channels and very high performance is still an open challenge: to satisfy the tight requirements set by the applications, a fully parallel acquisition system requires not only high efficiency single photon detectors but also read-out electronics specifically designed to obtain the highest performance in conjunction with these sensors. To this aim three main blocks have been designed: a gigahertz bandwidth front-end stage to directly read the custom technology SPAD array avalanche current, a reconfigurable logic to route the detectors output signals to the acquisition chain and an array of time measurement circuits capable of recording the photon arrival times with picoseconds time resolution and a very high linearity. An innovative architecture based on these three circuits will feature a very high number of detectors to perform a truly parallel spatial or spectral analysis and a smaller number of high performance time-to-amplitude converters offering very high performance and a very high conversion frequency while limiting the area occupation and power dissipation. The routing logic will make the dynamic connection between the two arrays possible in order to guarantee that no information gets lost.

  12. Parallel Reconstruction Using Null Operations (PRUNO)

    PubMed Central

    Zhang, Jian; Liu, Chunlei; Moseley, Michael E.

    2011-01-01

    A novel iterative k-space data-driven technique, namely Parallel Reconstruction Using Null Operations (PRUNO), is presented for parallel imaging reconstruction. In PRUNO, both data calibration and image reconstruction are formulated into linear algebra problems based on a generalized system model. An optimal data calibration strategy is demonstrated by using Singular Value Decomposition (SVD), and an iterative conjugate-gradient approach is proposed to efficiently solve missing k-space samples during reconstruction. With its generalized formulation and precise mathematical model, PRUNO reconstruction yields good accuracy, flexibility, and stability. Both computer simulation and in vivo studies have shown that PRUNO produces much better reconstruction quality than autocalibrating partially parallel acquisition (GRAPPA), especially under high accelerating rates. With the aid of PRUNO reconstruction, ultra-highly accelerated parallel imaging can be performed with decent image quality. For example, we have done successful PRUNO reconstruction at a reduction factor of 6 (effective factor of 4.44) with 8 coils and only a few autocalibration signal (ACS) lines. PMID:21604290

  13. Real-Time Compressive Sensing MRI Reconstruction Using GPU Computing and Split Bregman Methods

    PubMed Central

    Smith, David S.; Gore, John C.; Yankeelov, Thomas E.; Welch, E. Brian

    2012-01-01

    Compressive sensing (CS) has been shown to enable dramatic acceleration of MRI acquisition in some applications. Being an iterative reconstruction technique, CS MRI reconstructions can be more time-consuming than traditional inverse Fourier reconstruction. We have accelerated our CS MRI reconstruction by factors of up to 27 by using a split Bregman solver combined with a graphics processing unit (GPU) computing platform. The increases in speed we find are similar to those we measure for matrix multiplication on this platform, suggesting that the split Bregman methods parallelize efficiently. We demonstrate that the combination of the rapid convergence of the split Bregman algorithm and the massively parallel strategy of GPU computing can enable real-time CS reconstruction of even acquisition data matrices of dimension 4096² or more, depending on available GPU VRAM. Reconstruction of two-dimensional data matrices of dimension 1024² and smaller took ~0.3 s or less, showing that this platform also provides very fast iterative reconstruction for small-to-moderate size images. PMID:22481908

  14. Real-Time Compressive Sensing MRI Reconstruction Using GPU Computing and Split Bregman Methods.

    PubMed

    Smith, David S; Gore, John C; Yankeelov, Thomas E; Welch, E Brian

    2012-01-01

    Compressive sensing (CS) has been shown to enable dramatic acceleration of MRI acquisition in some applications. Being an iterative reconstruction technique, CS MRI reconstructions can be more time-consuming than traditional inverse Fourier reconstruction. We have accelerated our CS MRI reconstruction by factors of up to 27 by using a split Bregman solver combined with a graphics processing unit (GPU) computing platform. The increases in speed we find are similar to those we measure for matrix multiplication on this platform, suggesting that the split Bregman methods parallelize efficiently. We demonstrate that the combination of the rapid convergence of the split Bregman algorithm and the massively parallel strategy of GPU computing can enable real-time CS reconstruction of even acquisition data matrices of dimension 4096(2) or more, depending on available GPU VRAM. Reconstruction of two-dimensional data matrices of dimension 1024(2) and smaller took ~0.3 s or less, showing that this platform also provides very fast iterative reconstruction for small-to-moderate size images.

  15. Time-resolved 3D pulmonary perfusion MRI: comparison of different k-space acquisition strategies at 1.5 and 3 T.

    PubMed

    Attenberger, Ulrike I; Ingrisch, Michael; Dietrich, Olaf; Herrmann, Karin; Nikolaou, Konstantin; Reiser, Maximilian F; Schönberg, Stefan O; Fink, Christian

    2009-09-01

    Time-resolved pulmonary perfusion MRI requires both high temporal and spatial resolution, which can be achieved by using several nonconventional k-space acquisition techniques. The aim of this study is to compare the image quality of time-resolved 3D pulmonary perfusion MRI with different k-space acquisition techniques in healthy volunteers at 1.5 and 3 T. Ten healthy volunteers underwent contrast-enhanced time-resolved 3D pulmonary MRI on 1.5 and 3 T using the following k-space acquisition techniques: (a) generalized autocalibrating partial parallel acquisition (GRAPPA) with an internal acquisition of reference lines (IRS), (b) GRAPPA with a single "external" acquisition of reference lines (ERS) before the measurement, and (c) a combination of GRAPPA with an internal acquisition of reference lines and view sharing (VS). The spatial resolution was kept constant at both field strengths to exclusively evaluate the influences of the temporal resolution achieved with the different k-space sampling techniques on image quality. The temporal resolutions were 2.11 seconds IRS, 1.31 seconds ERS, and 1.07 seconds VS at 1.5 T and 2.04 seconds IRS, 1.30 seconds ERS, and 1.19 seconds VS at 3 T. Image quality was rated by 2 independent radiologists with regard to signal intensity, perfusion homogeneity, artifacts (eg, wrap around, noise), and visualization of pulmonary vessels using a 3 point scale (1 = nondiagnostic, 2 = moderate, 3 = good). Furthermore, the signal-to-noise ratio in the lungs was assessed. At 1.5 T the lowest image quality (sum score: 154) was observed for the ERS technique and the highest quality for the VS technique (sum score: 201). In contrast, at 3 T images acquired with VS were hampered by strong artifacts and image quality was rated significantly inferior (sum score: 137) compared with IRS (sum score: 180) and ERS (sum score: 174). Comparing 1.5 and 3 T, in particular the overall rating of the IRS technique (sum score: 180) was very similar at both field strengths. At 1.5 T the peak signal-to-noise ratio of the ERS was significantly lower in comparison to the IRS and the VS technique (14.6 vs. 26.7 and 39.6 respectively, P < 0.004). Using the IRS sampling algorithm comparable image quality and SNR can be achieved at 1.5 and 3 T. At 1.5 T VS offers the best possible solution for the conflicting requirements between a further increased temporal resolution and image quality. In consequence the gain of increased scanning efficiency from advanced k-space sampling acquisition techniques can be exploited for a further improvement of image quality of pulmonary perfusion MRI.

  16. Single-spin stochastic optical reconstruction microscopy

    PubMed Central

    Pfender, Matthias; Aslam, Nabeel; Waldherr, Gerald; Neumann, Philipp; Wrachtrup, Jörg

    2014-01-01

    We experimentally demonstrate precision addressing of single-quantum emitters by combined optical microscopy and spin resonance techniques. To this end, we use nitrogen vacancy (NV) color centers in diamond confined within a few tens of nanometers as individually resolvable quantum systems. By developing a stochastic optical reconstruction microscopy (STORM) technique for NV centers, we are able to simultaneously perform sub-diffraction-limit imaging and optically detected magnetic resonance (ODMR) measurements on NV spins. This allows the assignment of spin resonance spectra to individual NV center locations with nanometer-scale resolution and thus further improves spatial discrimination. For example, we resolved formerly indistinguishable emitters by their spectra. Furthermore, ODMR spectra contain metrology information allowing for sub-diffraction-limit sensing of, for instance, magnetic or electric fields with inherently parallel data acquisition. As an example, we have detected nuclear spins with nanometer-scale precision. Finally, we give prospects of how this technique can evolve into a fully parallel quantum sensor for nanometer-resolution imaging of delocalized quantum correlations. PMID:25267655

  17. Development of fast parallel multi-technique scanning X-ray imaging at Synchrotron Soleil

    NASA Astrophysics Data System (ADS)

    Medjoubi, K.; Leclercq, N.; Langlois, F.; Buteau, A.; Lé, S.; Poirier, S.; Mercère, P.; Kewish, C. M.; Somogyi, A.

    2013-10-01

    A fast multimodal scanning X-ray imaging scheme has been prototyped at Synchrotron Soleil. It permits the simultaneous acquisition of complementary information on the sample structure, composition, and chemistry by measuring transmission, differential phase contrast, small-angle scattering, and X-ray fluorescence with dedicated detectors at millisecond dwell times per pixel. The results of the proof-of-principle experiments are presented in this paper.

  18. Coronary Artery Anomalies and Variants: Technical Feasibility of Assessment with Coronary MR Angiography at 3 T

    PubMed Central

    Gharib, Ahmed M.; Ho, Vincent B.; Rosing, Douglas R.; Herzka, Daniel A.; Stuber, Matthias; Arai, Andrew E.; Pettigrew, Roderic I.

    2008-01-01

    The purpose of this study was to prospectively use a whole-heart three-dimensional (3D) coronary magnetic resonance (MR) angiography technique specifically adapted for use at 3 T and a parallel imaging technique (sensitivity encoding) to evaluate coronary arterial anomalies and variants (CAAV). This HIPAA-compliant study was approved by the local institutional review board, and informed consent was obtained from all participants. Twenty-two participants (11 men, 11 women; age range, 18–62 years) were included. Ten participants were healthy volunteers, whereas 12 participants were patients suspected of having CAAV. Coronary MR angiography was performed with a 3-T MR imager. A 3D free-breathing navigator-gated and vector electrocardiographically–gated segmented k-space gradient-echo sequence with adiabatic T2 preparation pulse and parallel imaging (sensitivity encoding) was used. Whole-heart acquisitions (repetition time msec/echo time msec, 4/1.35; 20° flip angle; 1 × 1 × 2-mm acquired voxel size) lasted 10–12 minutes. Mean examination time was 41 minutes ± 14 (standard deviation). Findings included aneurysms, ectasia, arteriovenous fistulas, and anomalous origins. The 3D whole-heart acquisitions developed for use with 3 T are feasible for use in the assessment of CAAV. © RSNA, 2008 PMID:18372470

  19. Parallel magnetic resonance imaging using coils with localized sensitivities.

    PubMed

    Goldfarb, James W; Holland, Agnes E

    2004-09-01

    The purpose of this study was to present clinical examples and illustrate the inefficiencies of a conventional reconstruction using a commercially available phased array coil with localized sensitivities. Five patients were imaged at 1.5 T using a cardiac-synchronized gadolinium-enhanced acquisition and a commercially available four-element phased array coil. Four unique sets of images were reconstructed from the acquired k-space data: (a) a sum-of-squares image using all four elements of the coil; localized sum-of-squares images from (b) the anterior coils and (c) the posterior coils; and (d) a local reconstruction. Images were analyzed for artifacts and usable field-of-view. Conventional image reconstruction produced images with fold-over artifacts in all cases spanning a portion of the image (mean 90 mm; range 36-126 mm). The local reconstruction removed fold-over artifacts and resulted in an effective increase in the field-of-view (mean 50%; range 20-70%). Commercially available phased array coils do not always have overlapping sensitivities. Fold-over artifacts can be removed using an alternate reconstruction method. When assessing the advantages of parallel imaging techniques, gains achieved using techniques such as SENSE and SMASH should be gauged against the acquisition time of the localized method rather than the conventional sum-of-squares method.
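
    As a toy illustration of the two reconstructions being compared, the sketch below forms a conventional root-sum-of-squares image from all coil elements and a "localized" root-sum-of-squares image from a chosen subset of elements. It is a hedged example, not the authors' code, and the coil indices in the usage comment are hypothetical.

      import numpy as np

      def sum_of_squares(coil_images):
          # Combine per-coil images (shape: n_coils x ny x nx) by root sum-of-squares.
          return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

      def localized_sos(coil_images, coil_subset):
          # Root sum-of-squares over a subset of coils, e.g. only the anterior elements,
          # so that signal folded in from regions the subset cannot see is suppressed.
          return sum_of_squares(coil_images[list(coil_subset)])

      # Hypothetical usage with a 4-element array: elements 0-1 anterior, 2-3 posterior.
      # imgs = ...  # complex per-coil images from inverse FFT of each coil's k-space
      # full_image = sum_of_squares(imgs)
      # anterior_only = localized_sos(imgs, (0, 1))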

  20. A wireless data acquisition system for acoustic emission testing

    NASA Astrophysics Data System (ADS)

    Zimmerman, A. T.; Lynch, J. P.

    2013-01-01

    As structural health monitoring (SHM) systems have seen increased demand due to lower costs and greater capabilities, wireless technologies have emerged that enable the dense distribution of transducers and the distributed processing of sensor data. In parallel, ultrasonic techniques such as acoustic emission (AE) testing have become increasingly popular in the non-destructive evaluation of materials and structures. These techniques, which involve the analysis of frequency content between 1 kHz and 1 MHz, have proven effective in detecting the onset of cracking and other early-stage failure in active structures such as airplanes in flight. However, these techniques typically involve the use of expensive and bulky monitoring equipment capable of accurately sensing AE signals at sampling rates greater than 1 million samples per second. In this paper, a wireless data acquisition system is presented that is capable of collecting, storing, and processing AE data at rates of up to 20 MHz. Processed results can then be wirelessly transmitted in real-time, creating a system that enables the use of ultrasonic techniques in large-scale SHM systems.

  1. A seamless acquisition digital storage oscilloscope with three-dimensional waveform display

    NASA Astrophysics Data System (ADS)

    Yang, Kuojun; Tian, Shulin; Zeng, Hao; Qiu, Lei; Guo, Lianping

    2014-04-01

    In a traditional digital storage oscilloscope (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and the oscilloscope is blind to the input signal; this duration is therefore called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in a shorter time, the dead time in a traditional DSO, which causes loss of the measured signal, needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, a three-dimensional waveform mapping (TWM) technique, which converts sampled data to the displayed waveform, is proposed. With this technique, not only is the processing speed improved, but the occurrence probability of the waveform is also displayed with varying brightness, so a three-dimensional waveform is shown to the user. To further reduce processing time, parallel TWM, which processes several sampled points simultaneously, and a dual-port random-access-memory-based pipelining technique, which can process one sampled point per clock period, are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) devices are used to store sampled data alternately, so the acquisition can continue during data processing. Therefore, the dead time of the DSO is eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope and a combined-pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experimental results show that the WCR of the designed oscilloscope is 6 250 000 wfms/s (waveforms per second), the highest value among existing oscilloscopes. The test results also prove that there is no dead time in our oscilloscope, thus realizing seamless acquisition.
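
    A software analogue of the TWM idea is a hit-count histogram: each (sample index, amplitude bin) cell counts how many acquired waveforms passed through that pixel, and the count becomes display brightness. The sketch below is only a NumPy illustration of the mapping concept; the paper's implementation runs in dedicated FPGA/DDR3 hardware, and the names used here are hypothetical.

      import numpy as np

      def waveform_map(acquisitions, n_levels=256):
          # Accumulate repeated acquisitions (shape: n_waveforms x n_samples) into a
          # 2D hit-count map: one column per sample index, one row per amplitude bin.
          # Pixel brightness is proportional to how often the signal crossed that cell.
          n_wf, n_samp = acquisitions.shape
          lo, hi = acquisitions.min(), acquisitions.max()
          bins = np.clip(((acquisitions - lo) / (hi - lo + 1e-12) * (n_levels - 1)).astype(int),
                         0, n_levels - 1)
          hist = np.zeros((n_levels, n_samp), dtype=np.int64)
          cols = np.broadcast_to(np.arange(n_samp), bins.shape)
          # np.add.at accumulates repeated (row, column) hits, emulating the parallel
          # per-sample mapping done in hardware.
          np.add.at(hist, (bins, cols), 1)
          return hist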

  2. A seamless acquisition digital storage oscilloscope with three-dimensional waveform display

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Kuojun, E-mail: kuojunyang@gmail.com; Guo, Lianping; School of Electrical and Electronic Engineering, Nanyang Technological University

    In a traditional digital storage oscilloscope (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and the oscilloscope is blind to the input signal; this duration is therefore called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in a shorter time, the dead time in a traditional DSO, which causes loss of the measured signal, needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, a three-dimensional waveform mapping (TWM) technique, which converts sampled data to the displayed waveform, is proposed. With this technique, not only is the processing speed improved, but the occurrence probability of the waveform is also displayed with varying brightness, so a three-dimensional waveform is shown to the user. To further reduce processing time, parallel TWM, which processes several sampled points simultaneously, and a dual-port random-access-memory-based pipelining technique, which can process one sampled point per clock period, are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) devices are used to store sampled data alternately, so the acquisition can continue during data processing. Therefore, the dead time of the DSO is eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope and a combined-pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experimental results show that the WCR of the designed oscilloscope is 6 250 000 wfms/s (waveforms per second), the highest value among existing oscilloscopes. The test results also prove that there is no dead time in our oscilloscope, thus realizing seamless acquisition.

  3. Whole left ventricular functional assessment from two minutes free breathing multi-slice CINE acquisition

    NASA Astrophysics Data System (ADS)

    Usman, M.; Atkinson, D.; Heathfield, E.; Greil, G.; Schaeffter, T.; Prieto, C.

    2015-04-01

    Two major challenges in cardiovascular MRI are long scan times due to slow MR acquisition and motion artefacts due to respiratory motion. Recently, a Motion Corrected-Compressed Sensing (MC-CS) technique has been proposed for free-breathing 2D dynamic cardiac MRI that addresses these challenges by simultaneously accelerating MR acquisition and correcting for any arbitrary motion in a compressed sensing reconstruction. In this work, the MC-CS framework is combined with parallel imaging for further acceleration, and is termed Motion Corrected Sparse SENSE (MC-SS). Validation of the MC-SS framework is demonstrated in eight volunteers and three patients for left ventricular functional assessment, and results are compared with breath-hold acquisitions as the reference. A non-significant difference (P > 0.05) was observed in the volumetric functional measurements (end-diastolic volume, end-systolic volume, ejection fraction) and myocardial border sharpness values obtained with the proposed and gold-standard methods. The proposed method achieves whole-heart multi-slice coverage in 2 min of free-breathing acquisition, eliminating the time needed between breath-holds for instructions and recovery. This results in a two-fold speed-up of the total acquisition time in comparison to the breath-hold acquisition.

  4. Every factor helps: Rapid Ptychographic Reconstruction

    NASA Astrophysics Data System (ADS)

    Nashed, Youssef

    2015-03-01

    Recent advances in microscopy, specifically higher spatial resolution and data acquisition rates, require faster and more robust phase retrieval reconstruction methods. Ptychography is a phase retrieval technique for reconstructing the complex transmission function of a specimen from a sequence of diffraction patterns in visible light, X-ray, and electron microscopes. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes. Waiting to postprocess datasets offline results in missed opportunities. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs). A final specimen reconstruction is then achieved by different techniques to merge sub-dataset results into a single complex phase and amplitude image. Results are shown on a simulated specimen and real datasets from X-ray experiments conducted at a synchrotron light source.

  5. INVITED TOPICAL REVIEW: Parallel magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Larkman, David J.; Nunes, Rita G.

    2007-04-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time-consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts is shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed, and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed.
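
    For readers unfamiliar with the reconstructions being reviewed, the following sketch shows the core of Cartesian SENSE unfolding for a reduction factor R: each aliased pixel is a known superposition of R image pixels weighted by the coil sensitivities, and a small least-squares solve separates them. This is a simplified illustration under idealized assumptions (known, noiseless sensitivity maps; regular undersampling along one axis), not code from the article.

      import numpy as np

      def sense_unfold(aliased, sens, R=2):
          # Cartesian SENSE unfolding for reduction factor R along the first (phase-encode) axis.
          #   aliased: n_coils x (ny // R) x nx aliased coil images
          #   sens:    n_coils x ny x nx complex coil sensitivity maps
          # Returns the unaliased ny x nx image.
          n_coils, ny_r, nx = aliased.shape
          ny = ny_r * R
          out = np.zeros((ny, nx), dtype=complex)
          for y in range(ny_r):
              # The R true pixel rows that fold onto aliased row y.
              rows = [(y + r * ny_r) % ny for r in range(R)]
              for x in range(nx):
                  S = sens[:, rows, x]                 # n_coils x R encoding matrix
                  m = aliased[:, y, x]                 # measured aliased values, one per coil
                  rho, *_ = np.linalg.lstsq(S, m, rcond=None)  # least-squares unfolding
                  out[rows, x] = rho
          return out

    GRAPPA, by contrast, works in k-space: it estimates interpolation kernels from calibration lines rather than using explicit sensitivity maps, which is why the two families behave differently when calibration data are imperfect.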

  6. Sinusoidal echo-planar imaging with parallel acquisition technique for reduced acoustic noise in auditory fMRI.

    PubMed

    Zapp, Jascha; Schmitter, Sebastian; Schad, Lothar R

    2012-09-01

    To extend the parameter restrictions of a silent echo-planar imaging (sEPI) sequence using sinusoidal readout (RO) gradients, in particular with increased spatial resolution. The sound pressure level (SPL) of the most feasible configurations is compared to conventional EPI having trapezoidal RO gradients. We enhanced the sEPI sequence by integrating a parallel acquisition technique (PAT) on a 3 T magnetic resonance imaging (MRI) system. The SPL was measured for matrix sizes of 64 × 64 and 128 × 128 pixels, without and with PAT (R = 2). The signal-to-noise ratio (SNR) was examined for both sinusoidal and trapezoidal RO gradients. Compared to EPI PAT, the SPL could be reduced by up to 11.1 dB and 5.1 dB for matrix sizes of 64 × 64 and 128 × 128 pixels, respectively. The SNR of sinusoidal RO gradients is lower by a factor of 0.96 on average compared to trapezoidal RO gradients. The sEPI PAT sequence allows for 1) increased resolution, 2) expanded RO frequency range toward lower frequencies, which is in general beneficial for SPL, or 3) shortened TE, TR, and RO train length. At the same time, it generates lower SPL compared to conventional EPI for a wide range of RO frequencies while having the same imaging parameters. Copyright © 2012 Wiley Periodicals, Inc.

  7. A feasibility study for compressed sensing combined phase contrast MR angiography reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Hoon; Hong, Cheol-Pyo; Lee, Man-Woo; Han, Bong-Soo

    2012-02-01

    Phase contrast magnetic resonance angiography (PC MRA) is a technique for measuring flow velocity and visualizing vessels simultaneously. PC MRA requires long scan times because several flow-encoding gradients, each composed of a bipolar gradient pair, are needed to reconstruct the angiographic image. Acquisition times become even longer when PC MRA is performed on a low-field MRI system. In this study, we evaluated the feasibility of compressed sensing (CS) reconstruction for PC MRA data acquired on a low-field MRI system. We used a nonlinear reconstruction algorithm based on Bregman iteration for the CS image reconstruction and validated the usefulness of the CS-combined PC MRA technique. The CS-reconstructed PC MRA images provided a level of image quality similar to that of the fully sampled reconstructions, even though only half of the data were sampled. Although we did not use any dedicated hardware or methods that improve the temporal resolution of MR image acquisition, such as parallel imaging reconstruction with a phased-array coil or non-Cartesian trajectories, we expect the CS-combined PC MRA technique to help increase temporal resolution on low-field MRI systems.

  8. A high speed buffer for LV data acquisition

    NASA Technical Reports Server (NTRS)

    Cavone, Angelo A.; Sterlina, Patrick S.; Clemmons, James I., Jr.; Meyers, James F.

    1987-01-01

    The laser velocimeter (autocovariance) buffer interface is a data acquisition subsystem designed specifically for the acquisition of data from a laser velocimeter. The subsystem acquires data from up to six laser velocimeter components in parallel, measures the times between successive data points for each of the components, establishes and maintains a coincident condition between any two or three components, and acquires data from other instrumentation systems simultaneously with the laser velocimeter data points. The subsystem is designed to control the entire data acquisition process based on initial setup parameters obtained from a host computer and to be independent of the computer during the acquisition. On completion of the acquisition cycle, the interface transfers the contents of its memory to the host under direction of the host via a single 16-bit parallel DMA channel.

  9. Parallelized multi–graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy

    PubMed Central

    Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.

    2014-01-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm³ skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868
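
    A minimal, hedged sketch of the task-partitioning idea: the A-scan volume is split into blocks, each block is handed to one worker (standing in for one GPU), and the processed blocks are reassembled. The per-block processing here is just a placeholder FFT along depth; the real GD-OCM pipeline and the CUDA-level details are not reproduced, and all names are illustrative.

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def process_chunk(ascans):
          # Stand-in for per-GPU GD-OCM processing of a block of A-scans
          # (here just a magnitude FFT along depth; the real pipeline is more involved).
          return np.abs(np.fft.fft(ascans, axis=-1))

      def process_volume(ascans, n_workers=4):
          # Split the A-scans evenly across workers, emulating the per-GPU task
          # assignment described above, then reassemble the processed volume.
          # (On some platforms this must be called from under `if __name__ == "__main__":`.)
          chunks = np.array_split(ascans, n_workers, axis=0)
          with ProcessPoolExecutor(max_workers=n_workers) as pool:
              results = list(pool.map(process_chunk, chunks))
          return np.concatenate(results, axis=0)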

  10. Parallelized multi-graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy.

    PubMed

    Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P

    2014-07-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm³ skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.

  11. Improving quality of arterial spin labeling MR imaging at 3 Tesla with a 32-channel coil and parallel imaging.

    PubMed

    Ferré, Jean-Christophe; Petr, Jan; Bannier, Elise; Barillot, Christian; Gauvrit, Jean-Yves

    2012-05-01

    To compare 12-channel and 32-channel phased-array coils and to determine the optimal parallel imaging (PI) technique and factor for brain perfusion imaging using pulsed arterial spin labeling (PASL) at 3 Tesla (T). Twenty-seven healthy volunteers underwent 10 different PASL perfusion PICORE Q2TIPS scans at 3T using 12-channel and 32-channel coils without PI and with GRAPPA or mSENSE using factor 2. PI factors 3 and 4 were used only with the 32-channel coil. Visual quality was assessed using four parameters. Quantitative analyses were performed using temporal noise, contrast-to-noise and signal-to-noise ratios (CNR, SNR). Compared with the 12-channel acquisition, the scores for the 32-channel acquisition were significantly higher for overall visual quality, lower for noise, and higher for SNR and CNR. With the 32-channel coil, the artifact compromise achieved the best score with PI factor 2. Noise increased, and SNR and CNR decreased, with increasing PI factor. However, mSENSE 2 scores were not always significantly different from the acquisition without PI. For PASL at 3T, the 32-channel coil provided better quality than the 12-channel coil. With the 32-channel coil, mSENSE 2 seemed to offer the best compromise for decreasing artifacts without significantly reducing SNR and CNR. Copyright © 2012 Wiley Periodicals, Inc.

  12. A high speed multifocal multiphoton fluorescence lifetime imaging microscope for live-cell FRET imaging

    PubMed Central

    Poland, Simon P.; Krstajić, Nikola; Monypenny, James; Coelho, Simao; Tyndall, David; Walker, Richard J.; Devauges, Viviane; Richardson, Justin; Dutton, Neale; Barber, Paul; Li, David Day-Uei; Suhling, Klaus; Ng, Tony; Henderson, Robert K.; Ameer-Beg, Simon M.

    2015-01-01

    We demonstrate diffraction-limited multiphoton imaging in a massively parallel, fully addressable, time-resolved multi-beam multiphoton microscope capable of producing fluorescence lifetime images with sub-50 ps temporal resolution. This imaging platform offers a significant improvement in acquisition speed over single-beam laser scanning FLIM, by a factor of 64, without compromising either the temporal or spatial resolution of the system. We demonstrate FLIM acquisition at 500 ms with live cells expressing green fluorescent protein. The applicability of the technique to imaging protein-protein interactions in live cells is exemplified by observation of time-dependent FRET between the epidermal growth factor receptor (EGFR) and the adapter protein Grb2 following stimulation with the receptor ligand. Furthermore, ligand-dependent association of HER2-HER3 receptor tyrosine kinases was observed on a similar timescale and involved the internalisation and accumulation of receptor heterodimers within endosomes. These data demonstrate the broad applicability of this novel FLIM technique to the spatio-temporal dynamics of protein-protein interactions. PMID:25780724

  13. Strategies to minimize sedation in pediatric body magnetic resonance imaging.

    PubMed

    Jaimes, Camilo; Gee, Michael S

    2016-05-01

    The high soft-tissue contrast of MRI and the absence of ionizing radiation make it a valuable tool for assessment of body pathology in children. Infants and young children are often unable to cooperate with awake MRI so sedation or general anesthesia might be required. However, given recent data on the costs and potential risks of anesthesia in young children, there is a need to try to decrease or avoid sedation in this population when possible. Child life specialists in radiology frequently use behavioral techniques and audiovisual support devices, and they practice with children and families using mock scanners to improve child compliance with MRI. Optimization of the MR scanner environment is also important to create a child-friendly space. If the child can remain inside the MRI scanner, a variety of emerging techniques can reduce the effect of involuntary motion. Using sequences with short acquisition times such as single-shot fast spin echo and volumetric gradient echo can decrease artifacts and improve image quality. Breath-holding, respiratory triggering and signal averaging all reduce respiratory motion. Emerging techniques such as radial and multislice k-space acquisition, navigator motion correction, as well as parallel imaging and compressed sensing reconstruction methods can further accelerate acquisition and decrease motion. Collaboration among radiologists, anesthesiologists, technologists, child life specialists and families is crucial for successful performance of MRI in young children.

  14. Velocity navigator for motion compensated thermometry.

    PubMed

    Maier, Florian; Krafft, Axel J; Yung, Joshua P; Stafford, R Jason; Elliott, Andrew; Dillmann, Rüdiger; Semmler, Wolfhard; Bock, Michael

    2012-02-01

    Proton resonance frequency shift thermometry is sensitive to breathing motion, which leads to incorrect phase differences. In this work, a novel velocity-sensitive navigator technique for triggering MR thermometry image acquisition is presented. A segmented echo planar imaging pulse sequence was modified for velocity-triggered temperature mapping. Trigger events were generated when the estimated velocity was less than 0.2 cm/s during the slowdown phase along the velocity-encoding direction. To remove remaining high-frequency spikes from pulsation in real time, a Kalman filter was applied to the velocity navigator data. A phantom experiment with heating and an initial volunteer experiment without heating were performed to show the applicability of this technique. Additionally, a breath-hold experiment was conducted for comparison. A temperature rise of ΔT = +37.3°C was seen in the phantom experiment, and a root mean square error (RMSE) outside the heated region of 2.3°C was obtained for periodic motion. In the volunteer experiment, an RMSE of 2.7°C/2.9°C (triggered vs. breath hold) was measured. A novel velocity navigator with real-time Kalman filter postprocessing significantly improves the temperature accuracy over non-triggered acquisitions and appears comparable to a breath-held acquisition. The proposed technique might be applied clinically for monitoring of thermal ablations in abdominal organs.
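
    A scalar Kalman filter of the kind described can be sketched in a few lines: the filter smooths the navigator velocity readings, and a trigger fires when the filtered velocity magnitude drops below 0.2 cm/s while still decreasing. The process and measurement variances (q, r) and the exact slowdown test below are illustrative assumptions, not the paper's actual parameters.

      import numpy as np

      def kalman_smooth(v_meas, q=1e-3, r=1e-2):
          # Scalar Kalman filter with a random-walk model; q and r are illustrative
          # process/measurement variances used to suppress pulsation spikes.
          v_est = np.empty(len(v_meas), dtype=float)
          x, p = float(v_meas[0]), 1.0
          for i, z in enumerate(v_meas):
              p = p + q                    # predict
              k = p / (p + r)              # Kalman gain
              x = x + k * (z - x)          # update with the new navigator reading
              p = (1.0 - k) * p
              v_est[i] = x
          return v_est

      def trigger_indices(v_est, thresh=0.2):
          # Fire a trigger when |velocity| < thresh (cm/s) while the magnitude is decreasing.
          mag = np.abs(v_est)
          decreasing = np.r_[False, mag[1:] < mag[:-1]]
          return np.flatnonzero((mag < thresh) & decreasing)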

  15. GPU-accelerated regularized iterative reconstruction for few-view cone beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Goussard, Yves, E-mail: yves.goussard@polymtl.ca; Després, Philippe, E-mail: philippe.despres@phy.ulaval.ca

    2015-04-15

    Purpose: The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-view acquisitions in clinically acceptable time frames. Methods: The proposed technique combines a modified ordered subsets convex (OSC) algorithm and the total variation minimization (TV) regularization technique and is called OSC-TV. The number of subsets of each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it is implemented on a graphics processing unit, using parallelization to accelerate computations. Results: The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data, and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are 1–2 min and are compatible with the typical clinical workflow for non-real-time applications. Conclusions: Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks.

  16. Simultaneous Multi-Slice fMRI using Spiral Trajectories

    PubMed Central

    Zahneisen, Benjamin; Poser, Benedikt A.; Ernst, Thomas; Stenger, V. Andrew

    2014-01-01

    Parallel imaging methods using multi-coil receiver arrays have been shown to be effective for increasing MRI acquisition speed. However parallel imaging methods for fMRI with 2D sequences show only limited improvements in temporal resolution because of the long echo times needed for BOLD contrast. Recently, Simultaneous Multi-Slice (SMS) imaging techniques have been shown to increase fMRI temporal resolution by factors of four and higher. In SMS fMRI multiple slices can be acquired simultaneously using Echo Planar Imaging (EPI) and the overlapping slices are un-aliased using a parallel imaging reconstruction with multiple receivers. The slice separation can be further improved using the “blipped-CAIPI” EPI sequence that provides a more efficient sampling of the SMS 3D k-space. In this paper a blipped-spiral SMS sequence for ultra-fast fMRI is presented. The blipped-spiral sequence combines the sampling efficiency of spiral trajectories with the SMS encoding concept used in blipped-CAIPI EPI. We show that blipped spiral acquisition can achieve almost whole brain coverage at 3 mm isotropic resolution in 168 ms. It is also demonstrated that the high temporal resolution allows for dynamic BOLD lag time measurement using visual/motor and retinotopic mapping paradigms. The local BOLD lag time within the visual cortex following the retinotopic mapping stimulation of expanding flickering rings is directly measured and easily translated into an eccentricity map of the cortex. PMID:24518259

  17. Second Language Acquisition: Possible Insights from Studies on How Birds Acquire Song.

    ERIC Educational Resources Information Center

    Neapolitan, Denise M.; And Others

    1988-01-01

    Reviews research that demonstrates parallels between general linguistic and cognitive processes in human language acquisition and avian acquisition of song and discusses how such research may provide new insights into the processes of second-language acquisition. (Author/CB)

  18. Partial fourier and parallel MR image reconstruction with integrated gradient nonlinearity correction.

    PubMed

    Tao, Shengzhen; Trzasko, Joshua D; Shu, Yunhong; Weavers, Paul T; Huston, John; Gray, Erin M; Bernstein, Matt A

    2016-06-01

    To describe how integrated gradient nonlinearity (GNL) correction can be used within noniterative partial Fourier (homodyne) and parallel (SENSE and GRAPPA) MR image reconstruction strategies, and demonstrate that performing GNL correction during, rather than after, these routines mitigates the image blurring and resolution loss caused by postreconstruction image domain based GNL correction. Starting from partial Fourier and parallel magnetic resonance imaging signal models that explicitly account for GNL, noniterative image reconstruction strategies for each accelerated acquisition technique are derived under the same core mathematical assumptions as their standard counterparts. A series of phantom and in vivo experiments on retrospectively undersampled data were performed to investigate the spatial resolution benefit of integrated GNL correction over conventional postreconstruction correction. Phantom and in vivo results demonstrate that the integrated GNL correction reduces the image blurring introduced by the conventional GNL correction, while still correcting GNL-induced coarse-scale geometrical distortion. Images generated from undersampled data using the proposed integrated GNL strategies offer superior depiction of fine image detail, for example, phantom resolution inserts and anatomical tissue boundaries. Noniterative partial Fourier and parallel imaging reconstruction methods with integrated GNL correction reduce the resolution loss that occurs during conventional postreconstruction GNL correction while preserving the computational efficiency of standard reconstruction techniques. Magn Reson Med 75:2534-2544, 2016. © 2015 Wiley Periodicals, Inc. © 2015 Wiley Periodicals, Inc.

  19. Impact of MR Acquisition Parameters on DTI Scalar Indexes: A Tractography Based Approach.

    PubMed

    Barrio-Arranz, Gonzalo; de Luis-García, Rodrigo; Tristán-Vega, Antonio; Martín-Fernández, Marcos; Aja-Fernández, Santiago

    2015-01-01

    Acquisition parameters play a crucial role in Diffusion Tensor Imaging (DTI), as they have a major impact on the values of scalar measures such as Fractional Anisotropy (FA) or Mean Diffusivity (MD) that are usually the focus of clinical studies based on white matter analysis. This paper presents an analysis of the impact of the variation of several acquisition parameters on these scalar measures with a novel double focus. First, a tractography-based approach is employed, motivated by the significant number of clinical studies that are carried out using this technique. Second, the consequences of simultaneous changes in multiple parameters are analyzed: number of gradient directions, b-value and voxel resolution. Results indicate that the FA is most affected by changes in the number of gradients and voxel resolution, while MD is especially influenced by variations in the b-value. Even if the choice of a tractography algorithm has an effect on the numerical values of the final scalar measures, the evolution of these measures when acquisition parameters are modified is parallel.

  20. Impact of MR Acquisition Parameters on DTI Scalar Indexes: A Tractography Based Approach

    PubMed Central

    Barrio-Arranz, Gonzalo; de Luis-García, Rodrigo; Tristán-Vega, Antonio; Martín-Fernández, Marcos; Aja-Fernández, Santiago

    2015-01-01

    Acquisition parameters play a crucial role in Diffusion Tensor Imaging (DTI), as they have a major impact on the values of scalar measures such as Fractional Anisotropy (FA) or Mean Diffusivity (MD) that are usually the focus of clinical studies based on white matter analysis. This paper presents an analysis of the impact of the variation of several acquisition parameters on these scalar measures with a novel double focus. First, a tractography-based approach is employed, motivated by the significant number of clinical studies that are carried out using this technique. Second, the consequences of simultaneous changes in multiple parameters are analyzed: number of gradient directions, b-value and voxel resolution. Results indicate that the FA is most affected by changes in the number of gradients and voxel resolution, while MD is especially influenced by variations in the b-value. Even if the choice of a tractography algorithm has an effect on the numerical values of the final scalar measures, the evolution of these measures when acquisition parameters are modified is parallel. PMID:26457415

  1. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and a FIFO storage buffer per channel. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and a low error rate.
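
    The synchronization scheme can be paraphrased in software as per-channel FIFOs plus a bus controller that pops the oldest entry from every FIFO and checks that the trigger-assigned ID numbers agree. The sketch below is a behavioural illustration only; class and function names are hypothetical, and the actual system is implemented in digitizer hardware.

      from collections import deque

      class Channel:
          # One detector channel with its own digitizer output FIFO.
          def __init__(self):
              self.fifo = deque()

          def digitize(self, event_id, pulse_height):
              # Triggered digitization: every channel stores one entry per trigger,
              # tagged with the trigger-generated ID number.
              self.fifo.append((event_id, pulse_height))

      def bus_controller(channels):
          # Move the oldest entry from every channel FIFO onto a common "bus" and
          # flag an error if the event IDs disagree (channels out of step).
          events = []
          while all(ch.fifo for ch in channels):
              entries = [ch.fifo.popleft() for ch in channels]
              ids = {eid for eid, _ in entries}
              if len(ids) != 1:
                  raise RuntimeError(f"channel desynchronization, IDs {ids}")
              events.append((ids.pop(), [height for _, height in entries]))
          return events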

  2. A fast multiparameter MRI approach for acute stroke assessment on a 3T clinical scanner: preliminary results in a non-human primate model with transient ischemic occlusion.

    PubMed

    Zhang, Xiaodong; Tong, Frank; Li, Chun-Xia; Yan, Yumei; Nair, Govind; Nagaoka, Tsukasa; Tanaka, Yoji; Zola, Stuart; Howell, Leonard

    2014-04-01

    Many MRI parameters have been explored and have demonstrated the capability or potential to evaluate acute stroke injury, providing anatomical, microstructural, functional, or neurochemical information for diagnostic purposes and therapeutic development. However, the application of multiparameter MRI approaches is hindered in the clinic by the very limited time window after the stroke insult. Parallel imaging techniques can accelerate MRI data acquisition dramatically, have been incorporated in modern clinical scanners, and are increasingly applied for various diagnostic purposes. In the present study, a fast multiparameter MRI approach including structural T1-weighted imaging (T1W), T2-weighted imaging (T2W), diffusion tensor imaging (DTI), T2 mapping, proton magnetic resonance spectroscopy, cerebral blood flow (CBF), and magnetization transfer (MT) imaging was implemented and optimized for assessing acute stroke injury on a 3T clinical scanner. A macaque model of transient ischemic stroke induced by a minimally interventional approach was used to evaluate the multiparameter MRI approach. The preliminary results indicate that the surgical procedure successfully induced ischemic occlusion in the cortex and/or subcortex in adult macaque monkeys (n=4). Application of the parallel imaging technique substantially reduced the scanning duration of most MRI data acquisitions, allowing fast and repeated evaluation of acute stroke injury. Hence, the use of the multiparameter MRI approach with up to five quantitative measures can provide significant advantages in preclinical or clinical studies of stroke.

  3. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.

  4. Chemical Shift Encoded Water–Fat Separation Using Parallel Imaging and Compressed Sensing

    PubMed Central

    Sharma, Samir D.; Hu, Houchun H.; Nayak, Krishna S.

    2013-01-01

    Chemical shift encoded techniques have received considerable attention recently because they can reliably separate water and fat in the presence of off-resonance. The insensitivity to off-resonance requires that data be acquired at multiple echo times, which increases the scan time as compared to a single echo acquisition. The increased scan time often requires that a compromise be made between the spatial resolution, the volume coverage, and the tolerance to artifacts from subject motion. This work describes a combined parallel imaging and compressed sensing approach for accelerated water–fat separation. In addition, the use of multiscale cubic B-splines for B0 field map estimation is introduced. The water and fat images and the B0 field map are estimated via an alternating minimization. Coil sensitivity information is derived from a calculated k-space convolution kernel and l1-regularization is imposed on the coil-combined water and fat image estimates. Uniform water–fat separation is demonstrated from retrospectively undersampled data in the liver, brachial plexus, ankle, and knee as well as from a prospectively undersampled acquisition of the knee at 8.6x acceleration. PMID:22505285

  5. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE)

    PubMed Central

    Sharif, Behzad; Derbyshire, J. Andrew; Faranesh, Anthony Z.; Bresler, Yoram

    2010-01-01

    MR imaging of the human heart without explicit cardiac synchronization promises to extend the applicability of cardiac MR to a larger patient population and potentially expand its diagnostic capabilities. However, conventional non-gated imaging techniques typically suffer from low image quality or inadequate spatio-temporal resolution and fidelity. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE) is a highly-accelerated non-gated dynamic imaging method that enables artifact-free imaging with high spatio-temporal resolutions by utilizing novel computational techniques to optimize the imaging process. In addition to using parallel imaging, the method gains acceleration from a physiologically-driven spatio-temporal support model; hence, it is doubly accelerated. The support model is patient-adaptive, i.e., its geometry depends on dynamics of the imaged slice, e.g., subject’s heart-rate and heart location within the slice. The proposed method is also doubly adaptive as it adapts both the acquisition and reconstruction schemes. Based on the theory of time-sequential sampling, the proposed framework explicitly accounts for speed limitations of gradient encoding and provides performance guarantees on achievable image quality. The presented in-vivo results demonstrate the effectiveness and feasibility of the PARADISE method for high resolution non-gated cardiac MRI during a short breath-hold. PMID:20665794

  6. Data Acquisition with GPUs: The DAQ for the Muon g-2 Experiment at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gohn, W.

    Graphical Processing Units (GPUs) have recently become a valuable computing tool for the acquisition of data at high rates and for a relatively low cost. The devices work by parallelizing the code into thousands of threads, each executing a simple process, such as identifying pulses from a waveform digitizer. The CUDA programming library can be used to effectively write code to parallelize such tasks on Nvidia GPUs, providing a significant upgrade in performance over CPU-based acquisition systems. The muon g-2 experiment at Fermilab relies heavily on GPUs to process its data. The data acquisition system for this experiment must be able to create deadtime-free records from 700 μs muon spills at a raw data rate of 18 GB per second. Data will be collected using 1296 channels of μTCA-based 800 MSPS, 12-bit waveform digitizers and processed in a layered array of networked commodity processors with 24 GPUs working in parallel to perform a fast recording of the muon decays during the spill. The described data acquisition system is currently being constructed and will be fully operational before the start of the experiment in 2017.
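
    The per-waveform work the GPUs perform (identifying pulses in a digitized trace) can be illustrated with a simple threshold-crossing island finder. The NumPy sketch below is a CPU stand-in for the experiment's CUDA kernels, and the threshold-based definition of a pulse is an assumption made for illustration.

      import numpy as np

      def find_pulses(trace, threshold):
          # Return (start, stop) sample indices of contiguous regions where the
          # digitized trace exceeds threshold -- the kind of per-waveform pulse
          # identification performed in parallel across the digitizer channels.
          above = trace > threshold
          edges = np.diff(above.astype(np.int8))
          starts = np.flatnonzero(edges == 1) + 1
          stops = np.flatnonzero(edges == -1) + 1
          if above[0]:
              starts = np.r_[0, starts]          # pulse already in progress at sample 0
          if above[-1]:
              stops = np.r_[stops, trace.size]   # pulse still in progress at the last sample
          return list(zip(starts, stops))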

  7. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...

    2017-01-28

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have applied to the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over the single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  8. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have applied to the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over the single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  9. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    Engh, G.J. van den; Stokdijk, W.

    1992-09-22

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate. 17 figs.

  10. Learning in Parallel: Using Parallel Corpora to Enhance Written Language Acquisition at the Beginning Level

    ERIC Educational Resources Information Center

    Bluemel, Brody

    2014-01-01

    This article illustrates the pedagogical value of incorporating parallel corpora in foreign language education. It explores the development of a Chinese/English parallel corpus designed specifically for pedagogical application. The corpus tool was created to aid language learners in reading comprehension and writing development by making foreign…

  11. Estimation of Spatiotemporal Sensitivity Using Band-limited Signals with No Additional Acquisitions for k-t Parallel Imaging.

    PubMed

    Takeshima, Hidenori; Saitoh, Kanako; Nitta, Shuhei; Shiodera, Taichiro; Takeguchi, Tomoyuki; Bannae, Shuhei; Kuhara, Shigehide

    2018-03-13

    Dynamic MR techniques, such as cardiac cine imaging, benefit from shorter acquisition times. The goal of the present study was to develop a method that achieves short acquisition times, while maintaining a cost-effective reconstruction, for dynamic MRI. k-t sensitivity encoding (k-t SENSE) was identified as the base method to be enhanced to meet these two requirements. The proposed method achieves a reduction in acquisition time by estimating the spatiotemporal (x-f) sensitivity without requiring the acquisition of the alias-free signals typical of the k-t SENSE technique. The cost-effective reconstruction, in turn, is achieved by a computationally efficient estimation of the x-f sensitivity from band-limited signals of the aliased inputs. Such band-limited signals are suitable for sensitivity estimation because the strongly aliased signals have been removed. For the same nominal reduction factor of 4, the net reduction factor of 4 achieved by the proposed method was significantly higher than the 2.29 achieved by k-t SENSE. The processing time was reduced from 4.1 s for k-t SENSE to 1.7 s for the proposed method. The image quality obtained using the proposed method proved to be superior (mean squared error [MSE] ± standard deviation [SD] = 6.85 ± 2.73) compared to the k-t SENSE case (MSE ± SD = 12.73 ± 3.60) for the vertical long-axis (VLA) view, as well as other views. In the present study, k-t SENSE was identified as a suitable base method to be improved, achieving both short acquisition times and a cost-effective reconstruction. To enhance these characteristics of the base method, a novel implementation is proposed, estimating the x-f sensitivity without the need for an explicit scan of the reference signals. Experimental results showed that the acquisition and computational times and the image quality of the proposed method were improved compared to the standard k-t SENSE method.

  12. 1 μs broadband frequency sweeping reflectometry for plasma density and fluctuation profile measurements

    NASA Astrophysics Data System (ADS)

    Clairet, F.; Bottereau, C.; Medvedeva, A.; Molina, D.; Conway, G. D.; Silva, A.; Stroth, U.; ASDEX Upgrade Team; Tore Supra Team; Eurofusion Mst1 Team

    2017-11-01

    Frequency swept reflectometry has reached the symbolic value of 1 μs sweeping time; this performance has been made possible thanks to improved control of the ramp voltage driving the frequency source. In parallel, the memory depth of the acquisition system has been upgraded and can now provide up to 200 000 signals during a plasma discharge. Additional improvements concerning the determination of the acquisition trigger delay and the voltage ramp linearity required by this ultra-fast technique have also been made. While this diagnostic is traditionally dedicated to measuring the plasma electron density profile, such a fast sweeping rate enables the study of fast plasma events and turbulence with unprecedented time and radial resolution from the edge to the core. Experimental results obtained on ASDEX Upgrade plasmas are presented to demonstrate the performance of the diagnostic.

  13. Hexagonal undersampling for faster MRI near metallic implants.

    PubMed

    Sveinsson, Bragi; Worters, Pauline W; Gold, Garry E; Hargreaves, Brian A

    2015-02-01

    Slice encoding for metal artifact correction (SEMAC) acquires a three-dimensional image of each excited slice with view-angle tilting to reduce slice- and readout-direction artifacts, respectively, but requires additional imaging time. The purpose of this study was to provide a technique for faster imaging around metallic implants by undersampling k-space. Assuming that areas of slice distortion are localized, hexagonal sampling can reduce imaging time by 50% compared with conventional scans. This work demonstrates the technique through comparisons of fully sampled images with undersampled images, either simulated from fully acquired data or actually undersampled during acquisition, in patients and phantoms. Hexagonal sampling is also shown to be compatible with parallel imaging and partial Fourier acquisitions. Image quality was evaluated using a structural similarity (SSIM) index. Images acquired with hexagonal undersampling had no visible difference in artifact suppression from fully sampled images. The SSIM index indicated high similarity to fully sampled images in all cases. The study demonstrates the ability to reduce scan time by undersampling without compromising image quality. © 2014 Wiley Periodicals, Inc.
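
    One common way to realize the 50% reduction described above is a checkerboard lattice over the two phase-encode axes, which forms a hexagonal sampling pattern in k-space. The sketch below generates such a mask; it is an illustrative assumption, and the exact lattice used in the paper may differ.

      import numpy as np

      def hexagonal_mask(n_ky, n_kz):
          # Acquire (ky, kz) phase encodes only where ky + kz is even: a checkerboard
          # (hexagonal) lattice that halves the number of encodes versus full sampling.
          ky, kz = np.meshgrid(np.arange(n_ky), np.arange(n_kz), indexing="ij")
          return (ky + kz) % 2 == 0

      # Example: hexagonal_mask(128, 24).mean() evaluates to 0.5, i.e. a 50% reduction
      # in the phase-encode plane before any parallel imaging or partial Fourier.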

  14. Techniques for the rapid display and manipulation of 3-D biomedical data.

    PubMed

    Goldwasser, S M; Reynolds, R A; Talton, D A; Walsh, E S

    1988-01-01

    The use of fully interactive 3-D workstations with true real-time performance will become increasingly common as technology matures and economical commercial systems become available. This paper provides a comprehensive introduction to high speed approaches to the display and manipulation of 3-D medical objects obtained from tomographic data acquisition systems such as CT, MR, and PET. A variety of techniques are outlined including the use of software on conventional minicomputers, hardware assist devices such as array processors and programmable frame buffers, and special purpose computer architecture for dedicated high performance systems. While both algorithms and architectures are addressed, the major theme centers around the utilization of hardware-based approaches including parallel processors for the implementation of true real-time systems.

  15. Characterization of Harmonic Signal Acquisition with Parallel Dipole and Multipole Detectors

    NASA Astrophysics Data System (ADS)

    Park, Sung-Gun; Anderson, Gordon A.; Bruce, James E.

    2018-04-01

    Fourier transform ion cyclotron resonance mass spectrometry (FTICR-MS) is a powerful instrument for the study of complex biological samples due to its high resolution and mass measurement accuracy. However, the relatively long signal acquisition periods needed to achieve high resolution can limit applications of FTICR-MS. The use of multiple pairs of detector electrodes enables detection of harmonic frequencies present at integer multiples of the fundamental cyclotron frequency, and the obtained resolving power for a given acquisition period increases linearly with the order of the harmonic signal. However, harmonic signal detection also increases spectral complexity and presents challenges for interpretation. In the present work, ICR cells with independent dipole and harmonic detection electrodes and preamplifiers are demonstrated. A benefit of this approach is the ability to independently acquire fundamental and multiple harmonic signals in parallel using the same ions under identical conditions, enabling direct comparison of achieved performance as parameters are varied. Spectra from harmonic signals showed generally higher resolving power than spectra acquired with fundamental signals of equal signal duration. In addition, the maximum observed signal-to-noise (S/N) ratio from harmonic signals exceeded that of fundamental signals by 50 to 100%. Finally, parallel detection of fundamental and harmonic signals enables deconvolution of overlapping harmonic signals, since observed fundamental frequencies can be used to unambiguously calculate all possible harmonic frequencies. Thus, the present application of parallel fundamental and harmonic signal acquisition offers a general approach to improve utilization of harmonic signals to yield high-resolution spectra with decreased acquisition time.
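
    As a rough illustration of the deconvolution idea in the last sentence, the sketch below matches peaks observed on a harmonic detector to integer multiples of the fundamental frequencies seen on the dipole detector. All names, tolerances, and the peak-list representation are assumptions made for the example, not details from the paper.

```python
def assign_harmonics(fundamentals_hz, harmonic_peaks_hz, order=3, tol_hz=0.5):
    """Assign harmonic-detector peaks to expected multiples of fundamentals.

    fundamentals_hz   : cyclotron frequencies observed on the dipole channel
    harmonic_peaks_hz : peak frequencies observed on the multipole channel
    order             : harmonic order of the multipole detector
    Returns (harmonic_peak, fundamental) pairs whose difference from
    order * fundamental is within tol_hz.
    """
    pairs = []
    for hp in harmonic_peaks_hz:
        for f0 in fundamentals_hz:
            if abs(hp - order * f0) < tol_hz:
                pairs.append((hp, f0))
    return pairs

print(assign_harmonics([100_000.0, 150_000.0], [300_000.2, 450_000.1]))
```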

  16. Generation of and Retraction from Cross-Linguistically Motivated Structures in Bilingual First Language Acquisition.

    ERIC Educational Resources Information Center

    Dopke, Susanne

    2000-01-01

    Focuses on unusual developmental structures during the simultaneous acquisition of German and English in early childhood, which were evident parallel to a majority of target structures. Explains the cognitive motivation for unusual acquisition structures as well as the eventual retraction from them. (Author/VWL)

  17. MRI artifact reduction and quality improvement in the upper abdomen with PROPELLER and prospective acquisition correction (PACE) technique.

    PubMed

    Hirokawa, Yuusuke; Isoda, Hiroyoshi; Maetani, Yoji S; Arizono, Shigeki; Shimada, Kotaro; Togashi, Kaori

    2008-10-01

    The purpose of this study was to evaluate the effectiveness of the periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER [BLADE in the MR systems from Siemens Medical Solutions]) technique, combined with a respiratory compensation technique, for motion correction, image noise reduction, improved sharpness of the liver edge, and overall image quality in the upper abdomen. Twenty healthy adult volunteers with a mean age of 28 years (age range, 23-42 years) underwent upper abdominal MRI with a 1.5-T scanner. For each subject, fat-saturated T2-weighted turbo spin-echo (TSE) sequences with respiratory compensation (prospective acquisition correction [PACE]) were performed with and without the BLADE technique. Ghosting artifacts, other artifacts such as those from respiratory motion and bowel movement, sharpness of the liver edge, image noise, and overall image quality were evaluated visually by three radiologists using a 5-point scale for qualitative analysis. The Wilcoxon signed rank test was used to determine whether a significant difference existed between images with and without BLADE. A p value less than 0.05 was considered statistically significant. In the BLADE images, image artifacts, sharpness of the liver edge, image noise, and overall image quality were significantly improved (p < 0.001). With the BLADE technique, T2-weighted TSE images of the upper abdomen showed reduced image artifacts, including ghosting, reduced image noise, and better overall image quality.

  18. A cable-driven parallel manipulator with force sensing capabilities for high-accuracy tissue endomicroscopy.

    PubMed

    Miyashita, Kiyoteru; Oude Vrielink, Timo; Mylonas, George

    2018-05-01

    Endomicroscopy (EM) provides high-resolution, non-invasive histological tissue information and can be used for scanning large areas of tissue to assess cancerous and pre-cancerous lesions and their margins. However, current robotic solutions do not provide the accuracy and force sensitivity required to perform safe and accurate tissue scanning. A new surgical instrument has been developed that uses a cable-driven parallel mechanism (CDPM) to manipulate an EM probe. End-effector forces are determined by measuring the tensions in each cable. As a result, the instrument allows a contact force to be applied accurately to tissue while offering high-resolution, highly repeatable probe movement. Force sensitivities of 0.2 and 0.6 N were found for the 1- and 2-DoF image acquisition methods, respectively. A back-stepping technique can be used when higher force sensitivity is required for the acquisition of high-quality tissue images. This method was successful in acquiring images on ex vivo liver tissue. The proposed approach offers high force sensitivity and precise control, which is essential for robotic EM. The technical benefits of the current system can also be used for other surgical robotic applications, including safe autonomous control, haptic feedback, and palpation.

  19. 1H Spectroscopic Imaging of Human Brain at 3T: Comparison of Fast 3D-MRSI Techniques

    PubMed Central

    Zierhut, Matthew L.; Ozturk-Isik, Esin; Chen, Albert P.; Park, Ilwoo; Vigneron, Daniel B.; Nelson, Sarah J.

    2011-01-01

    Purpose To investigate the signal-to-noise ratio (SNR) and data quality of time-reduced 1H 3D-MRSI techniques in the human brain at 3T. Materials and Methods Techniques that were investigated included ellipsoidal k-space sampling, parallel imaging, and EPSI. The SNR values for NAA, Cho, Cre, and lactate or lipid peaks were compared after correcting for effective spatial resolution and acquisition time in a phantom and in the brains of human volunteers. Other factors considered were linewidths, metabolite ratios, partial volume effects, and subcutaneous lipid contamination. Results In volunteers, the median normalized SNR for parallel imaging data decreased by 34–42%, but could be significantly improved using regularization. The normalized signal-to-noise loss in flyback EPSI data was 11–18%. The effective spatial resolutions of the traditional, ellipsoidal, SENSE, and EPSI data were 1.02, 2.43, 1.03, and 1.01 cm3, respectively. As expected, lipid contamination was variable between subjects but was highest for the SENSE data. Patient data obtained using the flyback EPSI method were of excellent quality. Conclusions Data from all 1H 3D-MRSI techniques were qualitatively acceptable, based upon SNR, linewidths, and metabolite ratios. The larger FOV obtained with the EPSI methods showed negligible lipid aliasing with acceptable SNR values in less than 9.5 minutes without compromising the PSF. PMID:19711396

  20. 3D GRASE PROPELLER: improved image acquisition technique for arterial spin labeling perfusion imaging.

    PubMed

    Tan, Huan; Hoge, W Scott; Hamilton, Craig A; Günther, Matthias; Kraft, Robert A

    2011-07-01

    Arterial spin labeling is a noninvasive technique that can quantitatively measure cerebral blood flow. While arterial spin labeling traditionally employs 2D echo planar imaging or spiral acquisition trajectories, single-shot 3D gradient echo and spin echo (GRASE) is gaining popularity in arterial spin labeling due to its inherent signal-to-noise ratio advantage and spatial coverage. However, a major limitation of 3D GRASE is through-plane blurring caused by T2 decay. A novel technique combining 3D GRASE with a periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) trajectory is presented to minimize through-plane blurring without sacrificing perfusion sensitivity or increasing total scan time. Full-brain perfusion images were acquired at a 3 × 3 × 5 mm3 nominal voxel size with a pulsed arterial spin labeling preparation sequence. Data from five healthy subjects were acquired on a GE 1.5T scanner in less than 4 minutes per subject. While showing good agreement in cerebral blood flow quantification with 3D gradient echo and spin echo, 3D GRASE PROPELLER demonstrated reduced through-plane blurring, improved anatomical details, high repeatability and robustness against motion, making it suitable for routine clinical use. Copyright © 2011 Wiley-Liss, Inc.

  1. Magnetic resonance for laryngeal cancer.

    PubMed

    Maroldi, Roberto; Ravanelli, Marco; Farina, Davide

    2014-04-01

    This review summarizes the most recent experiences in integrating magnetic resonance into the assessment of the local extent of laryngeal cancer and the detection of submucosal recurrences. Advances in magnetic resonance have been characterized by the development of technical solutions that shorten the acquisition time, thereby reducing motion artifacts, and increase the spatial resolution. Phased-array surface coils, directly applied to the neck, enable the use of parallel-imaging techniques, which greatly reduce the acquisition time, and, being closer to the larynx, amplify the signal intensity. One of the most important drawbacks of this technique is the small field-of-view, restricting the imaged area to the larynx. Furthermore, diffusion-weighted imaging (DWI) has expanded the set of magnetic resonance sequences. Unlike computed tomography (CT), which has only two variables (precontrast/postcontrast), magnetic resonance is based on a multiparameter analysis (T2-weighting and T1-weighting, DWI, and postcontrast acquisition). This multiparameter approach amplifies the contrast resolution. It has also made it possible to differentiate scar tissue (after laser resection) from submucosal recurrent disease. In addition, DWI sequences have the potential for more precise discrimination of peritumoral edema from neoplastic tissue, which may improve the assessment of paraglottic space invasion. Magnetic resonance of the larynx is technically challenging. The use of surface coils and motion-reducing techniques is critical to achieve adequate image quality. The intrinsic high contrast resolution is further increased by the integration of information from different sequences. When CT has not been conclusive, magnetic resonance is indicated in the pretreatment local assessment and in cases of suspected submucosal recurrence.

  2. Concentric Rings K-Space Trajectory for Hyperpolarized 13C MR Spectroscopic Imaging

    PubMed Central

    Jiang, Wenwen; Lustig, Michael; Larson, Peder E.Z.

    2014-01-01

    Purpose To develop a robust and rapid imaging technique for hyperpolarized 13C MR Spectroscopic Imaging (MRSI) and investigate its performance. Methods A concentric rings readout trajectory with constant angular velocity is proposed for hyperpolarized 13C spectroscopic imaging and its properties are analyzed. Quantitative analyses of design tradeoffs are presented for several imaging scenarios. The first applications of concentric rings to 13C phantoms and to in vivo animal hyperpolarized 13C MRSI studies were performed to demonstrate the feasibility of the proposed method. Finally, a parallel imaging accelerated concentric rings study is presented. Results The concentric rings MRSI trajectory has the advantage of reduced acquisition time compared to echo-planar spectroscopic imaging (EPSI). It provides sufficient spectral bandwidth with relatively high SNR efficiency compared to EPSI and spiral techniques. Phantom and in vivo animal studies showed good image quality with half the scan time and reduced pulsatile flow artifacts compared to EPSI. Parallel imaging accelerated concentric rings showed advantages over Cartesian sampling in g-factor simulations and demonstrated aliasing-free image quality in a hyperpolarized 13C in vivo study. Conclusion The concentric rings trajectory is a robust and rapid imaging technique that fits very well with the speed, bandwidth, and resolution requirements of hyperpolarized 13C MRSI. PMID:25533653
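
    For readers unfamiliar with the trajectory, the sketch below generates k-space coordinates for a set of concentric rings traversed at constant angular velocity, one ring per excitation. The radius spacing, units, and names are illustrative assumptions; the published design involves additional tradeoffs (spectral bandwidth, gradient limits) not modeled here.

```python
import numpy as np

def concentric_rings(n_rings, pts_per_ring, kmax):
    """Complex k-space samples (kx + i*ky) for a concentric-rings readout.

    Each ring is a circle sampled at constant angular velocity, so every ring
    sees the same dwell time and spectral bandwidth; radii are spaced evenly
    out to kmax, with one ring acquired per excitation.
    """
    radii = (np.arange(n_rings) + 0.5) / n_rings * kmax
    theta = 2 * np.pi * np.arange(pts_per_ring) / pts_per_ring
    return radii[:, None] * np.exp(1j * theta)[None, :]   # shape (rings, points)

k = concentric_rings(n_rings=32, pts_per_ring=256, kmax=0.5)
```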

  3. More IMPATIENT: A Gridding-Accelerated Toeplitz-based Strategy for Non-Cartesian High-Resolution 3D MRI on GPUs

    PubMed Central

    Gai, Jiading; Obeid, Nady; Holtrop, Joseph L.; Wu, Xiao-Long; Lam, Fan; Fu, Maojing; Haldar, Justin P.; Hwu, Wen-mei W.; Liang, Zhi-Pei; Sutton, Bradley P.

    2013-01-01

    Several recent methods have been proposed to obtain significant speed-ups in MR image reconstruction by leveraging the computational power of GPUs. Previously, we implemented a GPU-based image reconstruction technique called the Illinois Massively Parallel Acquisition Toolkit for Image reconstruction with ENhanced Throughput in MRI (IMPATIENT MRI) for reconstructing data collected along arbitrary 3D trajectories. In this paper, we improve IMPATIENT by removing computational bottlenecks, using a gridding approach to accelerate the computation of various data structures needed by the previous routine. Further, we enhance the routine with capabilities for off-resonance correction and multi-sensor parallel imaging reconstruction. Through the implementation of optimized gridding into our iterative reconstruction scheme, speed-ups of more than a factor of 200 are obtained in the improved GPU implementation compared to the previous accelerated GPU code. PMID:23682203

  4. The British Geological Survey and the petroleum industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chesher, J.A.

    1995-08-01

    The British Geological Survey is the UK's national centre for earth science information, with a parallel remit to operate internationally. The Survey's work covers the full geoscience spectrum in energy, mineral and groundwater resources and the associated implications for land use, geological hazards and environmental impact. Much of the work is conducted in collaboration with industry and academia, including joint funding opportunities. Activities relating directly to hydrocarbons include basin analysis, offshore geoscience mapping, hazard assessment, fracture characterization, biostratigraphy, sedimentology, seismology, geomagnetism and frontier data acquisition techniques offshore. The BGS poster presentation illustrates the value of the collaborative approach through consortia support for regional offshore surveys, geotechnical hazard assessments and state-of-the-art R & D into multicomponent seismic imaging techniques, among others.

  5. Hepatic lesions: improved image quality and detection with the periodically rotated overlapping parallel lines with enhanced reconstruction technique--evaluation of SPIO-enhanced T2-weighted MR images.

    PubMed

    Hirokawa, Yuusuke; Isoda, Hiroyoshi; Maetani, Yoji S; Arizono, Shigeki; Shimada, Kotaro; Okada, Tomohisa; Shibata, Toshiya; Togashi, Kaori

    2009-05-01

    To evaluate the effectiveness of the periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) technique for superparamagnetic iron oxide (SPIO)-enhanced T2-weighted magnetic resonance (MR) imaging, with respiratory compensation using the prospective acquisition correction (PACE) technique, in the detection of hepatic lesions. The institutional human research committee approved this prospective study, and all patients provided written informed consent. Eighty-one patients (mean age, 58 years) underwent hepatic 1.5-T MR imaging. Fat-saturated T2-weighted turbo spin-echo images were acquired with the PACE technique and with and without the PROPELLER method after administration of SPIO. Images were qualitatively evaluated for image artifacts, depiction of the liver edge and intrahepatic vessels, overall image quality, and presence of lesions. Three radiologists independently assessed these characteristics with a five-point confidence scale. Diagnostic performance was assessed with receiver operating characteristic (ROC) curve analysis. Quantitative analysis was conducted by measuring the liver signal-to-noise ratio (SNR) and the lesion-to-liver contrast-to-noise ratio (CNR). The Wilcoxon signed rank test and two-tailed Student t test were used, and P < .05 indicated a significant difference. MR imaging with the PROPELLER and PACE techniques resulted in significantly improved image quality, higher sensitivity, and a greater area under the ROC curve for hepatic lesion detection than did MR imaging with the PACE technique alone (P < .001). The mean liver SNR and the lesion-to-liver CNR were higher with the PROPELLER technique than without it (P < .001). T2-weighted MR imaging with the PROPELLER and PACE techniques and SPIO enhancement is a promising method with which to improve the detection of hepatic lesions. (c) RSNA, 2009.

  6. Note: Fully integrated time-to-amplitude converter in Si-Ge technology.

    PubMed

    Crotti, M; Rech, I; Ghioni, M

    2010-10-01

    Over the past years, interest has grown steadily in the measurement technique of time-correlated single-photon counting (TCSPC), since it allows the analysis of extremely fast and weak light waveforms with picosecond resolution. Consequently, many applications exploiting TCSPC have been developed in several fields such as medicine and chemistry. Moreover, the development of multianode PMTs and of single-photon avalanche diode arrays has led to the realization of acquisition systems with several parallel channels, extending the TCSPC technique to even more applications. Since TCSPC basically consists of measuring the arrival time of a photon, the most important part of an acquisition chain is the time measurement block, which must have high resolution and low differential nonlinearity; in order to realize multidimensional systems, it also has to be integrated to reduce both cost and area. In this paper we present a fully integrated time-to-amplitude converter, built in 0.35 μm Si-Ge technology, characterized by good time resolution (60 ps), low differential nonlinearity (better than 3% peak to peak), a high counting rate (16 MHz), low and constant power dissipation (40 mW), and low area occupation (1.38 × 1.28 mm2).

  7. 3D sensitivity encoded ellipsoidal MR spectroscopic imaging of gliomas at 3T☆

    PubMed Central

    Ozturk-Isik, Esin; Chen, Albert P.; Crane, Jason C.; Bian, Wei; Xu, Duan; Han, Eric T.; Chang, Susan M.; Vigneron, Daniel B.; Nelson, Sarah J.

    2010-01-01

    Purpose The goal of this study was to implement time-efficient data acquisition and reconstruction methods for 3D magnetic resonance spectroscopic imaging (MRSI) of gliomas at a field strength of 3T using parallel imaging techniques. Methods The point spread functions, signal-to-noise ratio (SNR), spatial resolution, metabolite intensity distributions and Cho:NAA ratio of 3D ellipsoidal, 3D sensitivity encoding (SENSE) and 3D combined ellipsoidal and SENSE (e-SENSE) k-space sampling schemes were compared with conventional k-space data acquisition methods. Results The 3D SENSE and e-SENSE methods resulted in similar spectral patterns as the conventional MRSI methods. The Cho:NAA ratios were highly correlated (P<.05 for SENSE and P<.001 for e-SENSE) with the ellipsoidal method and all methods exhibited significantly different spectral patterns in tumor regions compared to normal-appearing white matter. The geometry factors ranged between 1.2 and 1.3 for both the SENSE and e-SENSE spectra. When corrected for these factors and for differences in data acquisition times, the empirical SNRs were similar to values expected on theoretical grounds. The effective spatial resolution of the SENSE spectra was estimated to be the same as that of the corresponding fully sampled k-space data, while the spectra acquired with ellipsoidal and e-SENSE k-space samplings were estimated to have a 2.36–2.47-fold loss in spatial resolution due to the differences in their point spread functions. Conclusion The 3D SENSE method retained the same spatial resolution as full k-space sampling but with a 4-fold reduction in scan time and an acquisition time of 9.28 min. The 3D e-SENSE method had a similar spatial resolution as the corresponding ellipsoidal sampling with a scan time of 4:36 min. Both parallel imaging methods provided clinically interpretable spectra with volumetric coverage and adequate SNR for evaluating Cho, Cr and NAA. PMID:19766422

  8. Coil Compression for Accelerated Imaging with Cartesian Sampling

    PubMed Central

    Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael

    2012-01-01

    MRI using receiver arrays with many coil elements can provide a high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High-quality compression of in vivo 3D data from a 32-channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
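
    A minimal sketch of per-location SVD coil compression in the spirit of the described method is given below, assuming the readout axis is the fully sampled dimension. The alignment step that smooths the virtual-coil sensitivities across locations, which the abstract identifies as important for autocalibrating parallel imaging, is deliberately omitted; array shapes and names are assumptions.

```python
import numpy as np

def compress_coils(kspace, n_virtual):
    """Per-readout-location SVD coil compression for Cartesian data.

    kspace : complex array, shape (n_coils, n_readout, n_phase), fully sampled
             along the readout axis.
    Returns compressed k-space of shape (n_virtual, n_readout, n_phase).
    """
    hybrid = np.fft.ifft(kspace, axis=1)          # transform readout to image space
    n_coils, n_x, n_pe = hybrid.shape
    out = np.empty((n_virtual, n_x, n_pe), dtype=complex)
    for x in range(n_x):                          # separate compression per location
        u, _, _ = np.linalg.svd(hybrid[:, x, :], full_matrices=False)
        out[:, x, :] = u[:, :n_virtual].conj().T @ hybrid[:, x, :]
    return np.fft.fft(out, axis=1)                # back to k-space along readout
```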

  9. Radiation dose reduction using a neck detection algorithm for single spiral brain and cervical spine CT acquisition in the trauma setting.

    PubMed

    Ardley, Nicholas D; Lau, Ken K; Buchan, Kevin

    2013-12-01

    Cervical spine injuries occur in 4-8 % of adults with head trauma. Dual acquisition technique has been traditionally used for the CT scanning of brain and cervical spine. The purpose of this study was to determine the efficacy of radiation dose reduction by using a single acquisition technique that incorporated both anatomical regions with a dedicated neck detection algorithm. Thirty trauma patients for brain and cervical spine CT were included and were scanned with the single acquisition technique. The radiation doses from the single CT acquisition technique with the neck detection algorithm, which allowed appropriate independent dose administration relevant to brain and cervical spine regions, were recorded. Comparison was made both to the doses calculated from the simulation of the traditional dual acquisitions with matching parameters, and to the doses of retrospective dual acquisition legacy technique with the same sample size. The mean simulated dose for the traditional dual acquisition technique was 3.99 mSv, comparable to the average dose of 4.2 mSv from 30 previous patients who had CT of brain and cervical spine as dual acquisitions. The mean dose from the single acquisition technique was 3.35 mSv, resulting in a 16 % overall dose reduction. The images from the single acquisition technique were of excellent diagnostic quality. The new single acquisition CT technique incorporating the neck detection algorithm for brain and cervical spine significantly reduces the overall radiation dose by eliminating the unavoidable overlapping range between 2 anatomical regions which occurs with the traditional dual acquisition technique.

  10. Compressed sensing for rapid late gadolinium enhanced imaging of the left atrium: A preliminary study.

    PubMed

    Kamesh Iyer, Srikant; Tasdizen, Tolga; Burgon, Nathan; Kholmovski, Eugene; Marrouche, Nassir; Adluru, Ganesh; DiBella, Edward

    2016-09-01

    Current late gadolinium enhancement (LGE) imaging of left atrial (LA) scar or fibrosis is relatively slow and requires 5-15 min to acquire an undersampled (R=1.7) 3D navigated dataset. The GeneRalized Autocalibrating Partially Parallel Acquisitions (GRAPPA)-based parallel imaging method is the current clinical standard for accelerating 3D LGE imaging of the LA and permits an acceleration factor of approximately R=1.7. Two compressed sensing (CS) methods have been developed to achieve higher acceleration factors: a patch-based collaborative filtering technique tested with acceleration factor R~3, and a technique that uses a 3D radial stack-of-stars acquisition pattern (R~1.8) with a 3D total variation constraint. The long reconstruction time of these CS methods makes them unwieldy to use, especially the patch-based collaborative filtering technique. In addition, the effect of CS techniques on the quantification of the percentage of scar/fibrosis is not known. We sought to develop a practical compressed sensing method for imaging the LA at high acceleration factors. In order to develop a clinically viable method with a short reconstruction time, a Split Bregman (SB) reconstruction method with 3D total variation (TV) constraints was developed and implemented. The method was tested on 8 atrial fibrillation patients (4 pre-ablation and 4 post-ablation datasets). Blur metric, normalized mean squared error and peak signal-to-noise ratio were used as metrics to analyze the quality of the reconstructed images. Quantification of the extent of LGE was performed on the undersampled images and compared with the fully sampled images. Quantification of scar from post-ablation datasets and quantification of fibrosis from pre-ablation datasets showed that acceleration factors up to R~3.5 gave good 3D LGE images of the LA wall using the constrained SB method with a 3D TV constraint. This corresponds to reducing the scan time by half compared with currently used GRAPPA methods. Reconstruction of 3D LGE images using the SB method was over 20 times faster than with standard gradient descent methods. Copyright © 2016 Elsevier Inc. All rights reserved.
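
    The sketch below is a minimal 2D Split Bregman solver for anisotropic total-variation denoising, included only to illustrate the class of algorithm named in the abstract. It assumes periodic boundaries and a simple ||u - f||^2 fidelity term; the authors' method instead enforces k-space data consistency on undersampled 3D LGE data, so this is not their reconstruction.

```python
import numpy as np

def shrink(x, gamma):
    """Soft-thresholding, the closed-form proximal step for the l1 term."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def sb_tv_denoise(f, mu=10.0, lam=1.0, n_iter=30):
    """Split Bregman iterations for  min_u |grad u|_1 + (mu/2)||u - f||^2."""
    ny, nx = f.shape
    dx, dy = np.zeros_like(f), np.zeros_like(f)   # split variables d = grad u
    bx, by = np.zeros_like(f), np.zeros_like(f)   # Bregman variables
    # Fourier symbol of the discrete Laplacian (periodic boundaries)
    wy = 2.0 * np.cos(2 * np.pi * np.arange(ny) / ny) - 2.0
    wx = 2.0 * np.cos(2 * np.pi * np.arange(nx) / nx) - 2.0
    lap = wy[:, None] + wx[None, :]
    u = f.copy()
    for _ in range(n_iter):
        # u-subproblem: (mu - lam*Laplacian) u = mu*f + lam*D^T(d - b),
        # where D is the forward-difference gradient operator
        rhs = mu * f + lam * (
            np.roll(dx - bx, 1, axis=1) - (dx - bx)
            + np.roll(dy - by, 1, axis=0) - (dy - by))
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (mu - lam * lap)))
        gx = np.roll(u, -1, axis=1) - u           # forward differences
        gy = np.roll(u, -1, axis=0) - u
        dx, dy = shrink(gx + bx, 1.0 / lam), shrink(gy + by, 1.0 / lam)
        bx, by = bx + gx - dx, by + gy - dy       # Bregman updates
    return u
```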

  11. Three-dimensional through-time radial GRAPPA for renal MR angiography.

    PubMed

    Wright, Katherine L; Lee, Gregory R; Ehses, Philipp; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-10-01

    To achieve high temporal and spatial resolution for contrast-enhanced time-resolved MR angiography exams (trMRAs), fast imaging techniques such as non-Cartesian parallel imaging must be used. In this study, the three-dimensional (3D) through-time radial generalized autocalibrating partially parallel acquisition (GRAPPA) method is used to reconstruct highly accelerated stack-of-stars data for time-resolved renal MRAs. Through-time radial GRAPPA has been recently introduced as a method for non-Cartesian GRAPPA weight calibration, and a similar concept can also be used in 3D acquisitions. By combining different sources of calibration information, acquisition time can be reduced. Here, different GRAPPA weight calibration schemes are explored in simulation, and the results are applied to reconstruct undersampled stack-of-stars data. Simulations demonstrate that an accurate and efficient approach to 3D calibration is to combine a small number of central partitions with as many temporal repetitions as exam time permits. These findings were used to reconstruct renal trMRA data with an in-plane acceleration factor as high as 12.6 with respect to the Nyquist sampling criterion, where the lowest root mean squared error value of 16.4% was achieved when using a calibration scheme with 8 partitions, 16 repetitions, and a 4 projection × 8 read point segment size. 3D through-time radial GRAPPA can be used to successfully reconstruct highly accelerated non-Cartesian data. By using in-plane radial undersampling, a trMRA can be acquired with a temporal footprint less than 4s/frame with a spatial resolution of approximately 1.5 mm × 1.5 mm × 3 mm. © 2014 Wiley Periodicals, Inc.
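
    The core of any GRAPPA-type calibration is a least-squares fit of interpolation weights from calibration data; the sketch below shows that fit in its most generic form, with the calibration instances assumed to be gathered from the central partitions and temporal repetitions discussed in the abstract. It illustrates the principle only, not the segmented radial implementation used in the paper; all names and shapes are assumptions.

```python
import numpy as np

def calibrate_weights(src, tgt):
    """Least-squares GRAPPA weight estimation.

    src : (n_instances, n_sources) acquired neighbour samples across coils,
          one row per calibration instance (partition/repetition/segment).
    tgt : (n_instances, n_targets) the corresponding samples to be synthesized.
    Returns weights w such that src @ w approximates tgt in the least-squares sense.
    """
    w, *_ = np.linalg.lstsq(src, tgt, rcond=None)
    return w

def synthesize(src_undersampled, w):
    """Apply the calibrated weights to fill in missing samples."""
    return src_undersampled @ w
```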

  12. Electronic hardware design of electrical capacitance tomography systems.

    PubMed

    Saied, I; Meribout, M

    2016-06-28

    Electrical tomography techniques for process imaging are very prominent in industrial applications, such as the oil and gas industry and chemical refineries, owing to their ability to provide the flow regime of a flowing fluid at relatively high throughput. Among the various techniques, electrical capacitance tomography (ECT) is gaining popularity due to its non-invasive nature and its capability to differentiate between phases based on their permittivity distribution. In recent years, several hardware designs for ECT systems have improved the measurement resolution to around an attofarad (aF, 10^-18 F) or increased the number of channels, which must be large for applications that require a significant amount of data. In terms of image acquisition time, some recent systems can achieve a throughput of a few hundred frames per second, while data processing takes only a few milliseconds per frame. This paper outlines the concept and main features of the most recent front-end and back-end electronic circuits dedicated to ECT systems. Multiple-excitation capacitance polling, a front-end electronic technique, shows promising results for achieving fast data acquisition in ECT systems. A highly parallel field-programmable gate array (FPGA)-based architecture for a fast reconstruction algorithm is also described. This article is part of the themed issue 'Supersensing through industrial process tomography'. © 2016 The Author(s).

  13. Advanced imaging techniques for the study of plant growth and development.

    PubMed

    Sozzani, Rosangela; Busch, Wolfgang; Spalding, Edgar P; Benfey, Philip N

    2014-05-01

    A variety of imaging methodologies are being used to collect data for quantitative studies of plant growth and development from living plants. Multi-level data, from macroscopic to molecular, and from weeks to seconds, can be acquired. Furthermore, advances in parallelized and automated image acquisition enable the throughput to capture images from large populations of plants under specific growth conditions. Image-processing capabilities allow for 3D or 4D reconstruction of image data and automated quantification of biological features. These advances facilitate the integration of imaging data with genome-wide molecular data to enable systems-level modeling. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Genetic algorithm based task reordering to improve the performance of batch scheduled massively parallel scientific applications

    DOE PAGES

    Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael

    2015-04-08

    The growth in size of networked high performance computers along with novel accelerator-based node architectures has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer, a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through the genetic algorithm show a significant improvement in performance and reduction in variability, therefore enabling the applications to achieve better time to solution and scalability on Titan during production.
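
    As a generic illustration of the optimization described above, the sketch below runs a small permutation genetic algorithm that maps tasks to allocated nodes so as to minimize the communication-volume-weighted hop count. The operators (order crossover, swap mutation, elitism) and all parameters are illustrative choices, not the configuration used on Titan.

```python
import random

def ga_task_placement(comm, hops, pop=60, gens=200, pmut=0.2, seed=0):
    """Toy genetic algorithm for task-to-node reordering.

    comm[i][j] : communication volume between tasks i and j (application topology)
    hops[a][b] : network distance between allocated nodes a and b (system topology)
    A permutation p maps task i to node p[i]; fitness is the total
    volume-weighted hop count, which the GA tries to minimize.
    """
    rng = random.Random(seed)
    n = len(comm)

    def cost(p):
        return sum(comm[i][j] * hops[p[i]][p[j]]
                   for i in range(n) for j in range(i + 1, n) if comm[i][j])

    def crossover(a, b):                      # order crossover (OX)
        i, j = sorted(rng.sample(range(n), 2))
        child = [None] * n
        child[i:j] = a[i:j]
        fill = [g for g in b if g not in child[i:j]]
        k = 0
        for idx in range(n):
            if child[idx] is None:
                child[idx] = fill[k]
                k += 1
        return child

    population = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost)
        nxt = population[:pop // 5]           # elitism: keep the best 20%
        while len(nxt) < pop:
            a, b = rng.sample(population[:pop // 2], 2)
            c = crossover(a, b)
            if rng.random() < pmut:           # swap mutation
                i, j = rng.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            nxt.append(c)
        population = nxt
    return min(population, key=cost)
```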

  15. A review of snapshot multidimensional optical imaging: measuring photon tags in parallel

    PubMed Central

    Gao, Liang; Wang, Lihong V.

    2015-01-01

    Multidimensional optical imaging has seen remarkable growth in the past decade. Rather than measuring only the two-dimensional spatial distribution of light, as in conventional photography, multidimensional optical imaging captures light in up to nine dimensions, providing unprecedented information about incident photons’ spatial coordinates, emittance angles, wavelength, time, and polarization. Multidimensional optical imaging can be accomplished either by scanning or parallel acquisition. Compared with scanning-based imagers, parallel acquisition—also dubbed snapshot imaging—has a prominent advantage in maximizing optical throughput, particularly when measuring a datacube of high dimensions. Here, we first categorize snapshot multidimensional imagers based on their acquisition and image reconstruction strategies, then highlight the snapshot advantage in the context of optical throughput, and finally we discuss their state-of-the-art implementations and applications. PMID:27134340

  16. Super-resolved Parallel MRI by Spatiotemporal Encoding

    PubMed Central

    Schmidt, Rita; Baishya, Bikash; Ben-Eliezer, Noam; Seginer, Amir; Frydman, Lucio

    2016-01-01

    Recent studies described an alternative "ultrafast" scanning method based on spatiotemporal encoding (SPEN) principles. SPEN demonstrates numerous potential advantages over EPI-based alternatives, at no additional expense in experimental complexity. To provide a competitive acquisition alternative, however, SPEN still needs to exploit parallel imaging algorithms without compromising its proven capabilities. The present work introduces a combination of multi-band frequency-swept pulses simultaneously encoding multiple partial fields-of-view, together with a new algorithm merging a super-resolved SPEN image reconstruction with SENSE multiple-receiving methods. The ensuing approach enables one to reduce both the excitation and acquisition times of ultrafast SPEN acquisitions by the customary acceleration factor R, without compromising the ensuing spatial resolution, SAR deposition, or the capability to operate in multi-slice mode. The performance of these new single-shot imaging sequences and their ancillary algorithms was explored on phantoms and human volunteers at 3T. The gains of the parallelized approach were particularly evident when dealing with heterogeneous systems subject to major T2/T2* effects, as is the case in single-scan imaging near tissue/air interfaces. PMID:24120293

  17. Optimised Combined Angular and Energy Dispersive Diffraction at the PSICHE Beam Line of the SOLEIL Synchrotron for Fast, High Q-range Structure Determination at High Pressure and Temperature.

    NASA Astrophysics Data System (ADS)

    King, A.; Guignot, N.; Boulard, E.; Deslandes, J. P.; Clark, A. N.; Morard, G.; Itié, J. P.

    2017-12-01

    Synchrotron diffraction is an ideal technique for investigating materials at high pressure and temperature, because the penetrating nature of high-energy X-rays allows measurements to be made inside pressure cells or sample environments. Wang et al. described the CAESAR acquisition strategy, in which energy- and angular-dispersive techniques are combined to produce an instrument particularly suitable for quantitative measurements from samples inside high-pressure apparatuses [1]. The PSICHE beam line of the SOLEIL Synchrotron is equipped with such a CAESAR system. Uniquely, this system allows energy dispersive diffraction spectra to be acquired at scattering angles between -5 and +30 degrees two theta, while maintaining a sphere of confusion at the measurement position of the order of 10 microns. The slits used to define the scattering angle act as Soller slits and select the diffracted volume, separating the sample from its environment. By developing an optimised acquisition strategy we are able to obtain data covering a very wide Q range (to 160 nm^-1 or more), while minimising the total acquisition time (one hour per complete acquisition). In addition, the 2D nature (angle and energy) of the acquired dataset enables the effective incident spectrum to be determined efficiently with no additional measurements, in order to normalise the acquired data. The resulting profile of scattered intensity as a function of Q is suitable for Fourier transform analysis of liquid or amorphous structures. PSICHE is a multi-technique beam line, with a part of the beam time dedicated to parallel-beam absorption and phase contrast radiography and tomography [2]. Examples will be given to show how these techniques can be combined with diffraction techniques to greatly enrich studies of materials at extreme conditions. [1] Wang, Y., Uchida, T., Von Dreele, R., Rivers, M. L., Nishiyama, N., Funakoshi, K., Nozawa, A., and Keneko, H., J. Appl. Crystallogr. 37, 947 (2004). [2] King, A., Guignot, N., Zerbino, P., Boulard, E., Desjardins, K., Bourdessoule, M., Leclerq, N., Le, S., Renaud, G., Cerato, M., Bornert, M., Lenoir, N., Delzon, S., Perrillat, J.-P., Legodec, Y., Itié, J.-P. Rev. Sci. Instrum. 87, 093704 (2016).

  18. Massively parallel whole genome amplification for single-cell sequencing using droplet microfluidics.

    PubMed

    Hosokawa, Masahito; Nishikawa, Yohei; Kogawa, Masato; Takeyama, Haruko

    2017-07-12

    Massively parallel single-cell genome sequencing is required to further understand genetic diversities in complex biological systems. Whole genome amplification (WGA) is the first step for single-cell sequencing, but its throughput and accuracy are insufficient in conventional reaction platforms. Here, we introduce single droplet multiple displacement amplification (sd-MDA), a method that enables massively parallel amplification of single cell genomes while maintaining sequence accuracy and specificity. Tens of thousands of single cells are compartmentalized in millions of picoliter droplets and then subjected to lysis and WGA by passive droplet fusion in microfluidic channels. Because single cells are isolated in compartments, their genomes are amplified to saturation without contamination. This enables the high-throughput acquisition of contamination-free and cell specific sequence reads from single cells (21,000 single-cells/h), resulting in enhancement of the sequence data quality compared to conventional methods. This method allowed WGA of both single bacterial cells and human cancer cells. The obtained sequencing coverage rivals those of conventional techniques with superior sequence quality. In addition, we also demonstrate de novo assembly of uncultured soil bacteria and obtain draft genomes from single cell sequencing. This sd-MDA is promising for flexible and scalable use in single-cell sequencing.

  19. Self-calibrated correlation imaging with k-space variant correlation functions.

    PubMed

    Li, Yu; Edalati, Masoud; Du, Xingfu; Wang, Hui; Cao, Jie J

    2018-03-01

    Correlation imaging is a previously developed high-speed MRI framework that converts parallel imaging reconstruction into the estimation of correlation functions. The present work aims to demonstrate that this framework can provide a speed gain over parallel imaging by estimating k-space-variant correlation functions. Because of Fourier encoding with gradients, outer k-space data contain higher spatial-frequency image components arising primarily from tissue boundaries. As a result of tissue-boundary sparsity in the human anatomy, neighboring k-space data correlation varies from the central to the outer k-space. By estimating k-space-variant correlation functions with an iterative self-calibration method, correlation imaging can benefit from neighboring k-space data correlation associated with both coil sensitivity encoding and tissue-boundary sparsity, thereby providing a speed gain over parallel imaging, which relies only on coil sensitivity encoding. This new approach is investigated in brain imaging and free-breathing neonatal cardiac imaging. Correlation imaging performs better than existing parallel imaging techniques in simulated brain imaging acceleration experiments. The higher speed enables real-time data acquisition for neonatal cardiac imaging, in which physiological motion is fast and non-periodic. With k-space-variant correlation functions, correlation imaging gives a higher speed than parallel imaging and offers the potential to image physiological motion in real time. Magn Reson Med 79:1483-1494, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  20. A discrete component low-noise preamplifier readout for a linear (1×16) SiC photodiode array

    NASA Astrophysics Data System (ADS)

    Kahle, Duncan; Aslam, Shahid; Herrero, Federico A.; Waczynski, Augustyn

    2016-09-01

    A compact, low-noise and inexpensive preamplifier circuit has been designed and fabricated to optimally read out a common-cathode (1×16) channel 4H-SiC Schottky photodiode array for use in ultraviolet experiments. The readout uses an operational amplifier with a 10 pF capacitor in the feedback loop in parallel with a low-leakage switch for each of the channels. This circuit configuration allows for reiterative sample, integrate and reset. A sampling technique is given to remove Johnson noise, enabling femtoampere-level readout noise performance. Commercial-off-the-shelf acquisition electronics are used to digitize the preamplifier analog signals. The data logging acquisition electronics has a different integration circuit, which allows the bandwidth and gain to be independently adjusted. Using this readout, photoresponse measurements across the array between spectral wavelengths of 200 nm and 370 nm are made to establish the array pixels' external quantum efficiency, current responsivity and noise equivalent power.

  1. A Discrete Component Low-Noise Preamplifier Readout for a Linear (1x16) SiC Photodiode Array

    NASA Technical Reports Server (NTRS)

    Kahle, Duncan; Aslam, Shahid; Herrero, Frederico A.; Waczynski, Augustyn

    2016-01-01

    A compact, low-noise and inexpensive preamplifier circuit has been designed and fabricated to optimally read out a common-cathode (1x16) channel 4H-SiC Schottky photodiode array for use in ultraviolet experiments. The readout uses an operational amplifier with a 10 pF capacitor in the feedback loop in parallel with a low-leakage switch for each of the channels. This circuit configuration allows for reiterative sample, integrate and reset. A sampling technique is given to remove Johnson noise, enabling femtoampere-level readout noise performance. Commercial-off-the-shelf acquisition electronics are used to digitize the preamplifier analogue signals. The data logging acquisition electronics has a different integration circuit, which allows the bandwidth and gain to be independently adjusted. Using this readout, photoresponse measurements across the array between spectral wavelengths of 200 nm and 370 nm are made to establish the array pixels' external quantum efficiency, current responsivity and noise equivalent power.

  2. A 32-channel photon counting module with embedded auto/cross-correlators for real-time parallel fluorescence correlation spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, S.; Labanca, I.; Rech, I.

    2014-10-15

    Fluorescence correlation spectroscopy (FCS) is a well-established technique to study binding interactions or the diffusion of fluorescently labeled biomolecules in vitro and in vivo. Fast FCS experiments require parallel data acquisition and analysis, which can be achieved by exploiting a multi-channel Single Photon Avalanche Diode (SPAD) array and a corresponding multi-input correlator. This paper reports a 32-channel FPGA-based correlator able to perform 32 auto/cross-correlations simultaneously over a lag-time ranging from 10 ns up to 150 ms. The correlator is included in a 32 × 1 SPAD array module, providing a compact and flexible instrument for high-throughput FCS experiments. However, some inherent features of SPAD arrays, namely afterpulsing and optical crosstalk effects, may introduce distortions in the measurement of auto- and cross-correlation functions. We investigated these limitations to assess their impact on the module and evaluate possible workarounds.

  3. New Imaging Strategies Using a Motion-Resistant Liver Sequence in Uncooperative Patients

    PubMed Central

    Kim, Bong Soo; Lee, Kyung Ryeol; Goh, Myeng Ju

    2014-01-01

    MR imaging has unique benefits for evaluating the liver because of its high-resolution capability and ability to permit detailed assessment of anatomic lesions. In uncooperative patients, motion artifacts can impair the image quality and lead to the loss of diagnostic information. In this setting, the recent advances in motion-resistant liver MR techniques, including faster imaging protocols (e.g., dual-echo magnetization-prepared rapid-acquisition gradient echo (MP-RAGE), view-sharing technique), the data under-sampling (e.g., gradient recalled echo (GRE) with controlled aliasing in parallel imaging results in higher acceleration (CAIPIRINHA), single-shot echo-train spin-echo (SS-ETSE)), and motion-artifact minimization method (e.g., radial GRE with/without k-space-weighted image contrast (KWIC)), can provide consistent, artifact-free images with adequate image quality and can lead to promising diagnostic performance. Understanding of the different motion-resistant options allows radiologists to adopt the most appropriate technique for their clinical practice and thereby significantly improve patient care. PMID:25243115

  4. One, two, three, four, nothing more: an investigation of the conceptual sources of the verbal counting principles.

    PubMed

    Le Corre, Mathieu; Carey, Susan

    2007-11-01

    Since the publication of [Gelman, R., & Gallistel, C. R. (1978). The child's understanding of number. Cambridge, MA: Harvard University Press.] seminal work on the development of verbal counting as a representation of number, the nature of the ontogenetic sources of the verbal counting principles has been intensely debated. The present experiments explore proposals according to which the verbal counting principles are acquired by mapping numerals in the count list onto systems of numerical representation for which there is evidence in infancy, namely, analog magnitudes, parallel individuation, and set-based quantification. By asking 3- and 4-year-olds to estimate the number of elements in sets without counting, we investigate whether the numerals that are assigned cardinal meaning as part of the acquisition process display the signatures of what we call "enriched parallel individuation" (which combines properties of parallel individuation and of set-based quantification) or analog magnitudes. Two experiments demonstrate that while "one" to "four" are mapped onto core representations of small sets prior to the acquisition of the counting principles, numerals beyond "four" are only mapped onto analog magnitudes about six months after the acquisition of the counting principles. Moreover, we show that children's numerical estimates of sets from 1 to 4 elements fail to show the signature of numeral use based on analog magnitudes - namely, scalar variability. We conclude that, while representations of small sets provided by parallel individuation, enriched by the resources of set-based quantification are recruited in the acquisition process to provide the first numerical meanings for "one" to "four", analog magnitudes play no role in this process.

  5. Capillary array scanner for time-resolved detection and identification of fluorescently labelled DNA fragments.

    PubMed

    Neumann, M; Herten, D P; Dietrich, A; Wolfrum, J; Sauer, M

    2000-02-25

    The first capillary array scanner for time-resolved fluorescence detection in parallel capillary electrophoresis based on semiconductor technology is described. The system consists essentially of a confocal fluorescence microscope and an x,y-microscope scanning stage. Fluorescence of the labelled probe molecules was excited using a short-pulse diode laser emitting at 640 nm with a repetition rate of 50 MHz. Using a single filter system, the fluorescence decays of different labels were detected by an avalanche photodiode in combination with a PC plug-in card for time-correlated single-photon counting (TCSPC). The time-resolved fluorescence signals were analyzed and identified by a maximum likelihood estimator (MLE). The x,y-microscope scanning stage allows for discontinuous, bidirectional scanning of up to 16 capillaries in an array, resulting in longer fluorescence collection times per capillary compared to scanners working in a continuous mode. Synchronization of the alignment and measurement processes was developed to allow data acquisition without overhead. Detection limits in the subzeptomol range for different dye molecules separated in parallel capillaries have been achieved. In addition, we report on parallel time-resolved detection and separation of more than 400 bases of single-base-extension DNA fragments in capillary array electrophoresis. Using only semiconductor technology, the presented technique represents a low-cost alternative for high-throughput DNA sequencing in parallel capillaries.

  6. Abdominal MR imaging in children: motion compensation, sequence optimization, and protocol organization.

    PubMed

    Chavhan, Govind B; Babyn, Paul S; Vasanawala, Shreyas S

    2013-05-01

    Familiarity with basic sequence properties and their trade-offs is necessary for radiologists performing abdominal magnetic resonance (MR) imaging. Acquiring diagnostic-quality MR images in the pediatric abdomen is challenging due to motion, inability to breath hold, varying patient size, and artifacts. Motion-compensation techniques (eg, respiratory gating, signal averaging, suppression of signal from moving tissue, swapping phase- and frequency-encoding directions, use of faster sequences with breath holding, parallel imaging, and radial k-space filling) can improve image quality. Each of these techniques is more suitable for use with certain sequences and acquisition planes and in specific situations and age groups. Different T1- and T2-weighted sequences work better in different age groups and with differing acquisition planes and have specific advantages and disadvantages. Dynamic imaging should be performed differently in younger children than in older children. In younger children, the sequence and the timing of dynamic phases need to be adjusted. Different sequences work better in smaller children and in older children because of differing breath-holding ability, breathing patterns, field of view, and use of sedation. Hence, specific protocols should be maintained for younger children and older children. Combining longer-higher-resolution sequences and faster-lower-resolution sequences helps acquire diagnostic-quality images in a reasonable time. © RSNA, 2013.

  7. Development and experimental testing of an optical micro-spectroscopic technique incorporating true line-scan excitation.

    PubMed

    Biener, Gabriel; Stoneman, Michael R; Acbas, Gheorghe; Holz, Jessica D; Orlova, Marianna; Komarova, Liudmila; Kuchin, Sergei; Raicu, Valerică

    2013-12-27

    Multiphoton micro-spectroscopy, employing diffraction optics and electron-multiplying CCD (EMCCD) cameras, is a suitable method for determining protein complex stoichiometry, quaternary structure, and spatial distribution in living cells using Förster resonance energy transfer (FRET) imaging. The method provides highly resolved spectra of molecules or molecular complexes at each image pixel, and it does so on a timescale shorter than that of molecular diffusion, which scrambles the spectral information. Acquisition of an entire spectrally resolved image, however, is slower than that of broad-bandwidth microscopes because it takes longer times to collect the same number of photons at each emission wavelength as in a broad bandwidth. Here, we demonstrate an optical micro-spectroscopic scheme that employs a laser beam shaped into a line to excite in parallel multiple sample voxels. The method presents dramatically increased sensitivity and/or acquisition speed and, at the same time, has excellent spatial and spectral resolution, similar to point-scan configurations. When applied to FRET imaging using an oligomeric FRET construct expressed in living cells and consisting of a FRET acceptor linked to three donors, the technique based on line-shaped excitation provides higher accuracy compared to the point-scan approach, and it reduces artifacts caused by photobleaching and other undesired photophysical effects.

  8. 75 FR 26255 - Change in Bank Control Notices; Acquisition of Shares of Bank or Bank Holding Companies

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-11

    ... Street, San Francisco, California 94105-1579: 1. Thomas H. Lee Equity Fund VI, L.P.; Thomas H. Lee Parallel Fund VI, L.P.; Thomas H. Lee Parallel (DT) Fund VI, L.P.; and THL Sterling Equity Investors, L.P...

  9. Improved parallel image reconstruction using feature refinement.

    PubMed

    Cheng, Jing; Jia, Sen; Ying, Leslie; Liu, Yuanyuan; Wang, Shanshan; Zhu, Yanjie; Li, Ye; Zou, Chao; Liu, Xin; Liang, Dong

    2018-07-01

    The aim of this study was to develop a novel feature-refinement MR reconstruction method for highly undersampled multichannel acquisitions that improves image quality and preserves more detailed information. The feature refinement technique, which uses a feature descriptor to pick up useful features from the residual image discarded by sparsity constraints, is applied to preserve image detail in compressed sensing and parallel imaging in MRI (CS-pMRI). A texture descriptor and a structure descriptor, which recognize different types of features, are combined to form the feature descriptor. Feasibility of the feature refinement was validated using three different multicoil reconstruction methods on in vivo data. Experimental results show that reconstruction methods with feature refinement improve the quality of the reconstructed image and restore image details more accurately than the original methods, which is also verified by the lower values of the root mean square error and high-frequency error norm. A simple and effective way to preserve more useful detailed information in CS-pMRI is proposed. This technique can effectively improve the reconstruction quality and has superior performance in terms of detail preservation compared with the original version without feature refinement. Magn Reson Med 80:211-223, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  10. Parallel pivoting combined with parallel reduction

    NASA Technical Reports Server (NTRS)

    Alaghband, Gita

    1987-01-01

    Parallel algorithms for triangularization of large, sparse, and unsymmetric matrices are presented. The method combines the parallel reduction with a new parallel pivoting technique, control over generations of fill-ins and a check for numerical stability, all done in parallel with the work being distributed over the active processes. The parallel technique uses the compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz number of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds.
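
    To make the two notions in the abstract concrete, the sketch below computes Markowitz counts for candidate pivots and tests the compatibility relation that lets two pivots be eliminated in parallel. It operates on a dense 0/1 pattern for clarity and is a simplified illustration, not the paper's dynamic, distributed algorithm; all names are hypothetical.

```python
import numpy as np

def markowitz_counts(A):
    """Markowitz count (r_i - 1) * (c_j - 1) for each structural nonzero of A."""
    nz = A != 0
    r = nz.sum(axis=1)                    # nonzeros in each row
    c = nz.sum(axis=0)                    # nonzeros in each column
    return {(i, j): (r[i] - 1) * (c[j] - 1) for i, j in zip(*np.nonzero(nz))}

def compatible(A, p, q):
    """Pivots p=(i1, j1) and q=(i2, j2) can be eliminated in parallel when
    neither lies in the other's row/column, i.e. A[i1, j2] == 0 and A[i2, j1] == 0."""
    (i1, j1), (i2, j2) = p, q
    return A[i1, j2] == 0 and A[i2, j1] == 0

A = np.array([[4., 0., 1.],
              [0., 3., 0.],
              [2., 0., 5.]])
print(markowitz_counts(A))
print(compatible(A, (0, 0), (1, 1)))      # True: disjoint row/column patterns
```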

  11. The performance of the ZEUS central tracking detector z-by-timing electronics in a transputer based data acquisition system

    NASA Astrophysics Data System (ADS)

    Foster, B.; Heath, G. P.; Llewellyn, T. J.; Gingrich, D. M.; Harnew, N.; Hallam-Baker, P. M.; Khatri, T.; McArthur, I. C.; Morawitz, P.; Nash, J.; Shield, P. D.; Topp-Jorgensen, S.; Wilson, F. F.; Allen, D. B.; Carter, R. C.; Jeffs, M. D.; Morrissey, M. C.; Quinton, S. P. H.; Lane, J. B.; Postranecky, M.

    1993-05-01

    The Central Tracking Detector of the ZEUS experiment employs a time difference technique to measure the z coordinate of each hit. The method provides fast, three-dimensional space point measurements which are used as input to all levels of the ZEUS trigger. Such a tracking trigger is essential in order to discriminate against events with vertices lying outside the nominal electron-proton interaction region. Since the beam crossing interval of the HERA collider is 96 ns, all data must be pipelined through the front-end readout electronics. Subsequent data acquisition employs a novel technique which utilizes a network of approximately 120 INMOS transputers to process the data in parallel. The z-by-timing method and its data acquisition have been employed successfully in recording and reconstructing tracks from electron-proton interactions in ZEUS.

  12. Data Acquisition System for Multi-Frequency Radar Flight Operations Preparation

    NASA Technical Reports Server (NTRS)

    Leachman, Jonathan

    2010-01-01

    A three-channel data acquisition system was developed for the NASA Multi-Frequency Radar (MFR) system. The system is based on a commercial-off-the-shelf (COTS) industrial PC (personal computer) and two dual-channel 14-bit digital receiver cards. The decimated complex envelope representations of the three radar signals are passed to the host PC via the PCI bus, and then processed in parallel by multiple cores of the PC CPU (central processing unit). The innovation is this parallelization of the radar data processing using multiple cores of a standard COTS multi-core CPU. The data processing portion of the data acquisition software was built using autonomous program modules or threads, which can run simultaneously on different cores. A master program module calculates the optimal number of processing threads, launches them, and continually supplies each with data. The benefit of this new parallel software architecture is that COTS PCs can be used to implement increasingly complex processing algorithms on an increasing number of radar range gates and at higher data rates. As new PCs become available with higher numbers of CPU cores, the software will automatically utilize the additional computational capacity.
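
    A minimal sketch of the master/worker pattern described above, assuming Python's standard concurrent.futures pool as a stand-in for the original multi-threaded implementation; process_block and the synthetic pulse blocks are placeholders, not the MFR processing chain.

    ```python
    import os
    from concurrent.futures import ProcessPoolExecutor

    import numpy as np

    def process_block(block):
        """Placeholder per-block processing (e.g., a Doppler FFT per range gate)."""
        return np.abs(np.fft.fft(block, axis=0))   # pulses x range gates -> spectra

    def run_acquisition(block_stream, reserve_cores=1):
        """Master module: size the pool from the CPU, then keep the workers fed."""
        n_workers = max(1, (os.cpu_count() or 1) - reserve_cores)
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            for result in pool.map(process_block, block_stream):
                yield result                        # hand processed data onward

    if __name__ == "__main__":
        # Synthetic stand-in for data blocks from the digital receiver channels:
        # each block is 64 pulses x 256 range gates of complex samples.
        blocks = (np.random.randn(64, 256) + 1j * np.random.randn(64, 256)
                  for _ in range(12))
        for spectra in run_acquisition(blocks):
            pass                                    # consume results (store/display)
    ```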

  13. Integration of Spectral Reflectance across the Plumage: Implications for Mating Patterns

    PubMed Central

    Laczi, Miklós; Török, János; Rosivall, Balázs; Hegyi, Gergely

    2011-01-01

    Background In complex sexual signaling systems such as plumage color, developmental or genetic links may occur among seemingly distinct traits. However, the interrelations of such traits and the functional significance of their integration rarely have been examined. Methodology/Principal Findings We investigated the parallel variation of two reflectance descriptors (brightness and UV chroma) across depigmented and melanized plumage areas of collared flycatchers (Ficedula albicollis), and the possible role of integrated color signals in mate acquisition. We found moderate integration in brightness and UV chroma across the plumage, with similar correlation structures in the two sexes despite the strong sexual dichromatism. Patterns of parallel color change across the plumage were largely unrelated to ornamental white patch sizes, but they all showed strong assortative mating between the sexes. Comparing different types of assortative mating patterns for individual spectral variables suggested a distinct role for plumage-level color axes in mate acquisition. Conclusions/Significance Our results indicate that the plumage-level, parallel variation of coloration might play a role in mate acquisition. This study underlines the importance of considering potential developmental and functional integration among apparently different ornaments in studies of sexual selection. PMID:21853088

  14. Recycling isoelectric focusing with computer controlled data acquisition system. [for high resolution electrophoretic separation and purification of biomolecules

    NASA Technical Reports Server (NTRS)

    Egen, N. B.; Twitty, G. E.; Bier, M.

    1979-01-01

    Isoelectric focusing is a high-resolution technique for separating and purifying large peptides, proteins, and other biomolecules. The apparatus described in the present paper constitutes a new approach to fluid stabilization and increased throughput. Stabilization is achieved by flowing the process fluid uniformly through an array of closely spaced filter elements oriented parallel both to the electrodes and the direction of the flow. This seems to overcome the major difficulties of parabolic flow and electroosmosis at the walls, while limiting the convection to chamber compartments defined by adjacent spacers. Increased throughput is achieved by recirculating the process fluid through external heat exchange reservoirs, where the Joule heat is dissipated.

  15. Parallel transmit beamforming using orthogonal frequency division multiplexing applied to harmonic imaging--a feasibility study.

    PubMed

    Demi, Libertario; Verweij, Martin D; Van Dongen, Koen W A

    2012-11-01

    Real-time 2-D or 3-D ultrasound imaging systems are currently used for medical diagnosis. To achieve the required data acquisition rate, these systems rely on parallel beamforming, i.e., a single wide-angled beam is used for transmission and several narrow parallel beams are used for reception. When applied to harmonic imaging, the demand for high-amplitude pressure wave fields, necessary to generate the harmonic components, conflicts with the use of a wide-angled beam in transmission because this results in a large spatial decay of the acoustic pressure. To enhance the amplitude of the harmonics, it is preferable to do the reverse: transmit several narrow parallel beams and use a wide-angled beam in reception. Here, this concept is investigated to determine whether it can be used for harmonic imaging. The method proposed in this paper relies on orthogonal frequency division multiplexing (OFDM), which is used to create distinctive parallel beams in transmission. To test the proposed method, a numerical study has been performed, in which the transmit, receive, and combined beam profiles generated by a linear array have been simulated for the second-harmonic component. Compared with standard parallel beamforming, application of the proposed technique results in a gain of 12 dB for the main beam and in a reduction of the side lobes. Experimental verification in water has also been performed. Measurements obtained with a single-element emitting transducer and a hydrophone receiver confirm the possibility of exciting a practical ultrasound transducer with multiple Gaussian modulated pulses, each having a different center frequency, and the capability to generate distinguishable second-harmonic components.
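
    The sketch below illustrates the frequency-division idea behind the proposed transmit scheme under simple assumptions: several Gaussian-modulated tone bursts on distinct, illustrative carrier frequencies are summed into one transmit waveform so that each parallel beam can later be separated by band-pass filtering. It is not the authors' simulation code.

    ```python
    import numpy as np

    fs = 100e6                        # sample rate (Hz), illustrative
    t = np.arange(0, 4e-6, 1 / fs)    # 4-microsecond transmit window

    def gaussian_pulse(t, fc, t0, sigma):
        """Gaussian-modulated sinusoid centred at time t0 with carrier fc."""
        envelope = np.exp(-((t - t0) ** 2) / (2 * sigma ** 2))
        return envelope * np.cos(2 * np.pi * fc * t)

    # One carrier per parallel transmit beam; the spacing keeps the pulse
    # spectra separable on receive (the OFDM-style frequency division).
    carriers = [2.0e6, 2.5e6, 3.0e6, 3.5e6]
    waveform = sum(gaussian_pulse(t, fc, t0=2e-6, sigma=0.4e-6) for fc in carriers)

    # On receive, each beam's echo would be isolated with a matched band-pass
    # filter around its carrier before second-harmonic processing.
    ```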

  16. Parallel MR Imaging with Accelerations Beyond the Number of Receiver Channels Using Real Image Reconstruction.

    PubMed

    Ji, Jim; Wright, Steven

    2005-01-01

    Parallel imaging using multiple phased-array coils and receiver channels has become an effective approach to high-speed magnetic resonance imaging (MRI). To obtain high spatiotemporal resolution, the k-space is subsampled and later interpolated using multiple channel data. Higher subsampling factors result in faster image acquisition. However, the subsampling factors are upper-bounded by the number of parallel channels. Phase constraints have been previously proposed to overcome this limitation with some success. In this paper, we demonstrate that in certain applications it is possible to obtain acceleration factors potentially up to twice the number of channels by using a real image constraint. Data acquisition and processing methods to manipulate and estimate the image phase information are presented to improve image reconstruction. In-vivo brain MRI experimental results show that accelerations up to 6 are feasible with 4-channel data.

  17. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    PubMed Central

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers collectively solve the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
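
    A minimal sketch of the row-strip decomposition and boundary exchange described above, assuming mpi4py and a trivial thresholding step in place of the distributed Region Competition algorithm; array sizes and the halo pattern are illustrative, not the paper's implementation.

    ```python
    # A run might look like: mpiexec -n 4 python segment_strips.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    full_rows, cols = 4096, 4096                     # stand-in for a huge image
    my_rows = np.array_split(np.arange(full_rows), size)[rank]
    local = np.random.rand(len(my_rows), cols)       # each rank holds only its strip

    def segment(tile):
        """Placeholder local step (plain threshold) for the distributed algorithm."""
        return (tile > 0.5).astype(np.uint8)

    labels = segment(local)

    # Exchange one-row halos with neighbouring ranks so decisions at strip
    # boundaries can be reconciled in later iterations.
    up, down = rank - 1, rank + 1
    halo_up = np.empty(cols, dtype=np.uint8)
    halo_down = np.empty(cols, dtype=np.uint8)
    if up >= 0:
        comm.Sendrecv(np.ascontiguousarray(labels[0]), dest=up,
                      recvbuf=halo_up, source=up)
    if down < size:
        comm.Sendrecv(np.ascontiguousarray(labels[-1]), dest=down,
                      recvbuf=halo_down, source=down)

    # A cheap global reduction tracks progress without ever gathering the image.
    total_fg = comm.allreduce(int(labels.sum()), op=MPI.SUM)
    if rank == 0:
        print("foreground pixels:", total_fg)
    ```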

  18. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    PubMed

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers collectively solve the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  19. High-resolution magnetic resonance angiography of the lower extremities with a dedicated 36-element matrix coil at 3 Tesla.

    PubMed

    Kramer, Harald; Michaely, Henrik J; Matschl, Volker; Schmitt, Peter; Reiser, Maximilian F; Schoenberg, Stefan O

    2007-06-01

    Recent developments in hardware and software help to significantly increase the image quality of magnetic resonance angiography (MRA). Parallel acquisition techniques (PAT) help to increase spatial resolution and to decrease acquisition time, but also suffer from a decrease in signal-to-noise ratio (SNR). The move to higher field strength and the use of dedicated angiography coils can further increase spatial resolution while decreasing acquisition times at the same SNR as known from contemporary examinations. The goal of our study was to compare the image quality of MRA datasets acquired with a standard matrix coil to that of MRA datasets acquired with a dedicated peripheral angiography matrix coil and higher parallel imaging factors. Before the first volunteer examination, unaccelerated phantom measurements were performed with the different coils. After institutional review board approval, 15 healthy volunteers underwent MRA of the lower extremity on a 32 channel 3.0 Tesla MR System. In 5 of them MRA of the calves was performed with a PAT acceleration factor of 2 and a standard body-matrix surface coil placed at the legs. Ten volunteers underwent MRA of the calves with a dedicated 36-element angiography matrix coil: 5 with a PAT acceleration factor of 3 and 5 with a PAT acceleration factor of 4, respectively. The acquired volume and acquisition time were approximately the same in all examinations; only the spatial resolution was increased with the acceleration factor. The acquisition time per voxel was calculated. Image quality was rated independently by 2 readers in terms of vessel conspicuity, venous overlay, and occurrence of artifacts. The inter-reader agreement was calculated by the kappa-statistics. SNR and contrast-to-noise ratios from the different examinations were evaluated. All 15 volunteers completed the examination, and no adverse events occurred. None of the examinations showed venous overlay; 70% of the examinations showed an excellent vessel conspicuity, whereas in 50% of the examinations artifacts occurred. All of these artifacts were judged to be non-disturbing. Inter-reader agreement was good with kappa values ranging between 0.65 and 0.74. SNR and contrast-to-noise ratios did not show significant differences. Implementation of a dedicated coil for peripheral MRA at 3.0 Tesla helps to increase spatial resolution and to decrease acquisition time while keeping image quality equal. Venous overlay can be effectively avoided despite the use of high-resolution scans.

  20. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    NASA Astrophysics Data System (ADS)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of the matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Therefore, parallelization is needed to speed up the calculation process, which usually takes a long time. Graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for matrices of arbitrary size, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on the GPU (graphics processing unit).

  1. A 2D MTF approach to evaluate and guide dynamic imaging developments.

    PubMed

    Chao, Tzu-Cheng; Chung, Hsiao-Wen; Hoge, W Scott; Madore, Bruno

    2010-02-01

    As the number and complexity of partially sampled dynamic imaging methods continue to increase, reliable strategies to evaluate performance may prove most useful. In the present work, an analytical framework to evaluate given reconstruction methods is presented. A perturbation algorithm allows the proposed evaluation scheme to perform robustly without requiring knowledge about the inner workings of the method being evaluated. A main output of the evaluation process consists of a two-dimensional modulation transfer function, an easy-to-interpret visual rendering of a method's ability to capture all combinations of spatial and temporal frequencies. Approaches to evaluate noise properties and artifact content at all spatial and temporal frequencies are also proposed. One fully sampled phantom and three fully sampled cardiac cine datasets were subsampled (R = 4 and 8) and reconstructed with the different methods tested here. A hybrid method, which combines the main advantageous features observed in our assessments, was proposed and tested in a cardiac cine application, with acceleration factors of 3.5 and 6.3 (skip factors of 4 and 8, respectively). This approach combines features from methods such as k-t sensitivity encoding, unaliasing by Fourier encoding the overlaps in the temporal dimension-sensitivity encoding, generalized autocalibrating partially parallel acquisition, sensitivity profiles from an array of coils for encoding and reconstruction in parallel, self, hybrid referencing with unaliasing by Fourier encoding the overlaps in the temporal dimension and generalized autocalibrating partially parallel acquisition, and generalized autocalibrating partially parallel acquisition-enhanced sensitivity maps for sensitivity encoding reconstructions.

  2. A survey of parallel programming tools

    NASA Technical Reports Server (NTRS)

    Cheng, Doreen Y.

    1991-01-01

    This survey examines 39 parallel programming tools. Focus is placed on those tool capabilities needed for parallel scientific programming rather than for general computer science. The tools are classified with the current and future needs of the Numerical Aerodynamic Simulator (NAS) in mind: existing and anticipated NAS supercomputers and workstations; operating systems; programming languages; and applications. They are divided into four categories: suggested acquisitions; tools already brought in; tools worth tracking; and tools eliminated from further consideration at this time.

  3. Parallel computational fluid dynamics '91; Conference Proceedings, Stuttgart, Germany, Jun. 10-12, 1991

    NASA Technical Reports Server (NTRS)

    Reinsch, K. G. (Editor); Schmidt, W. (Editor); Ecer, A. (Editor); Haeuser, Jochem (Editor); Periaux, J. (Editor)

    1992-01-01

    A conference was held on parallel computational fluid dynamics, and the related papers are collected here. Topics discussed in these papers include: parallel implicit and explicit solvers for compressible flow, parallel computational techniques for Euler and Navier-Stokes equations, grid generation techniques for parallel computers, and aerodynamic simulation on massively parallel systems.

  4. Fast data acquisition with the CDF event builder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinervo, P.K.; Ragan, K.J.; Booth, A.W.

    1989-02-01

    The CDF (Collider Detector at Fermilab) Event Builder is an intelligent Fastbus device that performs parallel read out of a set of Fastbus slaves on multiple cable segments, formats the data, and writes the reformatted data to a Fastbus slave module. The authors review the properties of this device, and summarize its performance in the CDF data acquisition system.

  5. Superresolution parallel magnetic resonance imaging: Application to functional and spectroscopic imaging

    PubMed Central

    Otazo, Ricardo; Lin, Fa-Hsuan; Wiggins, Graham; Jordan, Ramiro; Sodickson, Daniel; Posse, Stefan

    2009-01-01

    Standard parallel magnetic resonance imaging (MRI) techniques suffer from residual aliasing artifacts when the coil sensitivities vary within the image voxel. In this work, a parallel MRI approach known as Superresolution SENSE (SURE-SENSE) is presented in which acceleration is performed by acquiring only the central region of k-space instead of increasing the sampling distance over the complete k-space matrix and reconstruction is explicitly based on intra-voxel coil sensitivity variation. In SURE-SENSE, parallel MRI reconstruction is formulated as a superresolution imaging problem where a collection of low resolution images acquired with multiple receiver coils are combined into a single image with higher spatial resolution using coil sensitivities acquired with high spatial resolution. The effective acceleration of conventional gradient encoding is given by the gain in spatial resolution, which is dictated by the degree of variation of the different coil sensitivity profiles within the low resolution image voxel. Since SURE-SENSE is an ill-posed inverse problem, Tikhonov regularization is employed to control noise amplification. Unlike standard SENSE, for which acceleration is constrained to the phase-encoding dimension/s, SURE-SENSE allows acceleration along all encoding directions — for example, two-dimensional acceleration of a 2D echo-planar acquisition. SURE-SENSE is particularly suitable for low spatial resolution imaging modalities such as spectroscopic imaging and functional imaging with high temporal resolution. Application to echo-planar functional and spectroscopic imaging in human brain is presented using two-dimensional acceleration with a 32-channel receiver coil. PMID:19341804
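
    As the abstract notes, SURE-SENSE solves an ill-posed inverse problem with Tikhonov regularization to control noise amplification. The generic sketch below shows a Tikhonov-regularized least-squares solve for an arbitrary encoding matrix A (which, in the actual method, would be built from high-resolution coil sensitivity profiles and low-resolution gradient encoding); the toy dimensions and noise level are assumptions, not the paper's setup.

    ```python
    import numpy as np

    def tikhonov_solve(A, y, lam):
        """Solve min_x ||A x - y||^2 + lam * ||x||^2 via the normal equations.

        A   : (m, n) complex encoding matrix (coil sensitivities x encoding)
        y   : (m,) stacked multi-coil measurements
        lam : regularization weight controlling noise amplification
        """
        n = A.shape[1]
        AhA = A.conj().T @ A
        return np.linalg.solve(AhA + lam * np.eye(n), A.conj().T @ y)

    # Toy dimensions: 64 stacked low-resolution samples -> 32-pixel estimate.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 32)) + 1j * rng.standard_normal((64, 32))
    x_true = rng.standard_normal(32)
    y = A @ x_true + 0.05 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
    x_hat = tikhonov_solve(A, y, lam=1.0)
    ```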

  6. Design and realization of photoelectric instrument binocular optical axis parallelism calibration system

    NASA Astrophysics Data System (ADS)

    Ying, Jia-ju; Chen, Yu-dan; Liu, Jie; Wu, Dong-sheng; Lu, Jun

    2016-10-01

    Misalignment of the binocular optical axes of a photoelectric instrument directly degrades observation performance. A digital calibration system for binocular optical axis parallelism is designed. Based on the principle of binocular optical axis calibration for photoelectric instruments, the system scheme is designed and realized, comprising four modules: a multiband parallel light tube, optical axis translation, an image acquisition system, and a software system. According to the different characteristics of thermal infrared imagers and low-light-level night viewers, different algorithms are used to localize the center of the cross reticle. Binocular optical axis parallelism calibration is thereby realized for both low-light-level night viewers and thermal infrared imagers.

  7. A novel anthropomorphic flow phantom for the quantitative evaluation of prostate DCE-MRI acquisition techniques

    NASA Astrophysics Data System (ADS)

    Knight, Silvin P.; Browne, Jacinta E.; Meaney, James F.; Smith, David S.; Fagan, Andrew J.

    2016-10-01

    A novel anthropomorphic flow phantom device has been developed, which can be used for quantitatively assessing the ability of magnetic resonance imaging (MRI) scanners to accurately measure signal/concentration time-intensity curves (CTCs) associated with dynamic contrast-enhanced (DCE) MRI. Modelling of the complex pharmacokinetics of contrast agents as they perfuse through the tumour capillary network has shown great promise for cancer diagnosis and therapy monitoring. However, clinical adoption has been hindered by methodological problems, resulting in a lack of consensus regarding the most appropriate acquisition and modelling methodology to use and a consequent wide discrepancy in published data. A heretofore overlooked source of such discrepancy may arise from measurement errors of tumour CTCs deriving from the imaging pulse sequence itself, while the effects on the fidelity of CTC measurement of using rapidly-accelerated sequences such as parallel imaging and compressed sensing remain unknown. The present work aimed to investigate these features by developing a test device in which ‘ground truth’ CTCs were generated and presented to the MRI scanner for measurement, thereby allowing for an assessment of the ability of the DCE-MRI protocol to accurately measure this curve shape. The device comprised a four-pump flow system wherein CTCs derived from prior patient prostate data were produced in measurement chambers placed within the imaged volume. The ground truth was determined as the mean of repeat measurements using an MRI-independent, custom-built optical imaging system. In DCE-MRI experiments, significant discrepancies between the ground truth and measured CTCs were found for both tumorous and healthy tissue-mimicking curve shapes. Pharmacokinetic modelling revealed errors in measured Ktrans, ve and kep values of up to 42%, 31%, and 50% respectively, following a simple variation of the parallel imaging factor and number of signal averages in the acquisition protocol. The device allows for the quantitative assessment and standardisation of DCE-MRI protocols (both existing and emerging).
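
    The pharmacokinetic parameters quoted above (Ktrans, ve, kep) come from fitting a generalized kinetic model to the measured CTCs. The sketch below implements the standard Tofts form of such a model as a plausible stand-in; the frame spacing, arterial input function, and parameter values are illustrative, not those of the study.

    ```python
    import numpy as np

    def tofts_ct(t, ktrans, ve, cp):
        """Standard Tofts model: Ct(t) = Ktrans * integral of Cp(u) exp(-kep (t-u)) du,
        with kep = Ktrans / ve.  The convolution is evaluated on a uniform grid."""
        kep = ktrans / ve
        dt = t[1] - t[0]
        irf = np.exp(-kep * t)                         # impulse response function
        return ktrans * np.convolve(cp, irf)[: len(t)] * dt

    # Illustrative arterial input function (simple bi-exponential) and tissue curve.
    t = np.arange(0, 300, 2.4)                          # seconds, 2.4 s frames assumed
    cp = 5.0 * (np.exp(-t / 120) - np.exp(-t / 10))     # toy AIF shape (mM)
    ct = tofts_ct(t, ktrans=0.25 / 60, ve=0.30, cp=cp)  # Ktrans given here in 1/s
    ```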

  8. Simultaneous multi-slice combined with PROPELLER.

    PubMed

    Norbeck, Ola; Avventi, Enrico; Engström, Mathias; Rydén, Henric; Skare, Stefan

    2018-08-01

    Simultaneous multi-slice (SMS) imaging is an advantageous method for accelerating MRI scans, allowing reduced scan time, increased slice coverage, or high temporal resolution with limited image quality penalties. In this work we combine the advantages of SMS acceleration with the motion correction and artifact reduction capabilities of the PROPELLER technique. A PROPELLER sequence was developed with support for CAIPIRINHA and phase optimized multiband radio frequency pulses. To minimize the time spent on acquiring calibration data, both in-plane generalized autocalibrating partial parallel acquisition (GRAPPA) and slice-GRAPPA weights for all PROPELLER blade angles were calibrated on a single fully sampled PROPELLER blade volume. Therefore, the proposed acquisition included a single fully sampled blade volume, with the remaining blades accelerated in both the phase and slice encoding directions without additional auto calibrating signal lines. Comparison to 3D RARE was performed as well as demonstration of 3D motion correction performance on the SMS PROPELLER data. We show that PROPELLER acquisitions can be efficiently accelerated with SMS using a short embedded calibration. The potential in combining these two techniques was demonstrated with a high quality 1.0 × 1.0 × 1.0 mm³ resolution T2-weighted volume, free from banding artifacts, and capable of 3D retrospective motion correction, with higher effective resolution compared to 3D RARE. With the combination of SMS acceleration and PROPELLER imaging, thin-sliced reformattable T2-weighted image volumes with 3D retrospective motion correction capabilities can be rapidly acquired with low sensitivity to flow and head motion. Magn Reson Med 80:496-506, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  9. Optical/MRI Multimodality Molecular Imaging

    NASA Astrophysics Data System (ADS)

    Ma, Lixin; Smith, Charles; Yu, Ping

    2007-03-01

    Multimodality molecular imaging that combines anatomical and functional information has shown promise in development of tumor-targeted pharmaceuticals for cancer detection or therapy. We present a new multimodality imaging technique that combines fluorescence molecular tomography (FMT) and magnetic resonance imaging (MRI) for in vivo molecular imaging of preclinical tumor models. Unlike other optical/MRI systems, the new molecular imaging system uses parallel phase acquisition based on heterodyne principle. The system has a higher accuracy of phase measurements, reduced noise bandwidth, and an efficient modulation of the fluorescence diffuse density waves. Fluorescent Bombesin probes were developed for targeting breast cancer cells and prostate cancer cells. Tissue phantom and small animal experiments were performed for calibration of the imaging system and validation of the targeting probes.

  10. Massively parallel sensing of trace molecules and their isotopologues with broadband subharmonic mid-infrared frequency combs

    NASA Astrophysics Data System (ADS)

    Muraviev, A. V.; Smolski, V. O.; Loparo, Z. E.; Vodopyanov, K. L.

    2018-04-01

    Mid-infrared spectroscopy offers supreme sensitivity for the detection of trace gases, solids and liquids based on tell-tale vibrational bands specific to this spectral region. Here, we present a new platform for mid-infrared dual-comb Fourier-transform spectroscopy based on a pair of ultra-broadband subharmonic optical parametric oscillators pumped by two phase-locked thulium-fibre combs. Our system provides fast (7 ms for a single interferogram), moving-parts-free, simultaneous acquisition of 350,000 spectral data points, spaced by a 115 MHz intermodal interval over the 3.1-5.5 µm spectral range. Parallel detection of 22 trace molecular species in a gas mixture, including isotopologues containing isotopes such as 13C, 18O, 17O, 15N, 34S, 33S and deuterium, with part-per-billion sensitivity and sub-Doppler resolution is demonstrated. The technique also features absolute optical frequency referencing to an atomic clock, a high degree of mutual coherence between the two mid-infrared combs with a relative comb-tooth linewidth of 25 mHz, coherent averaging and feasibility for kilohertz-scale spectral resolution.

  11. Development of a high-performance multichannel system for time-correlated single photon counting

    NASA Astrophysics Data System (ADS)

    Peronio, P.; Cominelli, A.; Acconcia, G.; Rech, I.; Ghioni, M.

    2017-05-01

    Time-Correlated Single Photon Counting (TCSPC) is one of the most effective techniques for measuring weak and fast optical signals. It outperforms traditional "analog" techniques due to its high sensitivity along with high temporal resolution. Despite those significant advantages, a main drawback still exists: the long acquisition time needed to perform a measurement. In past years many TCSPC systems have been developed with ever higher numbers of channels, aimed at dealing with that limitation. Nevertheless, modern systems suffer from a strong trade-off between parallelism level and performance: the higher the number of channels, the poorer the performance. In this work we present the design of a 32x32 TCSPC system meant to overcome the existing trade-off. To this aim, different technologies have been employed to get the best performance from both the detectors and the sensing circuits. The combination of different technologies will be enabled by Through Silicon Vias (TSVs), which will be investigated as a possible solution for connecting the detectors to the sensing circuits. When dealing with a high number of channels, the count rate is inevitably limited by the affordable throughput to the external PC. We targeted a throughput of 10 Gb/s, which is beyond the state of the art, and designed the number of TCSPC channels accordingly. A dynamic-routing logic will connect the detectors to the lower number of acquisition chains.

  12. Rotating single-shot acquisition (RoSA) with composite reconstruction for fast high-resolution diffusion imaging.

    PubMed

    Wen, Qiuting; Kodiweera, Chandana; Dale, Brian M; Shivraman, Giri; Wu, Yu-Chien

    2018-01-01

    To accelerate high-resolution diffusion imaging, rotating single-shot acquisition (RoSA) with composite reconstruction is proposed. Acceleration was achieved by acquiring only one rotating single-shot blade per diffusion direction, and high-resolution diffusion-weighted (DW) images were reconstructed by using similarities of neighboring DW images. A parallel imaging technique was implemented in RoSA to further improve the image quality and acquisition speed. RoSA performance was evaluated by simulation and human experiments. A brain tensor phantom was developed to determine an optimal blade size and rotation angle by considering similarity in DW images, off-resonance effects, and k-space coverage. With the optimal parameters, RoSA MR pulse sequence and reconstruction algorithm were developed to acquire human brain data. For comparison, multishot echo planar imaging (EPI) and conventional single-shot EPI sequences were performed with matched scan time, resolution, field of view, and diffusion directions. The simulation indicated an optimal blade size of 48 × 256 and a 30° rotation angle. For 1 × 1 mm² in-plane resolution, RoSA was 12 times faster than the multishot acquisition with comparable image quality. With the same acquisition time as SS-EPI, RoSA provided superior image quality and minimum geometric distortion. RoSA offers fast, high-quality, high-resolution diffusion images. The composite image reconstruction is model-free and compatible with various diffusion computation approaches including parametric and nonparametric analyses. Magn Reson Med 79:264-275, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  13. CLAss-Specific Subspace Kernel Representations and Adaptive Margin Slack Minimization for Large Scale Classification.

    PubMed

    Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan

    2018-02-01

    In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called the adaptive margin slack minimization to iteratively improve the classification accuracy through adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large scale data. We propose two such frameworks: the memory efficient sequential processing for sequential data processing and the parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify their validity.

  14. Running Neuroimaging Applications on Amazon Web Services: How, When, and at What Cost?

    PubMed

    Madhyastha, Tara M; Koh, Natalie; Day, Trevor K M; Hernández-Fernández, Moises; Kelley, Austin; Peterson, Daniel J; Rajan, Sabreena; Woelfer, Karl A; Wolf, Jonathan; Grabowski, Thomas J

    2017-01-01

    The contribution of this paper is to identify and describe current best practices for using Amazon Web Services (AWS) to execute neuroimaging workflows "in the cloud." Neuroimaging offers a vast set of techniques by which to interrogate the structure and function of the living brain. However, many of the scientists for whom neuroimaging is an extremely important tool have limited training in parallel computation. At the same time, the field is experiencing a surge in computational demands, driven by a combination of data-sharing efforts, improvements in scanner technology that allow acquisition of images with higher image resolution, and by the desire to use statistical techniques that stress processing requirements. Most neuroimaging workflows can be executed as independent parallel jobs and are therefore excellent candidates for running on AWS, but the overhead of learning to do so and determining whether it is worth the cost can be prohibitive. In this paper we describe how to identify neuroimaging workloads that are appropriate for running on AWS, how to benchmark execution time, and how to estimate cost of running on AWS. By benchmarking common neuroimaging applications, we show that cloud computing can be a viable alternative to on-premises hardware. We present guidelines that neuroimaging labs can use to provide a cluster-on-demand type of service that should be familiar to users, and scripts to estimate cost and create such a cluster.
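
    A back-of-the-envelope cost estimate of the kind the paper recommends might look like the sketch below; the instance price, job packing, and storage figures are placeholders, not values quoted from the paper.

    ```python
    def estimate_cost(n_subjects, hours_per_subject, price_per_hour,
                      jobs_per_instance=1, storage_gb=0,
                      storage_price_gb_month=0.10, months=1):
        """Rough cost of an embarrassingly parallel neuroimaging workload in the cloud."""
        instance_hours = n_subjects * hours_per_subject / jobs_per_instance
        compute = instance_hours * price_per_hour
        storage = storage_gb * storage_price_gb_month * months
        return compute + storage

    # Example: 200 subjects, 6 h each, a hypothetical $0.40/h instance running
    # 4 subjects at a time, plus 500 GB of data kept for one month.
    print(f"${estimate_cost(200, 6, 0.40, jobs_per_instance=4, storage_gb=500):.2f}")
    ```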

  15. Parallel Bimodal Bilingual Acquisition: A Hearing Child Mediated in a Deaf Family

    ERIC Educational Resources Information Center

    Cramér-Wolrath, Emelie

    2013-01-01

    The aim of this longitudinal case study was to describe bimodal and bilingual acquisition in a hearing child, Hugo, especially the role his Deaf family played in his linguistic education. Video observations of the family interactions were conducted from the time Hugo was 10 months of age until he was 40 months old. The family language was Swedish…

  16. A 24-ch Phased-Array System for Hyperpolarized Helium Gas Parallel MRI to Evaluate Lung Functions.

    PubMed

    Lee, Ray; Johnson, Glyn; Stefanescu, Cornel; Trampel, Robert; McGuinness, Georgeann; Stoeckel, Bernd

    2005-01-01

    Hyperpolarized 3He gas MRI has significant potential for assessing pulmonary function. Because the non-equilibrium polarization of the gas results in a steady depletion of the signal level over the course of the excitations, the signal-to-noise ratio (SNR) can be independent of the number of data acquisitions under certain circumstances. This provides a unique opportunity for parallel MRI to gain both temporal and spatial resolution without reducing SNR. We have built a 24-channel receive / 2-channel transmit phased array system for 3He parallel imaging. Our in vivo experimental results show that significant temporal and spatial resolution can be gained at no cost to the SNR. With 3D data acquisition, an eightfold (2x4) scan time reduction can be achieved without any aliasing in images. Additionally, a rigorous analysis of the use of low-impedance preamplifiers for decoupling presented evidence of strong coupling.

  17. Laser Ablation-Aerosol Mass Spectrometry-Chemical Ionization Mass Spectrometry for Ambient Surface Imaging

    DOE PAGES

    Berry, Jennifer L.; Day, Douglas A.; Elseberg, Tim; ...

    2018-02-20

    Mass spectrometry imaging is becoming an increasingly common analytical technique due to its ability to provide spatially resolved chemical information. In this paper, we report a novel imaging approach combining laser ablation with two mass spectrometric techniques, aerosol mass spectrometry and chemical ionization mass spectrometry, separately and in parallel. Both mass spectrometric methods provide the fast response, rapid data acquisition, low detection limits, and high-resolution peak separation desirable for imaging complex samples. Additionally, the two techniques provide complementary information, with aerosol mass spectrometry providing near universal detection of all aerosol molecules and chemical ionization mass spectrometry with a heated inlet providing molecular-level detail of both gases and aerosols. The two techniques operate with atmospheric pressure interfaces and require no matrix addition for ionization, allowing for samples to be investigated in their native state under ambient pressure conditions. We demonstrate the ability of laser ablation-aerosol mass spectrometry-chemical ionization mass spectrometry (LA-AMS-CIMS) to create 2D images of both standard compounds and complex mixtures. Finally, the results suggest that LA-AMS-CIMS, particularly when combined with advanced data analysis methods, could have broad applications in mass spectrometry imaging.

  18. Laser Ablation-Aerosol Mass Spectrometry-Chemical Ionization Mass Spectrometry for Ambient Surface Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Jennifer L.; Day, Douglas A.; Elseberg, Tim

    Mass spectrometry imaging is becoming an increasingly common analytical technique due to its ability to provide spatially resolved chemical information. In this paper, we report a novel imaging approach combining laser ablation with two mass spectrometric techniques, aerosol mass spectrometry and chemical ionization mass spectrometry, separately and in parallel. Both mass spectrometric methods provide the fast response, rapid data acquisition, low detection limits, and high-resolution peak separation desirable for imaging complex samples. Additionally, the two techniques provide complementary information, with aerosol mass spectrometry providing near universal detection of all aerosol molecules and chemical ionization mass spectrometry with a heated inlet providing molecular-level detail of both gases and aerosols. The two techniques operate with atmospheric pressure interfaces and require no matrix addition for ionization, allowing for samples to be investigated in their native state under ambient pressure conditions. We demonstrate the ability of laser ablation-aerosol mass spectrometry-chemical ionization mass spectrometry (LA-AMS-CIMS) to create 2D images of both standard compounds and complex mixtures. Finally, the results suggest that LA-AMS-CIMS, particularly when combined with advanced data analysis methods, could have broad applications in mass spectrometry imaging.

  19. Multimodal full-field optical coherence tomography on biological tissue: toward all optical digital pathology

    NASA Astrophysics Data System (ADS)

    Harms, F.; Dalimier, E.; Vermeulen, P.; Fragola, A.; Boccara, A. C.

    2012-03-01

    Optical Coherence Tomography (OCT) is an efficient technique for in-depth optical biopsy of biological tissues, relying on interferometric selection of ballistic photons. Full-Field Optical Coherence Tomography (FF-OCT) is an alternative approach to Fourier-domain OCT (spectral or swept-source), allowing parallel acquisition of en-face optical sections. Using a medium numerical aperture objective, it is possible to reach an isotropic resolution of about 1×1×1 µm. After stitching a grid of acquired images, FF-OCT gives access to the architecture of the tissue, for both macroscopic and microscopic structures, in a non-invasive process, which makes the technique particularly suitable for applications in pathology. Here we report a multimodal approach to FF-OCT, combining two Full-Field techniques for collecting a backscattered endogenous OCT image and a fluorescence exogenous image in parallel. Considering pathological diagnosis of cancer, visualization of cell nuclei is of paramount importance. OCT images, even at the highest resolution, usually fail to identify individual nuclei due to the nature of the optical contrast used. We have built a multimodal optical microscope based on the combination of FF-OCT and Structured Illumination Microscopy (SIM). We used 30x immersion objectives, with a numerical aperture of 1.05, allowing for sub-micron transverse resolution. Fluorescent staining of nuclei was obtained using specific fluorescent dyes such as acridine orange. We present multimodal images of healthy and pathological skin tissue at various scales. This instrumental development paves the way for improvements of standard pathology procedures, as a faster, non-sacrificial, operator-independent digital optical method compared to frozen sections.

  20. Detecting opportunities for parallel observations on the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Lucks, Michael

    1992-01-01

    The presence of multiple scientific instruments aboard the Hubble Space Telescope provides opportunities for parallel science, i.e., the simultaneous use of different instruments for different observations. Determining whether candidate observations are suitable for parallel execution depends on numerous criteria (some involving quantitative tradeoffs) that may change frequently. A knowledge-based approach is presented for constructing a scoring function to rank candidate pairs of observations for parallel science. In the Parallel Observation Matching System (POMS), spacecraft knowledge and schedulers' preferences are represented using a uniform set of mappings, or knowledge functions. Assessment of parallel science opportunities is achieved via composition of the knowledge functions in a prescribed manner. The knowledge acquisition and explanation facilities of the system are presented. The methodology is applicable to many other multiple-criteria assessment problems.
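
    A minimal sketch of the knowledge-function composition idea, assuming each function maps a candidate pair to a score in [0, 1] and that scores are combined by a product; the particular functions, features, and instrument names are invented for illustration and are not POMS's actual mappings.

    ```python
    from math import prod

    def slew_compatibility(pair):
        """Score ~1 when both observations point at (nearly) the same target."""
        return 1.0 if pair["separation_deg"] < 0.1 else 0.0

    def duration_overlap(pair):
        """Fraction of the longer exposure covered by the shorter one."""
        a, b = pair["durations_s"]
        return min(a, b) / max(a, b)

    def instrument_conflict(pair):
        """Zero out pairs that would need the same instrument simultaneously."""
        return 0.0 if pair["instruments"][0] == pair["instruments"][1] else 1.0

    KNOWLEDGE_FUNCTIONS = [slew_compatibility, duration_overlap, instrument_conflict]

    def score(pair):
        """Compose the knowledge functions; any hard veto (0.0) removes the pair."""
        return prod(f(pair) for f in KNOWLEDGE_FUNCTIONS)

    candidates = [
        {"separation_deg": 0.02, "durations_s": (900, 1200), "instruments": ("WFPC2", "FOS")},
        {"separation_deg": 0.02, "durations_s": (300, 1200), "instruments": ("FOS", "FOS")},
    ]
    ranked = sorted(candidates, key=score, reverse=True)
    ```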

  1. Exercise Science Principles and the Vocal Warm-up: Implications for Singing Voice Pedagogy.

    PubMed

    Hoch, Matthew; Sandage, Mary J

    2018-01-01

    Principles from exercise science literature were applied to singing warm-up pedagogy as a method for examining parallels between athletic and voice training. Analysis of the use of exercise principles in vocal warm-up should illuminate aspects of voice training that may be further developed in the future. A selected canon of standard voice pedagogy texts and well-regarded warm-up methods were evaluated for use of exercise science principles for skill acquisition and fatigue resistance. Exercises were then categorized according to whether they were used for the purpose of skill acquisition (specificity), training up to tasks (overload), or detraining (reversibility). A preliminary review of well-established voice pedagogy programs reveals a strong bias toward the skill acquisition aspects of vocal warm-up, with little commentary on the fatigue management aspects. Further, the small number of vocalises examined that are not skill-acquisition oriented fall into a third "habilitative" category that likewise does not relate to overload but may play a role in offsetting reversibility. Although a systematic pedagogy for skill acquisition has emerged in the literature and practice of voice pedagogy, a parallel pedagogy for fatigue management has yet to be established. Identification of a systematic pedagogy for training up to specific singing genres and development of a singing maintenance program to avoid detraining may help the singer avoid injury. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  2. High resolution human diffusion tensor imaging using 2-D navigated multi-shot SENSE EPI at 7 Tesla

    PubMed Central

    Jeong, Ha-Kyu; Gore, John C.; Anderson, Adam W.

    2012-01-01

    The combination of parallel imaging with partial Fourier acquisition has greatly improved the performance of diffusion-weighted single-shot EPI and is the preferred method for acquisitions at low to medium magnetic field strength such as 1.5 or 3 Tesla. Increased off-resonance effects and reduced transverse relaxation times at 7 Tesla, however, generate more significant artifacts than at lower magnetic field strength and limit data acquisition. Additional acceleration of k-space traversal using a multi-shot approach, which acquires a subset of k-space data after each excitation, reduces these artifacts relative to conventional single-shot acquisitions. However, corrections for motion-induced phase errors are not straightforward in accelerated, diffusion-weighted multi-shot EPI because of phase aliasing. In this study, we introduce a simple acquisition and corresponding reconstruction method for diffusion-weighted multi-shot EPI with parallel imaging suitable for use at high field. The reconstruction uses a simple modification of the standard SENSE algorithm to account for shot-to-shot phase errors; the method is called Image Reconstruction using Image-space Sampling functions (IRIS). Using this approach, reconstruction from highly aliased in vivo image data using 2-D navigator phase information is demonstrated for human diffusion-weighted imaging studies at 7 Tesla. The final reconstructed images show submillimeter in-plane resolution with no ghosts and much reduced blurring and off-resonance artifacts. PMID:22592941

  3. Effects of the frame acquisition rate on the sensitivity of gastro-oesophageal reflux scintigraphy

    PubMed Central

    Codreanu, I; Chamroonrat, W; Edwards, K

    2013-01-01

    Objective: To compare the sensitivity of gastro-oesophageal reflux (GOR) scintigraphy at 5-s and 60-s frame acquisition rates. Methods: GOR scintigraphy studies of 50 subjects (1 month–20 years old, mean 42 months) were analysed concurrently using 5-s and 60-s acquisition frames. Reflux episodes were graded as low if activity was detected in the distal half of the oesophagus and high if activity was detected in its upper half or in the oral cavity. For comparison purposes, detected GOR in any number of 5-s frames corresponding to one 60-s frame was counted as one episode. Results: A total of 679 episodes of GOR to the upper oesophagus were counted using a 5-s acquisition technique. Only 183 of such episodes were detected on 60-s acquisition images. To the lower oesophagus, a total of 1749 GOR episodes were detected using a 5-s acquisition technique and only 1045 episodes using 60-s acquisition frames (these also included the high-level GOR on 5-s frames counted as low level on 60-s acquisition frames). 10 patients had high-level GOR episodes that were detected only using a 5-s acquisition technique, leading to a different diagnosis in these patients. No correlation between the number of reflux episodes and the gastric emptying rates was noted. Conclusion: The 5-s frame acquisition technique is more sensitive than the 60-s frame acquisition technique for detecting both high- and low-level GOR. Advances in knowledge: Brief GOR episodes with a relatively low number of radioactive counts are frequently indistinguishable from intense background activity on 60-s acquisition frames. PMID:23520226
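
    A toy simulation, under assumed count rates and a simple threshold detector, of why brief reflux episodes that stand out in 5-s frames can be diluted into the background in 60-s frames; all numbers are illustrative and do not reproduce the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    seconds = 3600
    counts = rng.poisson(2, seconds).astype(float)   # background counts/s in the ROI
    for start in (400, 1500, 2900):                  # three brief (3 s) reflux episodes
        counts[start:start + 3] += 8                 # small extra count rate

    def frames(per_second, frame_s):
        usable = len(per_second) // frame_s * frame_s
        return per_second[:usable].reshape(-1, frame_s).sum(axis=1)

    def flagged_frames(per_second, frame_s, k=4.0):
        f = frames(per_second, frame_s)
        b = np.median(f)                             # robust background level
        return int((f > b + k * np.sqrt(b)).sum())

    # With these illustrative numbers the brief episodes typically exceed the
    # threshold in 5-s frames but are diluted into the background in 60-s frames.
    print("flagged 5-s frames :", flagged_frames(counts, 5))
    print("flagged 60-s frames:", flagged_frames(counts, 60))
    ```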

  4. Testing Hypotheses about Second Language Acquisition: The Copula and Negative in Three Subjects. Working Papers on Bilingualism, No. 3.

    ERIC Educational Resources Information Center

    Cancino, Herlinda; And Others

    Three hypotheses are examined in relation to English copula and negative utterances produced by three native Spanish speakers. The hypotheses are interference, interlanguage and L1=L2, which states that acquisition of a language by second language learners will parallel acquisition of the same language by first language learners. The results of the…

  5. Rapid brain MRI acquisition techniques at ultra-high fields

    PubMed Central

    Setsompop, Kawin; Feinberg, David A.; Polimeni, Jonathan R.

    2017-01-01

    Ultra-high-field MRI provides large increases in signal-to-noise ratio as well as enhancement of several contrast mechanisms in both structural and functional imaging. Combined, these gains result in a substantial boost in contrast-to-noise ratio that can be exploited for higher spatial resolution imaging to extract finer-scale information about the brain. With increased spatial resolution, however, is a concurrent increased image encoding burden that can cause unacceptably long scan times for structural imaging and slow temporal sampling of the hemodynamic response in functional MRI—particularly when whole-brain imaging is desired. To address this issue, new directions of imaging technology development—such as the move from conventional 2D slice-by-slice imaging to more efficient Simultaneous MultiSlice (SMS) or MultiBand imaging (which can be viewed as “pseudo-3D” encoding) as well as full 3D imaging—have provided dramatic improvements in acquisition speed. Such imaging paradigms provide higher SNR efficiency as well as improved encoding efficiency. Moreover, SMS and 3D imaging can make better use of coil sensitivity information in multi-channel receiver arrays used for parallel imaging acquisitions through controlled aliasing in multiple spatial directions. This has enabled unprecedented acceleration factors of an order of magnitude or higher in these imaging acquisition schemes, with low image artifact levels and high SNR. Here we review the latest developments of SMS and 3D imaging methods and related technologies at ultra-high field for rapid high-resolution functional and structural imaging of the brain. PMID:26835884

  6. 3.0 Tesla high spatial resolution contrast-enhanced magnetic resonance angiography (CE-MRA) of the pulmonary circulation: initial experience with a 32-channel phased array coil using a high relaxivity contrast agent.

    PubMed

    Nael, Kambiz; Fenchel, Michael; Krishnam, Mayil; Finn, J Paul; Laub, Gerhard; Ruehm, Stefan G

    2007-06-01

    To evaluate the technical feasibility of high spatial resolution contrast-enhanced magnetic resonance angiography (CE-MRA) with highly accelerated parallel acquisition at 3.0 T using a 32-channel phased array coil, and a high relaxivity contrast agent. Ten adult healthy volunteers (5 men, 5 women, aged 21-66 years) underwent high spatial resolution CE-MRA of the pulmonary circulation. Imaging was performed at 3 T using a 32-channel phased array coil. After intravenous injection of 1 mL of gadobenate dimeglumine (Gd-BOPTA) at 1.5 mL/s, a timing bolus was used to measure the transit time from the arm vein to the main pulmonary artery. Subsequently, following intravenous injection of 0.1 mmol/kg of Gd-BOPTA at the same rate, isotropic high spatial resolution (1 x 1 x 1 mm³) CE-MRA data sets of the entire pulmonary circulation were acquired using a fast gradient-recalled echo sequence (TR/TE 3/1.2 milliseconds, FA 18 degrees) and highly accelerated parallel acquisition (GRAPPA x 6) during a 20-second breath hold. The presence of artifact, noise, and image quality of the pulmonary arterial segments were evaluated independently by 2 radiologists. Phantom measurements were performed to assess the signal-to-noise ratio (SNR). Statistical analysis of data was performed by using Wilcoxon rank sum test and 2-sample Student t test. The interobserver variability was tested by kappa coefficient. All studies were of diagnostic quality as determined by both observers. The pulmonary arteries were routinely identified up to fifth-order branches, with definition in the diagnostic range and excellent interobserver agreement (kappa = 0.84, 95% confidence interval 0.77-0.90). Phantom measurements showed significantly lower SNR (P < 0.01) using GRAPPA (17.3 +/- 18.8) compared with measurements without parallel acquisition (58 +/- 49.4). The described 3 T CE-MRA protocol in addition to high T1 relaxivity of Gd-BOPTA provides sufficient SNR to support highly accelerated parallel acquisition (GRAPPA x 6), resulting in acquisition of isotropic (1 x 1 x 1 mm³) voxels over the entire pulmonary circulation in 20 seconds.

  7. 76 FR 38178 - Change in Bank Control Notices; Acquisitions of Shares of a Bank or Bank Holding Company

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-29

    ..., New York, New York 10045-0001: 1. Thomas H. Lee (Alternative) Fund VI, L.P., Thomas H. Lee (Alternative) Parallel Fund VI, L.P., Thomas H. Lee (Alternative) Parallel (DT) Fund VI, L.P., THL FBC Equity Investors, L.P., THL Advisors (Alternative) VI, L.P., Thomas H. Lee (Alternative) VI, Ltd., THL Managers VI...

  8. Hybrid Parallel-Slant Hole Collimators for SPECT Imaging

    NASA Astrophysics Data System (ADS)

    Bai, Chuanyong; Shao, Ling; Ye, Jinghan; Durbin, M.; Petrillo, M.

    2004-06-01

    We propose a new collimator geometry, the hybrid parallel-slant (HPS) hole geometry, to improve sensitivity for SPECT imaging with large field of view (LFOV) gamma cameras. A HPS collimator has one segment with parallel holes and one or more segments with slant holes. The collimator can be mounted on a conventional SPECT LFOV system that uses parallel-beam collimators, and no additional detector or collimator motion is required for data acquisition. The parallel segment of the collimator allows for the acquisition of a complete data set of the organs-of-interest and the slant segments provide additional data. In this work, simulation studies of an MCAT phantom were performed with a HPS collimator with one slant segment. The slant direction points from patient head to patient feet with a slant angle of 30°. We simulated 64 projection views over 180° with the modeling of nonuniform attenuation effect, and then reconstructed images using an MLEM algorithm that incorporated the hybrid geometry. It was shown that sensitivity to the cardiac region of the phantom was increased by approximately 50% when using the HPS collimator compared with a parallel-hole collimator. No visible artifacts were observed in the myocardium and the signal-to-noise ratio (SNR) of the myocardium walls was improved. Compared with collimators with other geometries, using a HPS collimator has the following advantages: (a) significant sensitivity increase; (b) a complete data set obtained from the parallel segment that allows for artifact-free image reconstruction; and (c) no additional collimator or detector motion. This work demonstrates the potential value of hybrid geometry in collimator design for LFOV SPECT imaging.
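
    The abstract reconstructs with an MLEM algorithm that incorporates the hybrid geometry. The sketch below is a generic MLEM iteration for an arbitrary system matrix A (whose rows would, in the real case, model both the parallel and slant projection views); the random system matrix and synthetic counts are purely illustrative.

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50, eps=1e-12):
        """Maximum-likelihood EM reconstruction for emission tomography.

        A : (n_bins, n_voxels) system matrix mapping activity to projection bins
            (for the hybrid collimator, rows cover both parallel and slant views)
        y : (n_bins,) measured counts
        """
        x = np.ones(A.shape[1])                 # uniform non-negative start
        sens = A.sum(axis=0) + eps              # sensitivity image, A^T 1
        for _ in range(n_iter):
            ratio = y / (A @ x + eps)           # measured / forward-projected
            x *= (A.T @ ratio) / sens           # multiplicative EM update
        return x

    # Tiny synthetic test: random system matrix, known activity, Poisson data.
    rng = np.random.default_rng(0)
    A = rng.random((128, 64))
    x_true = rng.random(64) * 10
    y = rng.poisson(A @ x_true)
    x_hat = mlem(A, y, n_iter=100)
    ```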

  9. High Spatiotemporal Resolution Dynamic Contrast-Enhanced MR Enterography in Crohn Disease Terminal Ileitis Using Continuous Golden-Angle Radial Sampling, Compressed Sensing, and Parallel Imaging.

    PubMed

    Ream, Justin M; Doshi, Ankur; Lala, Shailee V; Kim, Sooah; Rusinek, Henry; Chandarana, Hersh

    2015-06-01

    The purpose of this article was to assess the feasibility of golden-angle radial acquisition with compressed sensing reconstruction (Golden-angle RAdial Sparse Parallel [GRASP]) for acquiring high temporal resolution data for pharmacokinetic modeling while maintaining high image quality in patients with Crohn disease terminal ileitis. Fourteen patients with biopsy-proven Crohn terminal ileitis were scanned using both contrast-enhanced GRASP and Cartesian breath-hold (volume-interpolated breath-hold examination [VIBE]) acquisitions. GRASP data were reconstructed with 2.4-second temporal resolution and fitted to the generalized kinetic model using an individualized arterial input function to derive the volume transfer coefficient (K(trans)) and interstitial volume (v(e)). Reconstructions, including data from the entire GRASP acquisition and Cartesian VIBE acquisitions, were rated for image quality, artifact, and detection of typical Crohn ileitis features. Inflamed loops of ileum had significantly higher K(trans) (3.36 ± 2.49 vs 0.86 ± 0.49 min(-1), p < 0.005) and v(e) (0.53 ± 0.15 vs 0.20 ± 0.11, p < 0.005) compared with normal bowel loops. There were no significant differences between GRASP and Cartesian VIBE for overall image quality (p = 0.180) or detection of Crohn ileitis features, although streak artifact was worse with the GRASP acquisition (p = 0.001). High temporal resolution data for pharmacokinetic modeling and high spatial resolution data for morphologic image analysis can be achieved in the same acquisition using GRASP.
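
    The pharmacokinetic fit referred to above is the generalized kinetic (Tofts) model. The sketch below illustrates such a fit under the assumptions that a uniformly sampled time axis t (starting at zero, in minutes), an arterial input function cp, and a tissue concentration curve ct are already available; variable names are illustrative, not taken from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def tofts(t, ktrans, ve, cp):
            # Ct(t) = Ktrans * integral_0^t cp(tau) exp(-Ktrans (t - tau) / ve) dtau,
            # discretized as a causal convolution on a uniform time grid.
            dt = t[1] - t[0]
            kernel = np.exp(-ktrans * t / ve)
            return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

        def fit_tofts(t, ct, cp):
            model = lambda tt, ktrans, ve: tofts(tt, ktrans, ve, cp)
            (ktrans, ve), _ = curve_fit(model, t, ct, p0=(1.0, 0.3),
                                        bounds=([1e-3, 1e-3], [10.0, 1.0]))
            return ktrans, ve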

  10. Parallel multispot smFRET analysis using an 8-pixel SPAD array

    NASA Astrophysics Data System (ADS)

    Ingargiola, A.; Colyer, R. A.; Kim, D.; Panzeri, F.; Lin, R.; Gulinatti, A.; Rech, I.; Ghioni, M.; Weiss, S.; Michalet, X.

    2012-02-01

    Single-molecule Förster resonance energy transfer (smFRET) is a powerful tool for extracting distance information between two fluorophores (a donor and acceptor dye) on a nanometer scale. This method is commonly used to monitor binding interactions or intra- and intermolecular conformations in biomolecules freely diffusing through a focal volume or immobilized on a surface. The diffusing geometry has the advantage of not interfering with the molecules and of giving access to fast time scales. However, separating photon bursts from individual molecules requires low sample concentrations. This results in long acquisition times (several minutes to an hour) to obtain sufficient statistics. It also prevents studying dynamic phenomena happening on time scales larger than the burst duration and smaller than the acquisition time. Parallelization of acquisition overcomes this limit by increasing the acquisition rate using the same low concentrations required for individual molecule burst identification. In this work we present a new two-color smFRET approach using multispot excitation and detection. The donor excitation pattern is composed of 4 spots arranged in a linear pattern. The fluorescent emission of donor and acceptor dyes is then collected and refocused on two separate areas of a custom 8-pixel SPAD array. We report smFRET measurements performed on DNA samples synthesized with various distances between the donor and acceptor fluorophores. We demonstrate that our approach provides identical FRET efficiency values to a conventional single-spot acquisition approach, but with a reduced acquisition time. Our work thus opens the way to high-throughput smFRET analysis on freely diffusing molecules.
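
    Burst-wise FRET efficiencies of the kind compared here are commonly computed as a (gamma-corrected) proximity ratio, E = nA / (nA + gamma * nD); the sketch below assumes per-burst donor and acceptor photon counts have already been extracted from the SPAD data, and the numbers are purely illustrative.

        import numpy as np

        def fret_efficiency(n_acceptor, n_donor, gamma=1.0):
            # Gamma-corrected proximity ratio per burst: E = nA / (nA + gamma * nD).
            n_acceptor = np.asarray(n_acceptor, dtype=float)
            n_donor = np.asarray(n_donor, dtype=float)
            return n_acceptor / (n_acceptor + gamma * n_donor)

        # Hypothetical per-burst photon counts from one donor/acceptor pixel pair.
        print(fret_efficiency([52, 18, 40], [11, 60, 35]))  # higher E -> shorter distance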

  11. Multi-echo acquisition

    PubMed Central

    Posse, Stefan

    2011-01-01

    The rapid development of fMRI was paralleled early on by the adaptation of MR spectroscopic imaging (MRSI) methods to quantify water relaxation changes during brain activation. This review describes the evolution of multi-echo acquisition from high-speed MRSI to multi-echo EPI and beyond. It highlights milestones in the development of multi-echo acquisition methods, such as the discovery of considerable gains in fMRI sensitivity when combining echo images, advances in quantification of the BOLD effect using analytical biophysical modeling and interleaved multi-region shimming. The review conveys the insight gained from combining fMRI and MRSI methods and concludes with recent trends in ultra-fast fMRI, which will significantly increase temporal resolution of multi-echo acquisition. PMID:22056458
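
    One of the milestones mentioned is the sensitivity gain obtained by combining echo images. A common choice is a TE- and T2*-weighted sum; the particular weighting sketched below is an illustrative assumption rather than a prescription taken from the review.

        import numpy as np

        def combine_echoes(echoes, tes, t2star=30.0):
            # BOLD-sensitivity weighted combination with w_n ~ TE_n * exp(-TE_n / T2*),
            # normalized to sum to one.
            # echoes : array of shape (n_echoes, ...) -- one image or time series per TE
            # tes    : echo times in ms; t2star : assumed tissue T2* in ms
            tes = np.asarray(tes, dtype=float)
            w = tes * np.exp(-tes / t2star)
            w /= w.sum()
            return np.tensordot(w, echoes, axes=(0, 0))

        # e.g. combined = combine_echoes(echo_stack, tes=[12.0, 28.0, 44.0])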

  12. High-performance computing — an overview

    NASA Astrophysics Data System (ADS)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  13. LORAKS Makes Better SENSE: Phase-Constrained Partial Fourier SENSE Reconstruction without Phase Calibration

    PubMed Central

    Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P.

    2016-01-01

    Purpose Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. Theory and Methods The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly-accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely-used calibrationless uniformly-undersampled trajectories. Results Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high-acceleration rates relative to existing state-of-the-art reconstruction approaches. Conclusion The SENSE-LORAKS framework provides promising new opportunities for highly-accelerated MRI. PMID:27037836
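
    As background to the combination proposed here, the classical Cartesian SENSE step for uniform undersampling can be written as a small per-pixel least-squares problem. The sketch below shows only that baseline unfolding (not SENSE-LORAKS itself) and assumes coil sensitivity maps are available; names are illustrative.

        import numpy as np

        def sense_unfold(aliased, sens, R):
            # Classical SENSE unfolding for uniform undersampling by R along axis 0.
            # aliased : (n_coils, ny // R, nx) complex aliased coil images
            # sens    : (n_coils, ny, nx)      complex coil sensitivity maps
            n_coils, ny_r, nx = aliased.shape
            ny = ny_r * R
            recon = np.zeros((ny, nx), dtype=complex)
            for y in range(ny_r):
                rows = [(y + k * ny_r) % ny for k in range(R)]   # folded full-FOV rows
                for x in range(nx):
                    S = sens[:, rows, x]                         # (n_coils, R) encoding matrix
                    b = aliased[:, y, x]                         # folded coil signals
                    v, *_ = np.linalg.lstsq(S, b, rcond=None)
                    recon[rows, x] = v
            return recon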

  14. A review of 3D first-pass, whole-heart, myocardial perfusion cardiovascular magnetic resonance.

    PubMed

    Fair, Merlin J; Gatehouse, Peter D; DiBella, Edward V R; Firmin, David N

    2015-08-01

    A comprehensive review is undertaken of the methods available for 3D whole-heart first-pass perfusion (FPP) and their application to date, with particular focus on possible acceleration techniques. Following a summary of the parameters typically desired of 3D FPP methods, the review explains the mechanisms of key acceleration techniques and their potential use in FPP for attaining 3D acquisitions. The mechanisms include rapid sequences, non-Cartesian k-space trajectories, reduced k-space acquisitions, parallel imaging reconstructions and compressed sensing. An attempt is made to explain, rather than simply state, the varying methods with the hope that it will give an appreciation of the different components making up a 3D FPP protocol. Basic estimates demonstrating the required total acceleration factors in typical 3D FPP cases are included, providing context for the extent that each acceleration method can contribute to the required imaging speed, as well as potential limitations in present 3D FPP literature. Although many 3D FPP methods are too early in development for the type of clinical trials required to show any clear benefit over current 2D FPP methods, the review includes the small but growing quantity of clinical research work already using 3D FPP, alongside the more technical work. Broader challenges concerning FPP such as quantitative analysis are not covered, but challenges with particular impact on 3D FPP methods, particularly with regards to motion effects, are discussed along with anticipated future work in the field.

  15. A new variable parallel holes collimator for scintigraphic device with validation method based on Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Trinci, G.; Massari, R.; Scandellari, M.; Boccalini, S.; Costantini, S.; Di Sero, R.; Basso, A.; Sala, R.; Scopinaro, F.; Soluri, A.

    2010-09-01

    The aim of this work is to show a new scintigraphic device able to automatically change the length of its collimator in order to adapt the spatial resolution value to the gamma source distance. This patented technique replaces the need for the collimator change that standard gamma cameras still feature. Monte Carlo simulations represent the best tool in searching for new technological solutions for such an innovative collimation structure. They also provide a valid analysis of gamma camera performance as well as of the advantages and limits of this new solution. Specifically, the Monte Carlo simulations are realized with the GEANT4 (GEometry ANd Tracking) framework, and the specific simulation object is a collimation method based on separate blocks that can be brought closer and farther apart, in order to reach and maintain specific spatial resolution values for all source-detector distances. To verify the accuracy and faithfulness of these simulations, we performed experimental measurements with an identical setup and conditions. This confirms the power of the simulation as an extremely useful tool, especially where new technological solutions need to be studied, tested and analyzed before their practical realization. The final aim of this new collimation system is the improvement of SPECT techniques, with real control of the spatial resolution value during tomographic acquisitions. This principle allowed us to simulate a tomographic acquisition of two capillaries of radioactive solution, in order to verify that they could be clearly distinguished.
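
    The motivation for a variable-length collimator follows the textbook geometric-resolution relation for parallel holes, Rg = d (L + z) / L, with hole diameter d, hole (or effective) length L and source distance z; that relation is an assumption added here, not a formula quoted from the record. Solving it for L gives the length needed to hold a target resolution at a given distance:

        def geometric_resolution(d_hole, length, distance):
            # Parallel-hole collimator geometric resolution: Rg = d * (L + z) / L.
            return d_hole * (length + distance) / length

        def length_for_resolution(d_hole, distance, target_rg):
            # Invert Rg = d * (L + z) / L  ->  L = d * z / (Rg - d).
            return d_hole * distance / (target_rg - d_hole)

        # Hypothetical numbers: 1.5 mm holes, 100 mm source distance, 8 mm target resolution.
        L = length_for_resolution(1.5, 100.0, 8.0)
        print(L, geometric_resolution(1.5, L, 100.0))   # about 23.1 mm and 8.0 mm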

  16. Online measurement for geometrical parameters of wheel set based on structure light and CUDA parallel processing

    NASA Astrophysics Data System (ADS)

    Wu, Kaihua; Shao, Zhencheng; Chen, Nian; Wang, Wenjie

    2018-01-01

    The wearing degree of the wheel set tread is one of the main factors that influence the safety and stability of a running train. The geometrical parameters of interest mainly include flange thickness and flange height. Line-structured laser light was projected onto the wheel tread surface, and the geometrical parameters can be deduced from the profile image. An online image acquisition system was designed based on asynchronous reset of the CCD and a CUDA parallel processing unit; image acquisition was performed in hardware interrupt mode. A high-efficiency parallel segmentation algorithm based on CUDA was proposed. The algorithm first divides the image into smaller squares, and then extracts the target squares by fusing the k-means and STING clustering image segmentation algorithms. Segmentation time is less than 0.97 ms. A considerable acceleration ratio compared with the CPU serial calculation was obtained, which greatly improved the real-time image processing capacity. When the wheel set is running at a limited speed, the system, placed along the railway line, can measure the geometrical parameters automatically. The maximum measuring speed is 120 km/h.
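
    The segmentation step described above (tiling the image into small squares and classifying the target squares) can be illustrated with a CPU stand-in for the CUDA k-means/STING fusion; the sketch below uses only a two-cluster k-means on simple per-tile statistics, and all names are illustrative.

        import numpy as np
        from sklearn.cluster import KMeans

        def segment_tiles(image, tile=16):
            # Split the image into tile x tile squares, describe each square by its
            # mean and variance, and separate bright laser-stripe tiles from the
            # background with 2-cluster k-means.
            h, w = image.shape
            h, w = h - h % tile, w - w % tile
            blocks = image[:h, :w].reshape(h // tile, tile, w // tile, tile).swapaxes(1, 2)
            feats = np.stack([blocks.mean(axis=(2, 3)).ravel(),
                              blocks.var(axis=(2, 3)).ravel()], axis=1)
            labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)
            bright = np.argmax([feats[labels == k, 0].mean() for k in (0, 1)])
            return labels.reshape(h // tile, w // tile) == bright   # True = target tiles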

  17. Diagnostic value of the fluoroscopic triggering 3D LAVA technique for primary liver cancer.

    PubMed

    Shen, Xiao-Yong; Chai, Chun-Hua; Xiao, Wen-Bo; Wang, Qi-Dong

    2010-04-01

    Primary liver cancer (PLC) is one of the most common malignant tumors. Liver acquisition with volume acceleration (LAVA), which allows simultaneous dynamic enhancement of the hepatic parenchyma and vasculature imaging, is of great help in the diagnosis of PLC. This study aimed to evaluate the application of the fluoroscopic triggering 3D LAVA technique in the imaging of PLC and the liver vasculature. The clinical data and imaging findings of 38 adults with PLC (22 men and 16 women; average age 52 years), pathologically confirmed by surgical resection or biopsy, were collected and analyzed. All magnetic resonance images were obtained with a 1.5-T system (General Electric Medical Systems) with an eight-element body array coil and application of the fluoroscopic triggering 3D LAVA technique. Overall image quality was assessed on a 5-point scale by two experienced radiologists. All the nodules and blood vessels were recorded and compared. The diagnostic accuracy and feasibility of LAVA were evaluated. The 38 patients gave high quality images of 72 nodules in the liver for diagnosis. The accuracy of LAVA was 97.2% (70/72), and the coincidence rate between the extent of tumor judged by dynamic enhancement and pathological examination was 87.5% (63/72). On maximum intensity projection reconstructions, nearly all cases gave satisfactory images of branches III and IV of the hepatic artery. Furthermore, small early-stage enhancing hepatic lesions and the parallel portal vein were also well displayed. The LAVA sequence provides good multi-phase dynamic enhancement scanning of hepatic lesions. Combined with conventional scanning technology, LAVA effectively and safely displays focal hepatic lesions and the relationship between tumor and normal tissues, especially blood vessels.

  18. Brain MR imaging at ultra-low radiofrequency power.

    PubMed

    Sarkar, Subhendra N; Alsop, David C; Madhuranthakam, Ananth J; Busse, Reed F; Robson, Philip M; Rofsky, Neil M; Hackney, David B

    2011-05-01

    To explore the lower limits for radiofrequency (RF) power-induced specific absorption rate (SAR) achievable at 1.5 T for brain magnetic resonance (MR) imaging without loss of tissue signal or contrast present in high-SAR clinical imaging in order to create a potentially viable MR method at ultra-low RF power to image tissues containing implanted devices. An institutional review board-approved HIPAA-compliant prospective MR study design was used, with written informed consent from all subjects prior to MR sessions. Seven healthy subjects were imaged prospectively at 1.5 T with ultra-low-SAR optimized three-dimensional (3D) fast spin-echo (FSE) and fluid-attenuated inversion-recovery (FLAIR) T2-weighted sequences and an ultra-low-SAR 3D spoiled gradient-recalled acquisition in the steady state T1-weighted sequence. Corresponding high-SAR two-dimensional (2D) clinical sequences were also performed. In addition to qualitative comparisons, absolute signal-to-noise ratios (SNRs) and contrast-to-noise ratios (CNRs) for multicoil, parallel imaging acquisitions were generated by using a Monte Carlo method for quantitative comparison between ultra-low-SAR and high-SAR results. There were minor to moderate differences in the absolute tissue SNR and CNR values and in qualitative appearance of brain images obtained by using ultra-low-SAR and high-SAR techniques. High-SAR 2D T2-weighted imaging produced slightly higher SNR, while ultra-low-SAR 3D technique not only produced higher SNR for T1-weighted and FLAIR images but also higher CNRs for all three sequences for most of the brain tissues. The 3D techniques adopted here led to a decrease in the absorbed RF power by two orders of magnitude at 1.5 T, and still the image quality was preserved within clinically acceptable imaging times. RSNA, 2011

  19. High-speed three-dimensional measurements with a fringe projection-based optical sensor

    NASA Astrophysics Data System (ADS)

    Bräuer-Burchardt, Christian; Breitbarth, Andreas; Kühmstedt, Peter; Notni, Gunther

    2014-11-01

    An optical three-dimensional (3-D) sensor based on a fringe projection technique that realizes the acquisition of the surface geometry of small objects was developed for highly resolved and ultrafast measurements. It realizes a data acquisition rate up to 60 high-resolution 3-D datasets per second. The high measurement velocity was achieved by consequent fringe code reduction and parallel data processing. The reduction of the length of the fringe image sequence was obtained by omission of the Gray code sequence using the geometric restrictions of the measurement objects and the geometric constraints of the sensor arrangement. The sensor covers three different measurement fields between 20 mm×20 mm and 40 mm×40 mm with a spatial resolution between 10 and 20 μm, respectively. In order to obtain a robust and fast recalibration of the sensor after change of the measurement field, a calibration procedure based on single shot analysis of a special test object was applied which works with low effort and time. The sensor may be used, e.g., for quality inspection of conductor boards or plugs in real-time industrial applications.

  20. Running Neuroimaging Applications on Amazon Web Services: How, When, and at What Cost?

    PubMed Central

    Madhyastha, Tara M.; Koh, Natalie; Day, Trevor K. M.; Hernández-Fernández, Moises; Kelley, Austin; Peterson, Daniel J.; Rajan, Sabreena; Woelfer, Karl A.; Wolf, Jonathan; Grabowski, Thomas J.

    2017-01-01

    The contribution of this paper is to identify and describe current best practices for using Amazon Web Services (AWS) to execute neuroimaging workflows “in the cloud.” Neuroimaging offers a vast set of techniques by which to interrogate the structure and function of the living brain. However, many of the scientists for whom neuroimaging is an extremely important tool have limited training in parallel computation. At the same time, the field is experiencing a surge in computational demands, driven by a combination of data-sharing efforts, improvements in scanner technology that allow acquisition of images with higher image resolution, and by the desire to use statistical techniques that stress processing requirements. Most neuroimaging workflows can be executed as independent parallel jobs and are therefore excellent candidates for running on AWS, but the overhead of learning to do so and determining whether it is worth the cost can be prohibitive. In this paper we describe how to identify neuroimaging workloads that are appropriate for running on AWS, how to benchmark execution time, and how to estimate cost of running on AWS. By benchmarking common neuroimaging applications, we show that cloud computing can be a viable alternative to on-premises hardware. We present guidelines that neuroimaging labs can use to provide a cluster-on-demand type of service that should be familiar to users, and scripts to estimate cost and create such a cluster. PMID:29163119

  1. Landsat real-time processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, E.L.

    A novel method for performing real-time acquisition and processing Landsat/EROS data covers all aspects including radiometric and geometric corrections of multispectral scanner or return-beam vidicon inputs, image enhancement, statistical analysis, feature extraction, and classification. Radiometric transformations include bias/gain adjustment, noise suppression, calibration, scan angle compensation, and illumination compensation, including topography and atmospheric effects. Correction or compensation for geometric distortion includes sensor-related distortions, such as centering, skew, size, scan nonlinearity, radial symmetry, and tangential symmetry. Also included are object image-related distortions such as aspect angle (altitude), scale distortion (altitude), terrain relief, and earth curvature. Ephemeral corrections are also applied to compensate for satellite forward movement, earth rotation, altitude variations, satellite vibration, and mirror scan velocity. Image enhancement includes high-pass, low-pass, and Laplacian mask filtering and data restoration for intermittent losses. Resource classification is provided by statistical analysis including histograms, correlational analysis, matrix manipulations, and determination of spectral responses. Feature extraction includes spatial frequency analysis, which is used in parallel discriminant functions in each array processor for rapid determination. The technique uses integrated parallel array processors that decimate the tasks concurrently under supervision of a control processor. The operator-machine interface is optimized for programming ease and graphics image windowing.

  2. 48 CFR 215.404-1 - Proposal analysis techniques.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... reliability of its estimating and accounting systems. [63 FR 55040, Oct. 14, 1998, as amended at 71 FR 69494... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Proposal analysis techniques. 215.404-1 Section 215.404-1 Federal Acquisition Regulations System DEFENSE ACQUISITION...

  3. Parallel image logical operations using cross correlation

    NASA Technical Reports Server (NTRS)

    Strong, J. P., III

    1972-01-01

    Methods are presented for counting areas in an image in a parallel manner using noncoherent optical techniques. The techniques presented include the Levialdi algorithm for counting, optical techniques for binary operations, and cross-correlation.
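
    A digital analogue of counting by noncoherent optical correlation is to cross-correlate the binary image with a binary template via FFTs and count the positions where the correlation reaches the template's total "on" count; the sketch below is illustrative and is not the Levialdi shrinking algorithm itself.

        import numpy as np
        from scipy.signal import fftconvolve

        def count_pattern(image, template):
            # Cross-correlation via convolution with the flipped template; a location
            # matches when every "on" pixel of the template finds an "on" image pixel.
            corr = fftconvolve(image.astype(float),
                               template[::-1, ::-1].astype(float), mode="valid")
            return int(np.sum(np.isclose(corr, template.sum())))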

  4. A robust multi-shot scan strategy for high-resolution diffusion weighted MRI enabled by multiplexed sensitivity-encoding (MUSE)

    PubMed Central

    Chen, Nan-kuei; Guidon, Arnaud; Chang, Hing-Chiu; Song, Allen W.

    2013-01-01

    Diffusion weighted magnetic resonance imaging (DWI) data have been mostly acquired with single-shot echo-planar imaging (EPI) to minimize motion induced artifacts. The spatial resolution, however, is inherently limited in single-shot EPI, even when the parallel imaging (usually at an acceleration factor of 2) is incorporated. Multi-shot acquisition strategies could potentially achieve higher spatial resolution and fidelity, but they are generally susceptible to motion-induced phase errors among excitations that are exacerbated by diffusion sensitizing gradients, rendering the reconstructed images unusable. It has been shown that shot-to-shot phase variations may be corrected using navigator echoes, but at the cost of imaging throughput. To address these challenges, a novel and robust multi-shot DWI technique, termed multiplexed sensitivity-encoding (MUSE), is developed here to reliably and inherently correct nonlinear shot-to-shot phase variations without the use of navigator echoes. The performance of the MUSE technique is confirmed experimentally in healthy adult volunteers on 3 Tesla MRI systems. This newly developed technique should prove highly valuable for mapping brain structures and connectivities at high spatial resolution for neuroscience studies. PMID:23370063

  5. The Future Combat System: Minimizing Risk While Maximizing Capability

    DTIC Science & Technology

    2000-05-01

    [Figure: vehicle configurations (Conv/Track, Conv/Wheel, Elec/Track, Elec/Wheel) compared across Crew & Misc, Power Mgt, Propulsion, Lethality, and Structure/Surviv categories; Missile ...] The paper also examines the wheeled versus tracked debate and concludes by recommending some of the technologies for further development under a parallel acquisition strategy.

  6. Autocalibrating motion-corrected wave-encoding for highly accelerated free-breathing abdominal MRI.

    PubMed

    Chen, Feiyu; Zhang, Tao; Cheng, Joseph Y; Shi, Xinwei; Pauly, John M; Vasanawala, Shreyas S

    2017-11-01

    To develop a motion-robust wave-encoding technique for highly accelerated free-breathing abdominal MRI. A comprehensive 3D wave-encoding-based method was developed to enable fast free-breathing abdominal imaging: (a) auto-calibration for wave-encoding was designed to avoid extra scan for coil sensitivity measurement; (b) intrinsic butterfly navigators were used to track respiratory motion; (c) variable-density sampling was included to enable compressed sensing; (d) golden-angle radial-Cartesian hybrid view-ordering was incorporated to improve motion robustness; and (e) localized rigid motion correction was combined with parallel imaging compressed sensing reconstruction to reconstruct the highly accelerated wave-encoded datasets. The proposed method was tested on six subjects and image quality was compared with standard accelerated Cartesian acquisition both with and without respiratory triggering. Inverse gradient entropy and normalized gradient squared metrics were calculated, testing whether image quality was improved using paired t-tests. For respiratory-triggered scans, wave-encoding significantly reduced residual aliasing and blurring compared with standard Cartesian acquisition (metrics suggesting P < 0.05). For non-respiratory-triggered scans, the proposed method yielded significantly better motion correction compared with standard motion-corrected Cartesian acquisition (metrics suggesting P < 0.01). The proposed methods can reduce motion artifacts and improve overall image quality of highly accelerated free-breathing abdominal MRI. Magn Reson Med 78:1757-1766, 2017. © 2016 International Society for Magnetic Resonance in Medicine. © 2016 International Society for Magnetic Resonance in Medicine.

  7. Sentinel-1 data massive processing for large scale DInSAR analyses within Cloud Computing environments through the P-SBAS approach

    NASA Astrophysics Data System (ADS)

    Lanari, Riccardo; Bonano, Manuela; Buonanno, Sabatino; Casu, Francesco; De Luca, Claudio; Fusco, Adele; Manunta, Michele; Manzo, Mariarosaria; Pepe, Antonio; Zinno, Ivana

    2017-04-01

    The SENTINEL-1 (S1) mission is designed to provide operational capability for continuous mapping of the Earth thanks to its two polar-orbiting satellites (SENTINEL-1A and B) performing C-band synthetic aperture radar (SAR) imaging. It is, indeed, characterized by enhanced revisit frequency, coverage and reliability for operational services and applications requiring long SAR data time series. Moreover, SENTINEL-1 is specifically oriented to interferometry applications, with stringent requirements on attitude and orbit accuracy, and it is intrinsically characterized by small spatial and temporal baselines. Consequently, SENTINEL-1 data are particularly suitable to be exploited through advanced interferometric techniques such as the well-known DInSAR algorithm referred to as Small BAseline Subset (SBAS), which allows the generation of deformation time series and displacement velocity maps. In this work we present an advanced interferometric processing chain, based on the Parallel SBAS (P-SBAS) approach, for the massive processing of S1 Interferometric Wide Swath (IWS) data aimed at generating deformation time series in an efficient, automatic and systematic way. Such a DInSAR chain is designed to exploit distributed computing infrastructures, and more specifically Cloud Computing environments, to properly deal with the storage and the processing of huge S1 datasets. In particular, since S1 IWS data are acquired with the innovative Terrain Observation with Progressive Scans (TOPS) mode, we could benefit from the structure of S1 data, which are composed of bursts that can be considered as separate acquisitions. Indeed, the processing is intrinsically parallelizable with respect to such independent input data and therefore we basically exploited this coarse granularity parallelization strategy in the majority of the steps of the SBAS processing chain. Moreover, we also implemented more sophisticated parallelization approaches, exploiting both multi-node and multi-core programming techniques. Currently, Cloud Computing environments make available large collections of computing resources and storage that can be effectively exploited through the presented S1 P-SBAS processing chain to carry out interferometric analyses at a very large scale, in reduced time. This also allows us to deal with the problems connected to the use of the S1 P-SBAS chain in operational contexts, related to hazard monitoring and risk prevention and mitigation, where handling large amounts of data represents a challenging task. As a significant experimental result we performed a large spatial scale SBAS analysis relevant to Central and Southern Italy by exploiting the Amazon Web Services Cloud Computing platform. In particular, we processed in parallel 300 S1 acquisitions covering the Italian peninsula from Lazio to Sicily through the presented S1 P-SBAS processing chain, generating 710 interferograms, thus finally obtaining the displacement time series of the whole processed area. This work has been partially supported by the CNR-DPC agreement, the H2020 EPOS-IP project (GA 676564) and the ESA GEP project.

  8. Real-time software receiver

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L. (Inventor); Kintner, Jr., Paul M. (Inventor); Ledvina, Brent M. (Inventor); Powell, Steven P. (Inventor)

    2007-01-01

    A real-time software receiver that executes on a general purpose processor. The software receiver includes data acquisition and correlator modules that perform, in place of hardware correlation, baseband mixing and PRN code correlation using bit-wise parallelism.
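
    Bit-wise parallelism of the kind named here packs one hard-limited sample per bit of a machine word, so a full correlation lag reduces to one XOR plus a population count; the sketch below uses Python integers as the bit vectors and is only a schematic of the idea, not the patented receiver's table layout.

        def pack_bits(chips):
            # Pack a +/-1 (or 0/1) chip sequence into an integer, one bit per chip.
            word = 0
            for i, c in enumerate(chips):
                if c > 0:
                    word |= 1 << i
            return word

        def bitwise_correlation(signal_bits, code_bits, n):
            # For +/-1 sequences: correlation = agreements - disagreements
            #                                 = n - 2 * popcount(signal XOR code).
            disagreements = bin((signal_bits ^ code_bits) & ((1 << n) - 1)).count("1")
            return n - 2 * disagreements

        sig = pack_bits([1, -1, 1, 1, -1, -1, 1, -1])
        prn = pack_bits([1, -1, 1, 1, -1, -1, 1, -1])
        print(bitwise_correlation(sig, prn, 8))   # 8 for identical sequences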

  9. Real-time software receiver

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L. (Inventor); Ledvina, Brent M. (Inventor); Powell, Steven P. (Inventor); Kintner, Jr., Paul M. (Inventor)

    2006-01-01

    A real-time software receiver that executes on a general purpose processor. The software receiver includes data acquisition and correlator modules that perform, in place of hardware correlation, baseband mixing and PRN code correlation using bit-wise parallelism.

  10. VLSI-based video event triggering for image data compression

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  11. Embedded parallel processing based ground control systems for small satellite telemetry

    NASA Technical Reports Server (NTRS)

    Forman, Michael L.; Hazra, Tushar K.; Troendly, Gregory M.; Nickum, William G.

    1994-01-01

    The use of networked terminals which utilize embedded processing techniques results in totally integrated, flexible, high speed, reliable, and scalable systems suitable for telemetry and data processing applications such as mission operations centers (MOC). Synergies of these terminals, coupled with the capability of each terminal to receive incoming data, allow the viewing of any defined display by any terminal from the start of data acquisition. There is no single point of failure (other than with network input) such as exists with configurations where all input data goes through a single front end processor and then to a serial string of workstations. Missions dedicated to NASA's ozone measurements program utilize the methodologies which are discussed, and result in a multimission configuration of low cost, scalable hardware and software which can be run by one flight operations team with low risk.

  12. Variable aperture-based ptychographical iterative engine method

    NASA Astrophysics Data System (ADS)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE and the shape, the size, and the position of the aperture need not be known exactly, this proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in a variety of scientific research.

  13. VLSI-based Video Event Triggering for Image Data Compression

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1994-01-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  14. Wave-CAIPI ViSTa: highly accelerated whole-brain direct myelin water imaging with zero-padding reconstruction.

    PubMed

    Wu, Zhe; Bilgic, Berkin; He, Hongjian; Tong, Qiqi; Sun, Yi; Du, Yiping; Setsompop, Kawin; Zhong, Jianhui

    2018-09-01

    This study introduces a highly accelerated whole-brain direct visualization of short transverse relaxation time component (ViSTa) imaging using a wave controlled aliasing in parallel imaging (CAIPI) technique, for acquisition within a clinically acceptable scan time, with the preservation of high image quality and sufficient spatial resolution, and reduced residual point spread function artifacts. Double inversion RF pulses were applied to preserve the signal from short T1 components for directly extracting myelin water signal in ViSTa imaging. A 2D simultaneous multislice and a 3D acquisition of ViSTa images incorporating wave-encoding were used for data acquisition. Improvements brought by a zero-padding method in wave-CAIPI reconstruction were also investigated. The zero-padding method in wave-CAIPI reconstruction reduced the root-mean-square errors between the wave-encoded and Cartesian gradient echoes for all wave gradient configurations in simulation, and reduced the side-main lobe intensity ratio from 34.5 to 16% in the thin-slab in vivo ViSTa images. In a 4 × acceleration simultaneous-multislice scenario, wave-CAIPI ViSTa achieved negligible g-factors (gmean/gmax = 1.03/1.10), while retaining minimal interslice artifacts. An 8 × accelerated acquisition of 3D wave-CAIPI ViSTa imaging covering the whole brain with 1.1 × 1.1 × 3 mm3 voxel size was achieved within 15 minutes, and only incurred a small g-factor penalty (gmean/gmax = 1.05/1.16). Whole-brain ViSTa images were obtained within 15 minutes with negligible g-factor penalty by using wave-CAIPI acquisition and zero-padding reconstruction. The proposed zero-padding method was shown to be effective in reducing residual point spread function for wave-encoded images, particularly for ViSTa. © 2018 International Society for Magnetic Resonance in Medicine.

  15. 8-Channel acquisition system for Time-Correlated Single-Photon Counting.

    PubMed

    Antonioli, S; Miari, L; Cuccato, A; Crotti, M; Rech, I; Ghioni, M

    2013-06-01

    Nowadays, an increasing number of applications require high-performance analytical instruments capable of detecting the temporal trend of weak and fast light signals with picosecond time resolution. The Time-Correlated Single-Photon Counting (TCSPC) technique is currently one of the preferred solutions when such critical optical signals have to be analyzed and it is fully exploited in biomedical and chemical research fields, as well as in security and space applications. Recent progress in the field of single-photon detector arrays is pushing research towards the development of high performance multichannel TCSPC systems, opening the way to modern time-resolved multi-dimensional optical analysis. In this paper we describe a new 8-channel high-performance TCSPC acquisition system, designed to be compact and versatile, to be used in modern TCSPC measurement setups. We designed a novel integrated circuit including a multichannel Time-to-Amplitude Converter with variable full-scale range, a D/A converter, and a parallel adder stage. The latter is used to adapt each converter output to the input dynamic range of a commercial 8-channel Analog-to-Digital Converter, while the integrated DAC implements the dithering technique with as small an area occupation as possible. The use of this monolithic circuit made the design of a scalable system of very small dimensions (95 × 40 mm) and low power consumption (6 W) possible. Data acquired from the TCSPC measurement are digitally processed and stored inside an FPGA (Field-Programmable Gate Array), while a USB transceiver allows real-time transmission of up to eight TCSPC histograms to a remote PC. Finally, the experimental results demonstrate that the acquisition system performs TCSPC measurements with high conversion rate (up to 5 MHz/channel), extremely low differential nonlinearity (<0.04 peak-to-peak of the time bin width), high time resolution (down to 20 ps Full-Width Half-Maximum), and very low crosstalk between channels.

  16. Multisensory architectures for action-oriented perception

    NASA Astrophysics Data System (ADS)

    Alba, L.; Arena, P.; De Fiore, S.; Listán, J.; Patané, L.; Salem, A.; Scordino, G.; Webb, B.

    2007-05-01

    In order to solve the navigation problem of a mobile robot in an unstructured environment a versatile sensory system and efficient locomotion control algorithms are necessary. In this paper an innovative sensory system for action-oriented perception applied to a legged robot is presented. An important problem we address is how to utilize a large variety and number of sensors, while having systems that can operate in real time. Our solution is to use sensory systems that incorporate analog and parallel processing, inspired by biological systems, to reduce the required data exchange with the motor control layer. In particular, as concerns the visual system, we use the Eye-RIS v1.1 board made by Anafocus, which is based on a fully parallel mixed-signal array sensor-processor chip. The hearing sensor is inspired by the cricket hearing system and allows efficient localization of a specific sound source with a very simple analog circuit. Our robot utilizes additional sensors for touch, posture, load, distance, and heading, and thus requires customized and parallel processing for concurrent acquisition. Therefore a Field Programmable Gate Array (FPGA) based hardware was used to manage the multi-sensory acquisition and processing. This choice was made because FPGAs permit the implementation of customized digital logic blocks that can operate in parallel allowing the sensors to be driven simultaneously. With this approach the multi-sensory architecture proposed can achieve real time capabilities.

  17. LORAKS makes better SENSE: Phase-constrained partial fourier SENSE reconstruction without phase calibration.

    PubMed

    Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P

    2017-03-01

    Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely used calibrationless uniformly undersampled trajectories. Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high-acceleration rates relative to existing state-of-the-art reconstruction approaches. The SENSE-LORAKS framework provides promising new opportunities for highly accelerated MRI. Magn Reson Med 77:1021-1035, 2017. © 2016 International Society for Magnetic Resonance in Medicine. © 2016 International Society for Magnetic Resonance in Medicine.

  18. TDRSS telecommunications system, PN code analysis

    NASA Technical Reports Server (NTRS)

    Dixon, R.; Gold, R.; Kaiser, F.

    1976-01-01

    The pseudo noise (PN) codes required to support the TDRSS telecommunications services are analyzed and the impact of alternate coding techniques on the user transponder equipment, the TDRSS equipment, and all factors that contribute to the acquisition and performance of these telecommunication services is assessed. Possible alternatives to the currently proposed hybrid FH/direct sequence acquisition procedures are considered and compared relative to acquisition time, implementation complexity, operational reliability, and cost. The hybrid FH/direct sequence technique is analyzed and rejected in favor of a recommended approach which minimizes acquisition time and user transponder complexity while maximizing probability of acquisition and overall link reliability.
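
    The PN codes analyzed in such link studies are typically m-sequences from linear feedback shift registers, optionally combined into Gold-type families. The sketch below generates a short m-sequence and an XOR combination with illustrative tap polynomials; the actual TDRSS code polynomials are not given in this record.

        def lfsr_msequence(taps, nbits, seed=1):
            # Fibonacci LFSR: output the LSB, feed back the XOR of the tapped stages
            # (1-based positions); a primitive polynomial gives period 2**nbits - 1.
            state, out = seed, []
            for _ in range((1 << nbits) - 1):
                out.append(state & 1)
                fb = 0
                for t in taps:
                    fb ^= (state >> (t - 1)) & 1
                state = (state >> 1) | (fb << (nbits - 1))
            return out

        a = lfsr_msequence([5, 3], 5)           # x^5 + x^3 + 1 (illustrative)
        b = lfsr_msequence([5, 4, 3, 2], 5)     # x^5 + x^4 + x^3 + x^2 + 1 (illustrative)
        combined = [x ^ y for x, y in zip(a, b)]
        print(len(a), sum(a))                   # 31 chips, 16 ones in an m-sequence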

  19. Optimal exposure techniques for iodinated contrast enhanced breast CT

    NASA Astrophysics Data System (ADS)

    Glick, Stephen J.; Makeev, Andrey

    2016-03-01

    Screening for breast cancer using mammography has been very successful in the effort to reduce breast cancer mortality, and its use has largely resulted in the 30% reduction in breast cancer mortality observed since 1990 [1]. However, diagnostic mammography remains an area of breast imaging that is in great need for improvement. One imaging modality proposed for improving the accuracy of diagnostic workup is iodinated contrast-enhanced breast CT [2]. In this study, a mathematical framework is used to evaluate optimal exposure techniques for contrast-enhanced breast CT. The ideal observer signal-to-noise ratio (i.e., d') figure-of-merit is used to provide a task performance based assessment of optimal acquisition parameters under the assumptions of a linear, shift-invariant imaging system. A parallel-cascade model was used to estimate signal and noise propagation through the detector, and a realistic lesion model with iodine uptake was embedded into a structured breast background. Ideal observer performance was investigated across kVp settings, filter materials, and filter thickness. Results indicated many kVp spectra/filter combinations can improve performance over currently used x-ray spectra.
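
    The figure of merit named here, the ideal-observer SNR d', can be written for a known signal in stationary Gaussian noise as the integral of |Delta S(f)|^2 / NPS(f) over spatial frequency. The discrete sketch below assumes the expected signal-difference image and a matching noise power spectrum are already available; the scaling conventions are one common choice and are not taken from the paper.

        import numpy as np

        def ideal_observer_dprime(delta_signal, nps, pixel_area):
            # Prewhitening ideal observer: d'^2 = sum_f |S(f)|^2 / NPS(f) * dA_f,
            # with S the (continuous-FT scaled) spectrum of the expected signal difference.
            S = np.fft.fft2(delta_signal) * pixel_area
            freq_bin_area = 1.0 / (delta_signal.size * pixel_area)
            d2 = np.sum(np.abs(S) ** 2 / nps) * freq_bin_area
            return np.sqrt(d2)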

  20. Learning-related brain hemispheric dominance in sleeping songbirds.

    PubMed

    Moorman, Sanne; Gobes, Sharon M H; van de Kamp, Ferdinand C; Zandbergen, Matthijs A; Bolhuis, Johan J

    2015-03-12

    There are striking behavioural and neural parallels between the acquisition of speech in humans and song learning in songbirds. In humans, language-related brain activation is mostly lateralised to the left hemisphere. During language acquisition in humans, brain hemispheric lateralisation develops as language proficiency increases. Sleep is important for the formation of long-term memory, in humans as well as in other animals, including songbirds. Here, we measured neuronal activation (as the expression pattern of the immediate early gene ZENK) during sleep in juvenile zebra finch males that were still learning their songs from a tutor. We found that during sleep, there was learning-dependent lateralisation of spontaneous neuronal activation in the caudomedial nidopallium (NCM), a secondary auditory brain region that is involved in tutor song memory, while there was right hemisphere dominance of neuronal activation in HVC (used as a proper name), a premotor nucleus that is involved in song production and sensorimotor learning. Specifically, in the NCM, birds that imitated their tutors well were left dominant, while poor imitators were right dominant, similar to language-proficiency related lateralisation in humans. Given the avian-human parallels, lateralised neural activation during sleep may also be important for speech and language acquisition in human infants.

  1. Learning-related brain hemispheric dominance in sleeping songbirds

    PubMed Central

    Moorman, Sanne; Gobes, Sharon M. H.; van de Kamp, Ferdinand C.; Zandbergen, Matthijs A.; Bolhuis, Johan J.

    2015-01-01

    There are striking behavioural and neural parallels between the acquisition of speech in humans and song learning in songbirds. In humans, language-related brain activation is mostly lateralised to the left hemisphere. During language acquisition in humans, brain hemispheric lateralisation develops as language proficiency increases. Sleep is important for the formation of long-term memory, in humans as well as in other animals, including songbirds. Here, we measured neuronal activation (as the expression pattern of the immediate early gene ZENK) during sleep in juvenile zebra finch males that were still learning their songs from a tutor. We found that during sleep, there was learning-dependent lateralisation of spontaneous neuronal activation in the caudomedial nidopallium (NCM), a secondary auditory brain region that is involved in tutor song memory, while there was right hemisphere dominance of neuronal activation in HVC (used as a proper name), a premotor nucleus that is involved in song production and sensorimotor learning. Specifically, in the NCM, birds that imitated their tutors well were left dominant, while poor imitators were right dominant, similar to language-proficiency related lateralisation in humans. Given the avian-human parallels, lateralised neural activation during sleep may also be important for speech and language acquisition in human infants. PMID:25761654

  2. Applying Parallel Processing Techniques to Tether Dynamics Simulation

    NASA Technical Reports Server (NTRS)

    Wells, B. Earl

    1996-01-01

    The focus of this research has been to determine the effectiveness of applying parallel processing techniques to a sizable real-world problem, the simulation of the dynamics associated with a tether which connects two objects in low earth orbit, and to explore the degree to which the parallelization process can be automated through the creation of new software tools. The goal has been to utilize this specific application problem as a base to develop more generally applicable techniques.

  3. Parallel and fault-tolerant algorithms for hypercube multiprocessors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aykanat, C.

    1988-01-01

    Several techniques for increasing the performance of parallel algorithms on distributed-memory message-passing multi-processor systems are investigated. These techniques are effectively implemented for the parallelization of the Scaled Conjugate Gradient (SCG) algorithm on a hypercube connected message-passing multi-processor. Significant performance improvement is achieved by using these techniques. The SCG algorithm is used for the solution phase of an FE modeling system. Almost linear speed-up is achieved, and it is shown that hypercube topology is scalable for an FE class of problem. The SCG algorithm is also shown to be suitable for vectorization, and near supercomputer performance is achieved on a vector hypercube multiprocessor by exploiting both parallelization and vectorization. Fault-tolerance issues for the parallel SCG algorithm and for the hypercube topology are also addressed.

  4. Access to CAMAC from VxWorks and UNIX in DART

    NASA Astrophysics Data System (ADS)

    Streets, J.; Meadows, J.; Moore, C.; Pordes, R.; Slimmer, D.; Vittone, M.; Stern, E.

    1996-02-01

    As part of the DART Project [Data acquisition for the next Generation Fermilab Fixed Target Experiments] we have developed a package of software for CAMAC access from UNIX and VxWorks platforms, with support for several hardware interfaces. We report on developments for the CES CBD8210 VME to parallel CAMAC, the Hytec VSD2992 VME to serial CAMAC and Jorway 411s SCSI to parallel and serial CAMAC branch drivers, and give a summary of the timings obtained.

  5. Pattern recognition with parallel associative memory

    NASA Technical Reports Server (NTRS)

    Toth, Charles K.; Schenk, Toni

    1990-01-01

    An examination is conducted of the feasibility of searching targets in aerial photographs by means of a parallel associative memory (PAM) that is based on the nearest-neighbor algorithm; the Hamming distance is used as a measure of closeness, in order to discriminate patterns. Attention has been given to targets typically used for ground-control points. The method developed sorts out approximate target positions where precise localizations are needed, in the course of the data-acquisition process. The majority of control points in different images were correctly identified.
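
    The nearest-neighbour lookup under the Hamming distance that the PAM implements in hardware can be mimicked in a few lines of software; the templates and query below are hypothetical bit patterns, packed into integers so that the distance is a popcount of an XOR.

        def hamming(a, b):
            # Hamming distance between equal-length binary patterns packed as integers.
            return bin(a ^ b).count("1")

        def nearest_template(query, templates):
            # Index and distance of the stored pattern closest to the query.
            best = min(range(len(templates)), key=lambda i: hamming(query, templates[i]))
            return best, hamming(query, templates[best])

        # Hypothetical 9-bit templates for two ground-control-point marks.
        templates = [0b010111010, 0b100100111]
        print(nearest_template(0b010111000, templates))   # (0, 1)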

  6. Parallel computing in genomic research: advances and applications

    PubMed Central

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today’s genomic experiments have to process the so-called “biological big data” that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analyses of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. PMID:26604801

  7. Parallel computing in genomic research: advances and applications.

    PubMed

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analyses of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities.

  8. ADAPTIVE REAL-TIME CARDIAC MRI USING PARADISE: VALIDATION BY THE PHYSIOLOGICALLY IMPROVED NCAT PHANTOM

    PubMed Central

    Sharif, Behzad; Bresler, Yoram

    2013-01-01

    Patient-Adaptive Reconstruction and Acquisition Dynamic Imaging with Sensitivity Encoding (PARADISE) is a dynamic MR imaging scheme that optimally combines parallel imaging and model-based adaptive acquisition. In this work, we propose the application of PARADISE to real-time cardiac MRI. We introduce a physiologically improved version of a realistic four-dimensional cardiac-torso (NCAT) phantom, which incorporates natural beat-to-beat heart rate and motion variations. Cardiac cine imaging using PARADISE is simulated and its performance is analyzed by virtue of the improved phantom. Results verify the effectiveness of PARADISE for high resolution un-gated real-time cardiac MRI and its superiority over conventional acquisition methods. PMID:24398475

  9. Impacts of Vocabulary Acquisition Techniques Instruction on Students' Learning

    ERIC Educational Resources Information Center

    Orawiwatnakul, Wiwat

    2011-01-01

    The objectives of this study were to determine how the selected vocabulary acquisition techniques affected the vocabulary ability of 35 students who took EN 111 and to investigate their attitudes towards the techniques instruction. The research study used a one-group pretest and post-test design. The instruments employed were in-class exercises…

  10. Free-breathing diffusion-weighted single-shot echo-planar MR imaging using parallel imaging (GRAPPA 2) and high b value for the detection of primary rectal adenocarcinoma.

    PubMed

    Soyer, Philippe; Lagadec, Matthieu; Sirol, Marc; Dray, Xavier; Duchat, Florent; Vignaud, Alexandre; Fargeaudou, Yann; Placé, Vinciane; Gault, Valérie; Hamzi, Lounis; Pocard, Marc; Boudiaf, Mourad

    2010-02-11

    Our objective was to determine the diagnostic accuracy of a free-breathing diffusion-weighted single-shot echo-planar magnetic resonance imaging (FBDW-SSEPI) technique with parallel imaging and high diffusion factor value (b = 1000 s/mm2) in the detection of primary rectal adenocarcinomas. Thirty-one patients (14M and 17F; mean age 67 years) with histopathologically proven primary rectal adenocarcinomas and 31 patients without rectal malignancies (14M and 17F; mean age 63.6 years) were examined with FBDW-SSEPI (repetition time (TR)/echo time (TE) 3900/91 ms, gradient strength 45 mT/m, acquisition time 2 min) at 1.5 T using generalized autocalibrating partially parallel acquisitions (GRAPPA, acceleration factor 2) and a b value of 1000 s/mm2. Apparent diffusion coefficients (ADCs) of rectal adenocarcinomas and normal rectal wall were measured. FBDW-SSEPI images were evaluated for tumour detection by 2 readers. Sensitivity, specificity, accuracy and Youden score for rectal adenocarcinoma detection were calculated with their 95% confidence intervals (CI) for ADC value measurement and visual image analysis. Rectal adenocarcinomas had significantly lower ADCs (mean 1.036 x 10(-3)+/- 0.107 x 10(-3) mm2/s; median 1.015 x 10(-3) mm2/s; range (0.827-1.239) x 10(-3) mm2/s) compared with the rectal wall of control subjects (mean 1.387 x 10(-3)+/- 0.106 x 10(-3) mm2/s; median 1.385 x 10(-3) mm2/s; range (1.176-1.612) x 10(-3) mm2/s) (p < 0.0001). Using a threshold value < or = 1.240 x 10(-3) mm2/s, all rectal adenocarcinomas were correctly categorized and 100% sensitivity (31/31; 95% CI 95-100%), 94% specificity (31/33; 95% CI 88-100%), 97% accuracy (60/62; 95% CI 92-100%) and Youden index 0.94 were obtained for the diagnosis of rectal adenocarcinoma. FBDW-SSEPI image analysis allowed depiction of all rectal adenocarcinomas but resulted in 2 false-positive findings, yielding 100% sensitivity (31/31; 95% CI 95-100%), 94% specificity (31/33; 95% CI 88-100%), 97% accuracy (60/62; 95% CI 92-100%) and Youden index 0.94 for the diagnosis of primary rectal adenocarcinoma. We can conclude that FBDW-SSEPI using parallel imaging and high b value may be helpful in the detection of primary rectal adenocarcinomas.
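
    The reported sensitivity, specificity, accuracy and Youden index follow directly from the ADC cut-off of <= 1.240 x 10(-3) mm2/s; the sketch below recomputes those metrics from lists of ADC values (the example values are hypothetical, chosen only to fall within the quoted ranges).

        import numpy as np

        def threshold_metrics(adc_tumour, adc_normal, cutoff=1.240e-3):
            # Classify "tumour" when ADC <= cutoff; report sensitivity, specificity,
            # accuracy and the Youden index (sensitivity + specificity - 1).
            adc_tumour, adc_normal = np.asarray(adc_tumour), np.asarray(adc_normal)
            tp = np.sum(adc_tumour <= cutoff)
            fn = adc_tumour.size - tp
            tn = np.sum(adc_normal > cutoff)
            fp = adc_normal.size - tn
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            acc = (tp + tn) / (tp + fn + tn + fp)
            return sens, spec, acc, sens + spec - 1

        # Hypothetical ADC values in mm2/s, within the ranges quoted above.
        print(threshold_metrics([0.83e-3, 1.02e-3, 1.20e-3], [1.18e-3, 1.39e-3, 1.61e-3]))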

  11. Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared-memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs written with compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. Owing to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based OpenMP parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs while also achieving good performance.

  12. Optical distributed sensors for feedback control: Characterization of photorefractive resonator

    NASA Technical Reports Server (NTRS)

    Indebetouw, Guy; Lindner, D. K.

    1992-01-01

    The aim of the project was to explore, define, and assess the possibilities of optical distributed sensing for feedback control. This type of sensor, which may have some impacts in the dynamic control of deformable structures and the monitoring of small displacements, can be divided into data acquisition, data processing, and control design. Analogue optical techniques, because they are noninvasive and afford massive parallelism may play a significant role in the acquisition and the preprocessing of the data for such a sensor. Assessing these possibilities was the aim of the first stage of this project. The scope of the proposed research was limited to: (1) the characterization of photorefractive resonators and the assessment of their possible use as a distributed optical processing element; and (2) the design of a control system utilizing signals from distributed sensors. The results include a numerical and experimental study of the resonator below threshold, an experimental study of the effect of the resonator's transverse confinement on its dynamics above threshold, a numerical study of the resonator above threshold using a modal expansion approach, and the experimental test of this model. A detailed account of each investigation, including methodology and analysis of the results are also included along with reprints of published and submitted papers.

  13. Production of yarns composed of oriented nanofibers for ophthalmological implants

    NASA Astrophysics Data System (ADS)

    Shynkarenko, A.; Klapstova, A.; Krotov, A.; Moucka, M.; Lukas, D.

    2017-10-01

    Parallelized nanofibrous structures are commonly used in the medical sector, especially for ophthalmological implants. In this research, a self-fabricated device is tested for improved collection and twisting of parallel nanofibers. Previously, manual techniques were used to collect the nanofibers and then apply a twist, whereas in our device different parameters can be optimized to obtain parallel nanofibers and to apply further twisting. The device brings automation to the technique of producing parallel fibrous structures for medical applications.

  14. Simplified Parallel Domain Traversal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson III, David J

    2011-01-01

    Many data-intensive scientific analysis techniques require global domain traversal, which over the years has been a bottleneck for efficient parallelization across distributed-memory architectures. Inspired by MapReduce and other simplified parallel programming approaches, we have designed DStep, a flexible system that greatly simplifies efficient parallelization of domain traversal techniques at scale. In order to deliver both simplicity to users as well as scalability on HPC platforms, we introduce a novel two-tiered communication architecture for managing and exploiting asynchronous communication loads. We also integrate our design with advanced parallel I/O techniques that operate directly on native simulation output. We demonstrate DStep by performing teleconnection analysis across ensemble runs of terascale atmospheric CO₂ and climate data, and we show scalability results on up to 65,536 IBM BlueGene/P cores.

  15. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

    With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed, starting from a common specification of the algorithm.
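
    The "full replication" strategy mentioned above can be sketched in a few lines: each worker accumulates its own private copy of the reduction object, so no locking is needed, and the private copies are merged once at the end. This is only a schematic analogue (in Python, with an item-count reduction standing in for a real mining kernel), not the authors' runtime interface.

```python
from collections import Counter
from concurrent.futures import ProcessPoolExecutor

def local_reduction(chunk):
    """Each worker updates a private (fully replicated) reduction object: no locks needed."""
    counts = Counter()
    for transaction in chunk:
        counts.update(transaction)          # e.g. count item occurrences
    return counts

def parallel_count(transactions, n_workers=4):
    chunks = [transactions[i::n_workers] for i in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(local_reduction, chunks)
    total = Counter()
    for partial in partials:                # merge the replicated objects once at the end
        total.update(partial)
    return total

if __name__ == "__main__":
    data = [["a", "b"], ["b", "c"], ["a", "c"], ["a"]] * 1000
    print(parallel_count(data).most_common(3))
```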

  16. High-speed femtosecond pump-probe spectroscopy with a smart pixel detector array.

    PubMed

    Bourquin, S; Prasankumar, R P; Kärtner, F X; Fujimoto, J G; Lasser, T; Salathé, R P

    2003-09-01

    A new femtosecond pump-probe spectroscopy technique is demonstrated that permits the high-speed, parallel acquisition of pump-probe measurements at multiple wavelengths. This is made possible by use of a novel, two-dimensional smart pixel detector array that performs amplitude demodulation in real time on each pixel. This detector array not only achieves sensitivities comparable with lock-in amplification but also simultaneously performs demodulation of probe transmission signals at multiple wavelengths, thus permitting rapid time- and wavelength-resolved femtosecond pump-probe spectroscopy. Measurements on a thin sample of bulk GaAs are performed across 58 simultaneous wavelengths. Differential probe transmission changes as small as approximately 2 × 10⁻⁴ can be measured over a 5-ps delay scan in only approximately 3 min. This technology can be applied to a wide range of pump-probe measurements in condensed matter, chemistry, and biology.
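
    The amplitude demodulation that the smart pixel array performs in hardware can be modelled in software: each pixel's time trace is mixed with in-phase and quadrature references at the modulation frequency and low-pass filtered (here by simple averaging). The sketch below is illustrative only; the sampling rate, modulation frequency and noise level are assumptions.

```python
import numpy as np

def demodulate(trace, f_mod, fs):
    """Software lock-in: mix one pixel's trace with I/Q references and average."""
    t = np.arange(trace.size) / fs
    i = np.mean(trace * np.cos(2 * np.pi * f_mod * t))
    q = np.mean(trace * np.sin(2 * np.pi * f_mod * t))
    return 2.0 * np.hypot(i, q)              # recovered modulation amplitude

fs, f_mod, n = 1.0e5, 1.0e3, 16384           # assumed acquisition parameters
t = np.arange(n) / fs
trace = 2e-4 * np.cos(2 * np.pi * f_mod * t) + np.random.normal(0, 2e-3, n)
print(demodulate(trace, f_mod, fs))          # recovers roughly the 2e-4 amplitude
```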

  17. Variable aperture-based ptychographical iterative engine method.

    PubMed

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step, and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; the technique can therefore potentially be applied in a wide range of scientific research. © 2018 Society of Photo-Optical Instrumentation Engineers (SPIE).
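
    For orientation, the core update of a PIE-type reconstruction can be sketched as follows: the exit wave for the current illumination is propagated to the detector, its modulus is replaced by the measured diffraction amplitude, and the object estimate is corrected from the back-propagated difference. This is the generic PIE-style step, not the authors' modified vaPIE algorithm; the array sizes and update constant are assumptions.

```python
import numpy as np

def pie_update(obj, probe, measured_intensity, alpha=1.0):
    """One simplified PIE-style object update for a single illumination."""
    psi = probe * obj                                    # exit wave estimate
    Psi = np.fft.fft2(psi)                               # propagate to the detector plane
    Psi_c = np.sqrt(measured_intensity) * np.exp(1j * np.angle(Psi))  # impose measured modulus
    psi_c = np.fft.ifft2(Psi_c)                          # back-propagate
    weight = np.conj(probe) / (np.abs(probe) ** 2).max()
    return obj + alpha * weight * (psi_c - psi)

# Toy usage with a synthetic object and a consistent "measurement"
rng = np.random.default_rng(0)
obj_true = np.exp(1j * rng.uniform(0, 1, (64, 64)))
probe = np.ones((64, 64), dtype=complex)                 # stand-in for one aperture setting
intensity = np.abs(np.fft.fft2(probe * obj_true)) ** 2
obj_est = np.ones_like(obj_true)
for _ in range(50):
    obj_est = pie_update(obj_est, probe, intensity)
```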

  18. Silicon photon-counting avalanche diodes for single-molecule fluorescence spectroscopy

    PubMed Central

    Michalet, Xavier; Ingargiola, Antonino; Colyer, Ryan A.; Scalia, Giuseppe; Weiss, Shimon; Maccagnani, Piera; Gulinatti, Angelo; Rech, Ivan; Ghioni, Massimo

    2014-01-01

    Solution-based single-molecule fluorescence spectroscopy is a powerful experimental tool with applications in cell biology, biochemistry and biophysics. The basic feature of this technique is to excite and collect light from a very small volume and work in a low concentration regime resulting in rare burst-like events corresponding to the transit of a single molecule. Detecting photon bursts is a challenging task: the small number of emitted photons in each burst calls for high detector sensitivity. Bursts are very brief, requiring detectors with fast response time and capable of sustaining high count rates. Finally, many bursts need to be accumulated to achieve proper statistical accuracy, resulting in long measurement time unless parallelization strategies are implemented to speed up data acquisition. In this paper we will show that silicon single-photon avalanche diodes (SPADs) best meet the needs of single-molecule detection. We will review the key SPAD parameters and highlight the issues to be addressed in their design, fabrication and operation. After surveying the state-of-the-art SPAD technologies, we will describe our recent progress towards increasing the throughput of single-molecule fluorescence spectroscopy in solution using parallel arrays of SPADs. The potential of this approach is illustrated with single-molecule Förster resonance energy transfer measurements. PMID:25309114

  19. General solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging.

    PubMed

    Nakata, Toshihiko; Ninomiya, Takanori

    2006-10-10

    A general solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging is presented. Phase-modulated heterodyne interference light generated by a linear region of periodic displacement is captured by a charge-coupled device image sensor, in which the interference light is sampled at a sampling rate lower than the Nyquist frequency. The frequencies of the components of the light, such as the sideband and carrier (which include photodisplacement and topography information, respectively), are downconverted and sampled simultaneously based on the integration and sampling effects of the sensor. A general solution of frequency and amplitude in this downconversion is derived by Fourier analysis of the sampling procedure. The optimal frequency condition for the heterodyne beat signal, modulation signal, and sensor gate pulse is derived such that undesirable components are eliminated and each information component is converted into an orthogonal function, allowing each to be discretely reproduced from the Fourier coefficients. The optimal frequency parameters that maximize the sideband-to-carrier amplitude ratio are determined, theoretically demonstrating its high selectivity over 80 dB. Preliminary experiments demonstrate that this technique is capable of simultaneous imaging of reflectivity, topography, and photodisplacement for the detection of subsurface lattice defects at a speed corresponding to an acquisition time of only 0.26 s per 256 x 256 pixel area.
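
    The undersampling frequency conversion exploited here is ordinary aliasing: a tone at frequency f sampled at a rate fs below the Nyquist requirement appears folded to a lower frequency. A small sketch of the folding relation, with made-up numbers:

```python
def aliased_frequency(f, fs):
    """Apparent frequency of a real tone of frequency f when sampled at rate fs."""
    return abs(f - fs * round(f / fs))       # fold into [0, fs/2]

# Example: a 1.01 MHz heterodyne beat sampled by a sensor gated at 50 kHz
print(aliased_frequency(1.01e6, 50e3))       # -> 10000.0 Hz
```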

  20. Optimizing transformations of stencil operations for parallel cache-based architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassetti, F.; Davis, K.

    This paper describes a new technique for optimizing serial and parallel stencil and stencil-like operations for cache-based architectures. The technique takes advantage of the semantic knowledge implicit in stencil-like computations. It is implemented as a source-to-source program transformation; because of its specificity it could not be expected of a conventional compiler. Empirical results demonstrate a uniform factor-of-two speedup. The experiments clearly show the benefits of this technique to be a consequence, as intended, of the reduction in cache misses. The test codes are based on a 5-point stencil obtained by the discretization of the Poisson equation and applied to a two-dimensional uniform grid using the Jacobi method as an iterative solver. Results are presented for a 1-D tiling on a single processor, and in parallel using a 1-D data partition. For the parallel case both blocking and non-blocking communication are tested. The same scheme of experiments has been performed for the 2-D tiling case; however, the 2-D partitioning is not discussed here, so the parallel case handled for 2-D is 2-D tiling with 1-D data partitioning.
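
    The cache-blocking idea can be illustrated with a 1-D tiling of a Jacobi sweep over a 5-point stencil: the grid is traversed tile by tile so that each tile's working set stays in cache. The tile size and grid are arbitrary, and this is only the pattern such a transformation produces, not the authors' source-to-source tool.

```python
import numpy as np

def jacobi_sweep_tiled(u, f, h, tile=64):
    """One Jacobi iteration of the 2-D Poisson problem, traversed in 1-D row tiles."""
    new = u.copy()
    n = u.shape[0]
    for r0 in range(1, n - 1, tile):                      # 1-D tiling over rows
        r1 = min(r0 + tile, n - 1)
        new[r0:r1, 1:-1] = 0.25 * (u[r0-1:r1-1, 1:-1] + u[r0+1:r1+1, 1:-1] +
                                   u[r0:r1, :-2] + u[r0:r1, 2:] - h * h * f[r0:r1, 1:-1])
    return new

n, h = 256, 1.0 / 255
u, f = np.zeros((n, n)), np.ones((n, n))
for _ in range(10):
    u = jacobi_sweep_tiled(u, f, h)
```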

  1. Whole-body nonenhanced PET/MR versus PET/CT in the staging and restaging of cancers: preliminary observations.

    PubMed

    Huellner, Martin W; Appenzeller, Philippe; Kuhn, Félix P; Husmann, Lars; Pietsch, Carsten M; Burger, Irene A; Porto, Miguel; Delso, Gaspar; von Schulthess, Gustav K; Veit-Haibach, Patrick

    2014-12-01

    To assess the diagnostic performance of whole-body non-contrast material-enhanced positron emission tomography (PET)/magnetic resonance (MR) imaging and PET/computed tomography (CT) for staging and restaging of cancers and provide guidance for modality and sequence selection. This study was approved by the institutional review board and national government authorities. One hundred six consecutive patients (median age, 68 years; 46 female and 60 male patients) referred for staging or restaging of oncologic malignancies underwent whole-body imaging with a sequential trimodality PET/CT/MR system. The MR protocol included short inversion time inversion-recovery (STIR), Dixon-type liver accelerated volume acquisition (LAVA; GE Healthcare, Waukesha, Wis), and respiratory-gated periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER; GE Healthcare) sequences. Primary tumors (n = 43), local lymph node metastases (n = 74), and distant metastases (n = 66) were evaluated for conspicuity (scored 0-4), artifacts (scored 0-2), and reader confidence on PET/CT and PET/MR images. Subanalysis for lung lesions (n = 46) was also performed. Relevant incidental findings with both modalities were compared. Interreader agreement was analyzed with intraclass correlation coefficients and κ statistics. Lesion conspicuity, image artifacts, and incidental findings were analyzed with nonparametric tests. Primary tumors were less conspicuous on STIR (3.08, P = .016) and LAVA (2.64, P = .002) images than on CT images (3.49), while findings with the PROPELLER sequence (3.70, P = .436) were comparable to those at CT. In distant metastases, the PROPELLER sequence (3.84) yielded better results than CT (2.88, P < .001). Subanalysis for lung lesions yielded similar results (primary lung tumors: CT, 3.71; STIR, 3.32 [P = .014]; LAVA, 2.52 [P = .002]; PROPELLER, 3.64 [P = .546]). Readers classified lesions more confidently with PET/MR than PET/CT. However, PET/CT showed more incidental findings than PET/MR (P = .039), especially in the lung (P < .001). MR images had more artifacts than CT images. PET/MR performs comparably to PET/CT in whole-body oncology and neoplastic lung disease, with the use of appropriate sequences. Further studies are needed to define regionalized PET/MR protocols with sequences tailored to specific tumor entities. © RSNA, 2014 Online supplemental material is available for this article.

  2. Traffic Simulations on Parallel Computers Using Domain Decomposition Techniques

    DOT National Transportation Integrated Search

    1995-01-01

    Large scale simulations of Intelligent Transportation Systems (ITS) can only be achieved by using the computing resources offered by parallel computing architectures. Domain decomposition techniques are proposed which allow the performance of traffic...

  3. A Markov chain technique for determining the acquisition behavior of a digital tracking loop

    NASA Technical Reports Server (NTRS)

    Chadwick, H. D.

    1972-01-01

    An iterative procedure is presented for determining the acquisition behavior of discrete or digital implementations of a tracking loop. The technique is based on the theory of Markov chains and provides the cumulative probability of acquisition in the loop as a function of time in the presence of noise and a given set of initial condition probabilities. A digital second-order tracking loop to be used in the Viking command receiver for continuous tracking of the command subcarrier phase was analyzed using this technique, and the results agree closely with experimental data.
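
    The procedure amounts to propagating a probability vector through the loop's one-step transition matrix and reading off the mass that has reached the locked (absorbing) state at each step. A minimal sketch with a made-up three-state chain (the transition probabilities are purely illustrative):

```python
import numpy as np

# Hypothetical one-step transition matrix: 0 = searching, 1 = near lock, 2 = locked (absorbing)
T = np.array([[0.70, 0.25, 0.05],
              [0.10, 0.60, 0.30],
              [0.00, 0.00, 1.00]])

p = np.array([1.0, 0.0, 0.0])    # initial-condition probabilities
for step in range(1, 21):
    p = p @ T                    # advance the chain by one loop update
    print(step, p[2])            # cumulative probability of acquisition by this step
```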

  4. Putting It All Together.

    ERIC Educational Resources Information Center

    McNamara, Elizabeth T.; Grant, Cathy Miles; Wasser, Judith Davidson

    1998-01-01

    Discusses the parallel between the rapid increase in the acquisition of computer technology and electronic networks by schools and systemic reform movements. Provides some insight on building a school and the community planning process to support technology implementation, connecting content to technology, professional development, and training…

  5. Exploiting Symmetry on Parallel Architectures.

    NASA Astrophysics Data System (ADS)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  6. Feasibility of through-time spiral generalized autocalibrating partial parallel acquisition for low latency accelerated real-time MRI of speech.

    PubMed

    Lingala, Sajan Goud; Zhu, Yinghua; Lim, Yongwan; Toutios, Asterios; Ji, Yunhua; Lo, Wei-Ching; Seiberlich, Nicole; Narayanan, Shrikanth; Nayak, Krishna S

    2017-12-01

    To evaluate the feasibility of through-time spiral generalized autocalibrating partial parallel acquisition (GRAPPA) for low-latency accelerated real-time MRI of speech. Through-time spiral GRAPPA (spiral GRAPPA), a fast linear reconstruction method, is applied to spiral (k-t) data acquired from an eight-channel custom upper-airway coil. Fully sampled data were retrospectively down-sampled to evaluate spiral GRAPPA at undersampling factors R = 2 to 6. Pseudo-golden-angle spiral acquisitions were used for prospective studies. Three subjects were imaged while performing a range of speech tasks that involved rapid articulator movements, including fluent speech and beat-boxing. Spiral GRAPPA was compared with view sharing, and a parallel imaging and compressed sensing (PI-CS) method. Spiral GRAPPA captured spatiotemporal dynamics of vocal tract articulators at undersampling factors ≤4. Spiral GRAPPA at 18 ms/frame and 2.4 mm²/pixel outperformed view sharing in depicting rapidly moving articulators. Spiral GRAPPA and PI-CS provided equivalent temporal fidelity. Reconstruction latency per frame was 14 ms for view sharing and 116 ms for spiral GRAPPA, using a single processor. Spiral GRAPPA kept up with the MRI data rate of 18 ms/frame with eight processors. PI-CS required 17 minutes to reconstruct 5 seconds of dynamic data. Spiral GRAPPA enabled 4-fold accelerated real-time MRI of speech with a low reconstruction latency. This approach is applicable to a wide range of speech RT-MRI experiments that benefit from real-time feedback while visualizing rapid articulator movement. Magn Reson Med 78:2275-2282, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
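
    In through-time GRAPPA the interpolation weights for each k-space location are calibrated by least squares over many fully sampled repetitions of the same trajectory. A heavily simplified sketch of that weight fit is shown below; random numbers stand in for the calibration frames, the array shapes are assumptions, and the spiral geometry is ignored.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rep, n_coils, n_src = 40, 8, 3       # calibration repetitions, coils, source points per kernel

# Source matrix: for each repetition, the acquired neighbouring k-space points from all coils
S = rng.standard_normal((n_rep, n_coils * n_src)) + 1j * rng.standard_normal((n_rep, n_coils * n_src))
# Targets: the skipped k-space point in each coil, known in the calibration data
t = rng.standard_normal((n_rep, n_coils)) + 1j * rng.standard_normal((n_rep, n_coils))

# Least-squares GRAPPA weights for this k-space location
w, *_ = np.linalg.lstsq(S, t, rcond=None)

# During accelerated imaging the missing point would be synthesised from its neighbours:
acquired_neighbours = S[0]             # stand-in for one undersampled frame
estimate = acquired_neighbours @ w
```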

  7. Design and DSP implementation of star image acquisition and star point fast acquiring and tracking

    NASA Astrophysics Data System (ADS)

    Zhou, Guohui; Wang, Xiaodong; Hao, Zhihang

    2006-02-01

    A star sensor is a special high-accuracy photoelectric sensor, and attitude acquisition time is an important performance index of a star sensor. In this paper, the design target is a dynamic performance of 10 attitude samples per second. On the basis of analyzing the CCD signal timing and the star image processing, a new design and a special parallel architecture for improving star image processing are presented. In the design, the operation of moving the data in expanded windows containing stars to the on-chip memory of the DSP is scheduled in the inactive period of the CCD frame signal. While the CCD is saving the star image to memory, the DSP processes the data already in its on-chip memory; this parallelism greatly improves the efficiency of processing. The scheme proposed here results in enormous savings of the memory normally required. In the scheme, the DSP HOLD mode and CPLD technology are used to implement a memory shared between the CCD and the DSP. The efficiency of processing is demonstrated in numerical tests: in the star acquisition stage, the five brightest stars are acquired in only 3.5 ms; in the star tracking stage, the data in the five expanded windows containing stars are moved into the internal memory of the DSP in 43 µs, and the five star coordinates are obtained in 1.6 ms.
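
    The window-based star extraction can be sketched as follows: take a small window around each of the brightest pixels and compute an intensity-weighted centroid inside it, which is essentially the computation the DSP performs on the data moved into its on-chip memory. The window size and star count here are assumptions.

```python
import numpy as np

def star_centroids(image, n_stars=5, half=4):
    """Centroid the n_stars brightest windows of a star image (simplified sketch)."""
    img = image.astype(float).copy()
    centroids = []
    for _ in range(n_stars):
        r, c = np.unravel_index(np.argmax(img), img.shape)     # brightest remaining pixel
        r0, r1 = max(r - half, 0), min(r + half + 1, img.shape[0])
        c0, c1 = max(c - half, 0), min(c + half + 1, img.shape[1])
        win = img[r0:r1, c0:c1]
        ys, xs = np.mgrid[r0:r1, c0:c1]
        total = win.sum()
        centroids.append((float((ys * win).sum() / total), float((xs * win).sum() / total)))
        img[r0:r1, c0:c1] = 0.0                                # suppress this star and repeat
    return centroids
```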

  8. Minimum envelope roughness pulse design for reduced amplifier distortion in parallel excitation.

    PubMed

    Grissom, William A; Kerr, Adam B; Stang, Pascal; Scott, Greig C; Pauly, John M

    2010-11-01

    Parallel excitation uses multiple transmit channels and coils, each driven by independent waveforms, to afford the pulse designer an additional spatial encoding mechanism that complements gradient encoding. In contrast to parallel reception, parallel excitation requires individual power amplifiers for each transmit channel, which can be cost prohibitive. Several groups have explored the use of low-cost power amplifiers for parallel excitation; however, such amplifiers commonly exhibit nonlinear memory effects that distort radio frequency pulses. This is especially true for pulses with rapidly varying envelopes, which are common in parallel excitation. To overcome this problem, we introduce a technique for parallel excitation pulse design that yields pulses with smoother envelopes. We demonstrate experimentally that pulses designed with the new technique suffer less amplifier distortion than unregularized pulses and pulses designed with conventional regularization.
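
    One way to picture the regularization described above is as an extra penalty on the roughness of the RF waveform added to the usual least-squares excitation design, trading a little excitation accuracy for a smoother envelope that inexpensive amplifiers can reproduce. The sketch below is a schematic analogue only, with a generic system matrix and a first-difference penalty; all dimensions and the weighting are assumptions rather than the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_space, n_time = 200, 128
A = rng.standard_normal((n_space, n_time))   # stand-in for the small-tip system matrix
d = rng.standard_normal(n_space)             # stand-in for the desired excitation pattern

# First-difference operator penalising rapid sample-to-sample changes in the pulse
D = np.eye(n_time, k=1)[:-1] - np.eye(n_time)[:-1]
lam = 5.0                                    # roughness weight (assumed)

# Regularised least squares: (A^T A + lam * D^T D) b = A^T d
b_smooth = np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ d)
b_plain = np.linalg.lstsq(A, d, rcond=None)[0]
print(np.abs(np.diff(b_smooth)).mean(), np.abs(np.diff(b_plain)).mean())
```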

  9. Code Optimization and Parallelization on the Origins: Looking from Users' Perspective

    NASA Technical Reports Server (NTRS)

    Chang, Yan-Tyng Sherry; Thigpen, William W. (Technical Monitor)

    2002-01-01

    Parallel machines are becoming the main compute engines for high performance computing. Despite their increasing popularity, it is still a challenge for most users to learn the basic techniques to optimize/parallelize their codes on such platforms. In this paper, we present some experiences on learning these techniques for the Origin systems at the NASA Advanced Supercomputing Division. Emphasis of this paper will be on a few essential issues (with examples) that general users should master when they work with the Origins as well as other parallel systems.

  10. Parallel deterioration to language processing in a bilingual speaker.

    PubMed

    Druks, Judit; Weekes, Brendan Stuart

    2013-01-01

    The convergence hypothesis [Green, D. W. (2003). The neural basis of the lexicon and the grammar in L2 acquisition: The convergence hypothesis. In R. van Hout, A. Hulk, F. Kuiken, & R. Towell (Eds.), The interface between syntax and the lexicon in second language acquisition (pp. 197-218). Amsterdam: John Benjamins] assumes that the neural substrates of language representations are shared between the languages of a bilingual speaker. One prediction of this hypothesis is that neurodegenerative disease should produce parallel deterioration to lexical and grammatical processing in bilingual aphasia. We tested this prediction with a late bilingual Hungarian (first language, L1)-English (second language, L2) speaker J.B. who had nonfluent progressive aphasia (NFPA). J.B. had acquired L2 in adolescence but was premorbidly proficient and used English as his dominant language throughout adult life. Our investigations showed comparable deterioration to lexical and grammatical knowledge in both languages during a one-year period. Parallel deterioration to language processing in a bilingual speaker with NFPA challenges the assumption that L1 and L2 rely on different brain mechanisms as assumed in some theories of bilingual language processing [Ullman, M. T. (2001). The neural basis of lexicon and grammar in first and second language: The declarative/procedural model. Bilingualism: Language and Cognition, 4(1), 105-122].

  11. Quasi-parallel precession diffraction: Alignment method for scanning transmission electron microscopes.

    PubMed

    Plana-Ruiz, S; Portillo, J; Estradé, S; Peiró, F; Kolb, Ute; Nicolopoulos, S

    2018-06-06

    A general method to set illuminating conditions for selectable beam convergence and probe size is presented in this work for Transmission Electron Microscopes (TEM) fitted with µs/pixel fast beam scanning control, (S)TEM, and an annular dark field detector. The case of interest of beam convergence and probe size, which enables diffraction pattern indexation, is then used as a starting point in this work to add 100 Hz precession to the beam while imaging the specimen at a fast rate and keeping the projector system in diffraction mode. The described systematic alignment method for the adjustment of beam precession on the specimen plane while scanning at fast rates is mainly based on the sharpness of the precessed STEM image. The complete alignment method for parallel condition and precession, Quasi-Parallel PED-STEM, is presented in block diagram scheme, as it has been tested on a variety of instruments. The immediate application of this methodology is that it renders the TEM column ready for the acquisition of Precessed Electron Diffraction Tomographies (EDT) as well as for the acquisition of slow Precessed Scanning Nanometer Electron Diffraction (SNED). Examples of the quality of the Precessed Electron Diffraction (PED) patterns and PED-STEM alignment images are presented with corresponding probe sizes and convergence angles. Copyright © 2018. Published by Elsevier B.V.

  12. Development of an Integrated Data Acquisition System for a Small Flight Probe

    NASA Technical Reports Server (NTRS)

    Swanson, Gregory T.; Empey, Daniel M.; Skokova, Kristina A.; Venkatapathy, Ethiraj

    2012-01-01

    In support of the SPRITE concept, an integrated data acquisition system has been developed and fabricated for preliminary testing. The data acquisition system has been designed to condition traditional thermal protection system sensors, store their data to an on-board memory card, and in parallel, telemeter to an external system. In the fall of 2010, this system was integrated into a 14 in. diameter, 45 degree sphere cone probe instrumented with thermal protection system sensors. This system was then tested at the NASA Ames Research Center Aerodynamic Heating Facility's arc jet at approximately 170 W/sq. cm. The first test in December 2010 highlighted hardware design issues that were redesigned and implemented leading to a successful test in February 2011.

  13. A role for the developing lexicon in phonetic category acquisition

    PubMed Central

    Feldman, Naomi H.; Griffiths, Thomas L.; Goldwater, Sharon; Morgan, James L.

    2013-01-01

    Infants segment words from fluent speech during the same period when they are learning phonetic categories, yet accounts of phonetic category acquisition typically ignore information about the words in which sounds appear. We use a Bayesian model to illustrate how feedback from segmented words might constrain phonetic category learning by providing information about which sounds occur together in words. Simulations demonstrate that word-level information can successfully disambiguate overlapping English vowel categories. Learning patterns in the model are shown to parallel human behavior from artificial language learning tasks. These findings point to a central role for the developing lexicon in phonetic category acquisition and provide a framework for incorporating top-down constraints into models of category learning. PMID:24219848

  14. AFFINE-CORRECTED PARADISE: FREE-BREATHING PATIENT-ADAPTIVE CARDIAC MRI WITH SENSITIVITY ENCODING

    PubMed Central

    Sharif, Behzad; Bresler, Yoram

    2013-01-01

    We propose a real-time cardiac imaging method with parallel MRI that allows for free breathing during imaging and does not require cardiac or respiratory gating. The method is based on the recently proposed PARADISE (Patient-Adaptive Reconstruction and Acquisition Dynamic Imaging with Sensitivity Encoding) scheme. The new acquisition method adapts the PARADISE k-t space sampling pattern according to an affine model of the respiratory motion. The reconstruction scheme involves multi-channel time-sequential imaging with time-varying channels. All model parameters are adapted to the imaged patient as part of the experiment and drive both data acquisition and cine reconstruction. Simulated cardiac MRI experiments using the realistic NCAT phantom show high quality cine reconstructions and robustness to modeling inaccuracies. PMID:24390159

  15. Further Evidence on the Effect of Acquisition Policy and Process on Cost Growth

    DTIC Science & Technology

    2016-04-30

    bust periods. A complete summary also would need to take into account parallel analyses for the boom periods and the comparisons of cost growth in... Thirteenth Annual Acquisition Research Symposium, Wednesday Sessions, Volume I. Further Evidence on the Effect of Acquisition Policy and Process on Cost ... Goeller, Defense Acquisition Analyst, Institute for Defense Analyses; Stanley Horowitz, Assistant Director, Cost Analysis and Research Division

  16. Revise and resubmit: How real-time parsing limitations influence grammar acquisition

    PubMed Central

    Pozzan, Lucia; Trueswell, John C.

    2015-01-01

    We present the results from a three-day artificial language learning study on adults. The study examined whether sentence-parsing limitations, in particular, difficulties revising initial syntactic/semantic commitments during comprehension, shape learners’ ability to acquire a language. Findings show that both comprehension and production of morphology pertaining to sentence argument structure are delayed when this morphology consistently appears at the end, rather than at the beginning, of sentences in otherwise identical grammatical systems. This suggests that real-time processing constraints impact acquisition; morphological cues that tend to guide linguistic analyses are easier to learn than cues that revise these analyses. Parallel performance in production and comprehension indicates that parsing constraints affect grammatical acquisition, not just real-time commitments. Properties of the linguistic system (e.g., ordering of cues within a sentence) interact with the properties of the cognitive system (cognitive control and conflict-resolution abilities) and together affect language acquisition. PMID:26026607

  17. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed: it uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities, and the angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computations, the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm were implemented in parallel. The DAEs and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
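
    For reference, the preconditioned conjugate gradient iteration applied to the Schur-complement system can be sketched in serial form with a Jacobi preconditioner; the parallel version distributes the matrix-vector product and the inner products across processors. This is the textbook algorithm, not the thesis implementation.

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=500):
    """Jacobi-preconditioned conjugate gradient for a symmetric positive-definite A."""
    M_inv = 1.0 / np.diag(A)                 # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system standing in for a Schur-complement matrix
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)
b = rng.standard_normal(50)
print(np.linalg.norm(A @ pcg(A, b) - b))
```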

  18. Climate Change: A "Green" Approach to Teaching Contemporary Germany

    ERIC Educational Resources Information Center

    Melin, Charlotte

    2013-01-01

    This article describes a newly designed upper division German language course, "Contemporary Germany: Food, Energy Politics," and two sampling methods of assessment for measuring parallel gains in German skills and sustainable development (SD) thinking. Second Language Acquisition (SLA) informed course design, key assignments, and…

  19. Phonological Sensitivity: A Quasi-Parallel Progression of Word Structure Units and Cognitive Operations.

    ERIC Educational Resources Information Center

    Anthony, Jason L.; Lonigan, Christopher J.; Driscoll, Kimberly; Phillips, Beth M.; Burgess, Stephen R.

    2003-01-01

    Investigates the order of acquisition of phonological sensitivity skills among preschool and kindergarten children. Supports a developmental conceptualization of phonological sensitivity. Discusses findings in relation to their implications for improving assessment, early literacy instruction, and prevention of reading difficulties. (SG)

  20. Symbiotic Nitrogen Fixation in the Fungus Gardens of Leaf-Cutter Ants

    USDA-ARS?s Scientific Manuscript database

    Bacteria-mediated acquisition of atmospheric dinitrogen by plants serves as a critical nitrogen source in terrestrial ecosystems, and through its key role in agriculture, this phenomenon has shaped the development of human civilizations. Here we show that, paralleling human agriculture, cultivation ...

  1. Parallel Acquisition of Awareness and Differential Delay Eyeblink Conditioning

    ERIC Educational Resources Information Center

    Weidemann, Gabrielle; Antees, Cassandra

    2012-01-01

    There is considerable debate about whether differential delay eyeblink conditioning can be acquired without awareness of the stimulus contingencies. Previous investigations of the relationship between differential-delay eyeblink conditioning and awareness of the stimulus contingencies have assessed awareness after the conditioning session was…

  2. Japan Report, Science and Technology.

    DTIC Science & Technology

    1987-04-10

    than 0.1 µg/ml. However, several Candida genus yeasts, as well as C. maltosa, possess a cycloheximide resistance and in the case of C. maltosa... Suppose acquisition of the technique is the objective; whether acquisition of such stereotyped techniques is meaningful or not is questionable

  3. [Three-dimensional parallel collagen scaffold promotes tendon extracellular matrix formation].

    PubMed

    Zheng, Zefeng; Shen, Weiliang; Le, Huihui; Dai, Xuesong; Ouyang, Hongwei; Chen, Weishan

    2016-03-01

    To investigate the effects of a three-dimensional parallel collagen scaffold on the cell shape, arrangement and extracellular matrix formation of tendon stem cells. The parallel collagen scaffold was fabricated by a unidirectional freezing technique, while the random collagen scaffold was fabricated by a freeze-drying technique. The effects of the two scaffolds on cell shape and extracellular matrix formation were investigated in vitro by seeding tendon stem/progenitor cells and in vivo by ectopic implantation. Parallel and random collagen scaffolds were produced successfully; the parallel collagen scaffold was more akin to tendon than the random collagen scaffold. Tendon stem/progenitor cells were spindle-shaped and uniformly oriented in the parallel collagen scaffold, while cells on the random collagen scaffold had a disordered orientation. Two weeks after ectopic implantation, cells had nearly the same orientation as the collagen substrate. In the parallel collagen scaffold, cells were arranged in parallel, and more spindle-shaped cells were observed; by contrast, cells in the random collagen scaffold were disordered. The parallel collagen scaffold can induce cells into a spindly, parallel arrangement and promote parallel extracellular matrix formation, while the random collagen scaffold induces a random cell arrangement. The results indicate that the parallel collagen scaffold is an ideal structure to promote tendon repair.

  4. Software Defined GPS API: Development and Implementation of GPS Correlator Architectures Using MATLAB with Focus on SDR Implementations

    DTIC Science & Technology

    2014-05-18

    It has been the team mission to implement new and improved techniques with the intention of offering improved software libraries for GNSS signal acquisition…

  5. 48 CFR 1631.203-70 - Allocation techniques.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... among cost centers at the initial entry into the cost accounting system shall be made in compliance with... 48 Federal Acquisition Regulations System 6 (2011-10-01). Allocation techniques. 1631.203-70 Section 1631.203-70 Federal Acquisition Regulations System OFFICE OF PERSONNEL MANAGEMENT...

  6. A three-way parallel ICA approach to analyze links among genetics, brain structure and brain function.

    PubMed

    Vergara, Victor M; Ulloa, Alvaro; Calhoun, Vince D; Boutte, David; Chen, Jiayu; Liu, Jingyu

    2014-09-01

    Multi-modal data analysis techniques, such as parallel independent component analysis (pICA), are essential in neuroscience, medical imaging and genetic studies. The pICA algorithm allows the simultaneous decomposition of up to two data modalities, achieving better performance than separate ICA decompositions and enabling the discovery of links between modalities. However, advances in data acquisition techniques facilitate the collection of more than two data modalities from each subject. Examples of commonly measured modalities include genetic information, structural magnetic resonance imaging (MRI) and functional MRI. In order to take full advantage of the available data, this work extends the pICA approach to incorporate three modalities in one comprehensive analysis. Simulations demonstrate the three-way pICA performance in identifying pairwise links between modalities and estimating independent components which more closely resemble the true sources than components found by pICA or separate ICA analyses. In addition, the three-way pICA algorithm is applied to real experimental data obtained from a study that investigated genetic effects on alcohol dependence. The data modalities considered include functional MRI (contrast images during an alcohol exposure paradigm), gray matter concentration images from structural MRI and genetic single nucleotide polymorphisms (SNP). The three-way pICA approach identified links between a SNP component (pointing to brain function and mental disorder associated genes, including BDNF, GRIN2B and NRG1), a functional component related to increased activation in the precuneus area, and a gray matter component comprising part of the default mode network and the caudate. Although such findings need further verification, the simulation and in-vivo results validate the three-way pICA algorithm presented here as a useful tool in biomedical data fusion applications. Copyright © 2014 Elsevier Inc. All rights reserved.
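
    A greatly simplified sketch of the linking idea (not the three-way pICA algorithm itself, which couples the decompositions during optimization): decompose each modality separately with ICA and look for pairs of components whose subject-wise mixing coefficients are correlated. The data shapes are invented, and scikit-learn's FastICA stands in for the ICA step.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_subjects, n_comp = 60, 8
fmri = rng.standard_normal((n_subjects, 5000))   # stand-in: flattened contrast images
snps = rng.standard_normal((n_subjects, 2000))   # stand-in: SNP data

# Separate decompositions; mixing_ holds the per-subject loadings of each component
load_f = FastICA(n_components=n_comp, random_state=0).fit(fmri.T).mixing_
load_g = FastICA(n_components=n_comp, random_state=0).fit(snps.T).mixing_

# Cross-modality links: correlation between subject loadings of every component pair
links = np.corrcoef(load_f.T, load_g.T)[:n_comp, n_comp:]
i, j = np.unravel_index(np.abs(links).argmax(), links.shape)
print(f"strongest fMRI-SNP link: components {i} and {j}, r = {links[i, j]:.2f}")
```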

  7. Frame Rate Considerations for Real-Time Abdominal Acoustic Radiation Force Impulse Imaging

    PubMed Central

    Fahey, Brian J.; Palmeri, Mark L.; Trahey, Gregg E.

    2008-01-01

    With the advent of real-time Acoustic Radiation Force Impulse (ARFI) imaging, elevated frame rates are both desirable and relevant from a clinical perspective. However, fundamental limitations on frame rates are imposed by thermal safety concerns related to incident radiation force pulses. Abdominal ARFI imaging utilizes a curvilinear scanning geometry that results in markedly different tissue heating patterns than those previously studied for linear arrays or mechanically-translated concave transducers. Finite Element Method (FEM) models were used to simulate these tissue heating patterns and to analyze the impact of tissue heating on frame rates available for abdominal ARFI imaging. A perfusion model was implemented to account for cooling effects due to blood flow and frame rate limitations were evaluated in the presence of normal, reduced and negligible tissue perfusions. Conventional ARFI acquisition techniques were also compared to ARFI imaging with parallel receive tracking in terms of thermal efficiency. Additionally, thermocouple measurements of transducer face temperature increases were acquired to assess the frame rate limitations imposed by cumulative heating of the imaging array. Frame rates sufficient for many abdominal imaging applications were found to be safely achievable utilizing available ARFI imaging techniques. PMID:17521042

  8. Current Concepts in Hip Preservation Surgery

    PubMed Central

    Adler, Kelly L.; Cook, P. Christopher; Geisler, Paul R.; Yen, Yi-Meng; Giordano, Brian D.

    2016-01-01

    Context: Successful treatment of nonarthritic hip pain in young athletic individuals remains a challenge. A growing fund of clinical knowledge has paralleled technical innovations that have enabled hip preservation surgeons to address a multitude of structural variations of the proximal femur and acetabulum and concomitant intra-articular joint pathology. Often, a combination of open and arthroscopic techniques are necessary to treat more complex pathomorphologies. Peri- and postoperative recovery after such procedures can pose a substantial challenge to the patient, and a dedicated, thoughtful approach may reduce setbacks, limit morbidity, and help optimize functional outcomes. Evidence Acquisition: PubMed and CINAHL databases were searched to identify relevant scientific and review articles through December 2014 using the search terms hip preservation, labrum, surgical dislocation, femoroacetabular impingement, postoperative rehabilitation, peri-acetabular osteotomy, and rotational osteotomy. Reference lists of included articles were reviewed to locate additional references of interest. Study Design: Clinical review. Level of Evidence: Level 4. Results: Hip preservation procedures and appropriate rehabilitation have allowed individuals to return to a physically active lifestyle. Conclusion: Effective postoperative rehabilitation must consider modifications and precautions specific to the particular surgical techniques used. Proper postoperative rehabilitation after hip preservation surgery may help optimize functional recovery and maximize clinical success and patient satisfaction. PMID:26733593

  9. Focus measure method based on the modulus of the gradient of the color planes for digital microscopy

    NASA Astrophysics Data System (ADS)

    Hurtado-Pérez, Román; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso; Aguilar-Valdez, J. Félix; Ortega-Mendoza, Gabriel

    2018-02-01

    The modulus of the gradient of the color planes (MGC) is implemented to transform multichannel information to a grayscale image. This digital technique is used in two applications: (a) focus measurements during autofocusing (AF) process and (b) extending the depth of field (EDoF) by means of multifocus image fusion. In the first case, the MGC procedure is based on an edge detection technique and is implemented in over 15 focus metrics that are typically handled in digital microscopy. The MGC approach is tested on color images of histological sections for the selection of in-focus images. An appealing attribute of all the AF metrics working in the MGC space is their monotonic behavior even up to a magnification of 100×. An advantage of the MGC method is its computational simplicity and inherent parallelism. In the second application, a multifocus image fusion algorithm based on the MGC approach has been implemented on graphics processing units (GPUs). The resulting fused images are evaluated using a nonreference image quality metric. The proposed fusion method reveals a high-quality image independently of faulty illumination during the image acquisition. Finally, the three-dimensional visualization of the in-focus image is shown.
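
    The MGC operator itself is easy to state: compute the spatial gradient of every color plane and, at each pixel, take the modulus over all planes; the mean of that map then serves as the focus score. A short sketch under that reading (the exact normalization used by the authors may differ):

```python
import numpy as np

def mgc(image_rgb):
    """Modulus of the gradient of the color planes: one grayscale map from an RGB image."""
    acc = np.zeros(image_rgb.shape[:2])
    for c in range(image_rgb.shape[2]):
        gy, gx = np.gradient(image_rgb[..., c].astype(float))
        acc += gx ** 2 + gy ** 2
    return np.sqrt(acc)

def focus_measure(image_rgb):
    """Scalar focus score: sharper (in-focus) slices give a larger mean gradient modulus."""
    return float(mgc(image_rgb).mean())

# During autofocus, the slice of the z-stack with the highest score would be kept:
# best_slice = max(stack, key=focus_measure)
```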

  10. Assessment of Techniques for Evaluating Computer Systems for Federal Agency Procurements. Final Report.

    ERIC Educational Resources Information Center

    Letmanyi, Helen

    Developed to identify and qualitatively assess computer system evaluation techniques for use during acquisition of general purpose computer systems, this document presents several criteria for comparison and selection. An introduction discusses the automatic data processing (ADP) acquisition process and the need to plan for uncertainty through…

  11. 48 CFR 915.404-4-70-7 - Alternative techniques.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 (2010-10-01). Alternative techniques. 915.404-4-70-7 Section 915.404-4-70-7 Federal Acquisition Regulations System DEPARTMENT OF ENERGY CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Contract Pricing 915.404-4-70-7 Alternative...

  12. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    NASA Astrophysics Data System (ADS)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size, and the availability of and need for processing SAR and MRI data in particular have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonance scanners in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid version of a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that minimizes a cost function by means of a Gauss-Seidel-type iteration. Our algorithm optimizes the same cost function but, unlike the original work, uses a parallel Jacobi-class scheme with alternated minimizations. This strategy is of the chessboard type: red pixels are mutually independent and can be updated in parallel within one iteration, and black pixels can likewise be updated in parallel in the alternating iteration. We present parallel implementations of our algorithm for different multicore architectures, namely multicore CPUs, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, our parallel algorithm outperforms the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
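
    The chessboard (red-black) update pattern can be sketched on a generic smoothing problem: red pixels depend only on black neighbours and vice versa, so each colour can be updated fully in parallel in alternating half-iterations. The sketch illustrates only this update pattern, not the accumulation-of-residual-maps cost function.

```python
import numpy as np

def redblack_sweep(u):
    """One chessboard sweep: all pixels of one colour are independent and updated together."""
    rows, cols = np.indices(u.shape)
    red = (rows + cols) % 2 == 0
    interior = np.zeros_like(red)
    interior[1:-1, 1:-1] = True
    for colour in (red, ~red):
        avg = np.zeros_like(u)
        avg[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
        mask = colour & interior
        u[mask] = avg[mask]          # every pixel of this colour is updated in parallel
    return u

u = np.random.default_rng(0).standard_normal((128, 128))
for _ in range(100):
    u = redblack_sweep(u)
```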

  13. Cognitive and Sociocultural Perspectives: Two Parallel SLA Worlds?

    ERIC Educational Resources Information Center

    Zuengler, Jane; Miller, Elizabeth R.

    2006-01-01

    Looking back at the past 15 years in the field of second language acquisition (SLA), the authors select and discuss several important developments. One is the impact of various sociocultural perspectives such as Vygotskian sociocultural theory, language socialization, learning as changing participation in situated practices, Bakhtin and the…

  14. Data acquisition and processing system for the HT-6M tokamak fusion experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shu, Y.T.; Liu, G.C.; Pang, J.Q.

    1987-08-01

    This paper describes a high-speed data acquisition and processing system which has been successfully operated on the HT-6M tokamak fusion experimental device. The system collects, archives and analyzes up to 512 kilobytes of data from each shot of the experiment. A shot lasts 50-150 milliseconds and occurs every 5-10 minutes. The system consists of two PDP-11/24 computer systems. One PDP-11/24 is used for real-time data taking and on-line data analysis. It is based upon five CAMAC crates organized into a parallel branch. Another PDP-11/24 is used for off-line data processing. Both the data acquisition software RSX-DAS and the data processing software RSX-DAP have modular, multi-tasking and concurrent processing features.

  15. Stand-alone digital data storage control system including user control interface

    NASA Technical Reports Server (NTRS)

    Wright, Kenneth D. (Inventor); Gray, David L. (Inventor)

    1994-01-01

    A storage control system includes an apparatus and method for user control of a storage interface to operate a storage medium to store data obtained by a real-time data acquisition system. Digital data received in serial format from the data acquisition system is first converted to a parallel format and then provided to the storage interface. The operation of the storage interface is controlled in accordance with instructions based on user control input from a user. Also, a user status output is displayed in accordance with storage data obtained from the storage interface. By allowing the user to control and monitor the operation of the storage interface, a stand-alone, user-controllable data storage system is provided for storing the digital data obtained by a real-time data acquisition system.

  16. Fast inner-volume imaging of the lumbar spine with a spatially focused excitation using a 3D-TSE sequence.

    PubMed

    Riffel, Philipp; Michaely, Henrik J; Morelli, John N; Paul, Dominik; Kannengiesser, Stephan; Schoenberg, Stefan O; Haneder, Stefan

    2015-04-01

    The purpose of this study was to evaluate the feasibility and technical quality of a zoomed three-dimensional (3D) turbo spin-echo (TSE) sampling perfection with application optimized contrasts using different flip-angle evolutions (SPACE) sequence of the lumbar spine. In this prospective feasibility study, nine volunteers underwent a 3-T magnetic resonance examination of the lumbar spine including 1) a conventional 3D T2-weighted (T2w) SPACE sequence with generalized autocalibrating partially parallel acquisition technique acceleration factor 2 and 2) a zoomed 3D T2w SPACE sequence with a reduced field of view (reduction factor 2). Images were evaluated with regard to image sharpness, signal homogeneity, and the presence of artifacts by two experienced radiologists. For quantitative analysis, signal-to-noise ratio (SNR) values were calculated. Image sharpness of anatomic structures was statistically significantly greater with zoomed SPACE (P < .0001), whereas the signal homogeneity was statistically significantly greater with conventional SPACE (cSPACE; P = .0003). There were no statistically significant differences in extent of artifacts. Acquisition times were 8:20 minutes for cSPACE and 6:30 minutes for zoomed SPACE. Readers 1 and 2 selected zSPACE as the preferred sequence in five of nine cases. In two of nine cases, both sequences were rated as equally preferred by both the readers. SNR values were statistically significantly greater with cSPACE. In comparison to a cSPACE sequences, zoomed SPACE imaging of the lumbar spine provides sharper images in conjunction with a 25% reduction in acquisition time. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  17. Accelerated high-resolution photoacoustic tomography via compressed sensing

    NASA Astrophysics Data System (ADS)

    Arridge, Simon; Beard, Paul; Betcke, Marta; Cox, Ben; Huynh, Nam; Lucka, Felix; Ogunlade, Olumide; Zhang, Edward

    2016-12-01

    Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissues structures with suitable sparsity-constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning as well as reducing the channel count of parallelized schemes that use detector arrays.

  18. Acquisition of dental skills in preclinical technique courses: influence of spatial and manual abilities.

    PubMed

    Schwibbe, Anja; Kothe, Christian; Hampe, Wolfgang; Konradt, Udo

    2016-10-01

    Sixty years of research have not added up to a concordant evaluation of the influence of spatial and manual abilities on dental skill acquisition. We used Ackerman's theory of ability determinants of skill acquisition to explain the influence of spatial visualization and manual dexterity on the task performance of dental students in two consecutive preclinical technique courses. We measured the spatial and manual abilities of applicants to Hamburg Dental School by means of a multiple-choice test on Technical Aptitude and a wire-bending test, respectively. Preclinical dental technique tasks were categorized as consistent-simple or inconsistent-complex based on their contents. For analysis, we used robust regression to circumvent typical limitations in dental studies such as small sample size and non-normal residual distributions. We found that manual ability, but not spatial ability, exhibited a moderate influence on the performance in consistent-simple tasks during dental skill acquisition in preclinical dentistry. Both abilities revealed a moderate relation with the performance in inconsistent-complex tasks. These findings support the hypotheses we had postulated on the basis of Ackerman's work. Both spatial and manual abilities are therefore required for the acquisition of dental skills in preclinical technique courses. These results support the view that both abilities should be addressed in dental admission procedures in addition to cognitive measures.
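
    A minimal sketch of the kind of robust regression mentioned above (a Huber M-estimator via statsmodels), run on synthetic ability and performance scores rather than the study's data; variable names, effect sizes, and noise model are invented for illustration.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(42)
      n = 80
      manual = rng.normal(size=n)          # wire-bending test score (standardised)
      spatial = rng.normal(size=n)         # technical-aptitude test score (standardised)
      # hypothetical consistent-simple task performance: mainly manual ability,
      # plus heavy-tailed noise to mimic non-normal residuals
      performance = 0.4 * manual + 0.1 * spatial + rng.standard_t(df=3, size=n)

      X = sm.add_constant(np.column_stack([manual, spatial]))
      model = sm.RLM(performance, X, M=sm.robust.norms.HuberT())  # Huber M-estimator
      result = model.fit()
      print(result.summary())   # coefficients are resistant to outlying residuals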

  19. An overview of data acquisition, signal coding and data analysis techniques for MST radars

    NASA Technical Reports Server (NTRS)

    Rastogi, P. K.

    1986-01-01

    An overview is given of the data acquisition, signal processing, and data analysis techniques that are currently in use with high power MST/ST (mesosphere stratosphere troposphere/stratosphere troposphere) radars. This review supplements the works of Rastogi (1983) and Farley (1984) presented at previous MAP workshops. A general description is given of data acquisition and signal processing operations, which are characterized on the basis of their disparate time scales. Signal coding is then addressed, with a brief description of frequently used codes and their limitations. Finally, several aspects of statistical data processing, such as signal statistics, power spectrum and autocovariance analysis, and outlier removal techniques, are discussed.
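
    The spectral processing steps named in the review can be sketched as follows: a periodogram estimate of the Doppler power spectrum and an FFT-based autocovariance, whose first lag gives a pulse-pair style mean-Doppler estimate, computed for a synthetic noisy complex time series. The pulse repetition frequency and signal parameters are assumed values, not taken from any radar.

      import numpy as np

      fs = 1000.0                      # pulse repetition frequency [Hz] (assumed)
      n = 256
      t = np.arange(n) / fs
      rng = np.random.default_rng(1)
      # synthetic echo: Doppler-shifted line at 125 Hz buried in complex white noise
      signal = np.exp(2j * np.pi * 125.0 * t) + 0.7 * (rng.normal(size=n) + 1j * rng.normal(size=n))

      # periodogram estimate of the Doppler power spectrum
      spectrum = np.abs(np.fft.fftshift(np.fft.fft(signal))) ** 2 / n
      freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1 / fs))
      print("peak Doppler frequency: %.1f Hz" % freqs[np.argmax(spectrum)])

      # autocovariance via the Wiener-Khinchin relation (inverse FFT of the spectrum)
      acov = np.fft.ifft(np.abs(np.fft.fft(signal)) ** 2) / n
      # the phase of the first lag gives a pulse-pair style mean-Doppler estimate
      print("pulse-pair estimate: %.1f Hz" % (np.angle(acov[1]) * fs / (2 * np.pi)))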

  20. The application of remote sensing techniques to selected inter and intra urban data acquisition problems

    NASA Technical Reports Server (NTRS)

    Horton, F. E.

    1970-01-01

    The utility of remote sensing techniques to urban data acquisition problems in several distinct areas was identified. This endeavor included a comparison of remote sensing systems for urban data collection, the extraction of housing quality data from aerial photography, utilization of photographic sensors in urban transportation studies, urban change detection, space photography utilization, and an application of remote sensing techniques to the acquisition of data concerning intra-urban commercial centers. The systematic evaluation of variable extraction for urban modeling and planning at several different scales, and the model derivation for identifying and predicting economic growth and change within a regional system of cities are also studied.

  1. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units

    USDA-ARS?s Scientific Manuscript database

    This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...
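
    A rough sketch of the Parallel Cyclic Reduction (PCR) recurrence mentioned in the abstract, applied to a generic tridiagonal system. NumPy vectorisation stands in for the one-thread-per-row GPU mapping; this is not the CCHE2D/CUDA Fortran code, and the function name and test system are illustrative.

      import numpy as np

      def pcr_tridiag(a, b, c, d):
          """Solve a tridiagonal system by parallel cyclic reduction (PCR).

          a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal
          (c[-1] unused), d: right-hand side; all of length n.
          Every row is reduced independently at each step, which is why the
          scheme maps onto one GPU thread per row; NumPy vectorisation stands
          in for that here.  Assumes a diagonally dominant system.
          """
          a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
          n = len(b)
          a[0] = 0.0
          c[-1] = 0.0
          stride = 1
          while stride < n:
              # neighbouring rows at distance `stride`; rows falling outside the
              # system behave like the trivial equation x = 0
              am, bm, cm, dm = np.zeros(n), np.ones(n), np.zeros(n), np.zeros(n)
              ap, bp, cp, dp = np.zeros(n), np.ones(n), np.zeros(n), np.zeros(n)
              am[stride:], bm[stride:] = a[:-stride], b[:-stride]
              cm[stride:], dm[stride:] = c[:-stride], d[:-stride]
              ap[:-stride], bp[:-stride] = a[stride:], b[stride:]
              cp[:-stride], dp[:-stride] = c[stride:], d[stride:]
              alpha = -a / bm          # eliminates the coupling to row i - stride
              gamma = -c / bp          # eliminates the coupling to row i + stride
              a, b, c, d = (alpha * am,
                            b + alpha * cm + gamma * ap,
                            gamma * cp,
                            d + alpha * dm + gamma * dp)
              stride *= 2
          return d / b                 # every equation is now fully decoupled

      # quick check against a dense solve
      rng = np.random.default_rng(0)
      n = 8
      b = 4.0 + rng.random(n)
      a, c, d = -rng.random(n), -rng.random(n), rng.random(n)
      x = pcr_tridiag(a, b, c, d)
      A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
      print(np.allclose(A @ x, d))     # True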

  2. Lock Acquisition and Sensitivity Analysis of Advanced LIGO Interferometers

    NASA Astrophysics Data System (ADS)

    Martynov, Denis

    The Laser Interferometer Gravitational-wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for the direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz - 5 kHz. Direct detection of space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe. The initial phase of LIGO started in 2002, and since then data were collected during six science runs. Instrument sensitivity improved from run to run due to the effort of the commissioning team. Initial LIGO reached its design sensitivity during the last science run, which ended in October 2010. In parallel with commissioning and data analysis with the initial detector, the LIGO group worked on research and development of the next generation of detectors. The major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014. This thesis describes the results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. It also discusses new techniques and tools developed at the 40m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and the design of isolation kits for ground seismometers. The first part of this thesis is devoted to the description of methods for bringing the interferometer into the linear regime, where collection of data becomes possible. States of longitudinal and angular controls of interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail. Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysical data that must be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up in both observatories to monitor the quality of the collected data in real time. Sensitivity analysis was done to understand and eliminate noise sources of the instrument. The coupling of noise sources to the gravitational wave channel can be reduced if robust feedforward and optimal feedback control loops are implemented. Static and adaptive feedforward noise cancellation techniques applied to Advanced LIGO interferometers and tested at the 40m prototype are described in the last part of this thesis. Applications of optimal time-domain feedback control techniques and estimators to aLIGO control loops are also discussed. Commissioning work is still ongoing at the sites. The first science run of Advanced LIGO is planned for September 2015 and will last for 3-4 months. This run will be followed by a set of small instrument upgrades that will be installed on a time scale of a few months. The second science run will start in spring 2016 and last for about six months. Since the current sensitivity of Advanced LIGO is already more than a factor of 3 higher than that of the initial detectors and keeps improving on a monthly basis, the upcoming science runs have a good chance of making the first direct detection of gravitational waves.
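
    The adaptive feedforward cancellation mentioned above can be illustrated with a basic least-mean-squares (LMS) filter that learns the coupling from a witness channel to a target channel and subtracts it. The data, filter length, and step size below are synthetic placeholders, not the aLIGO implementation.

      import numpy as np

      rng = np.random.default_rng(7)
      n, taps, mu = 20000, 32, 0.01

      witness = rng.normal(size=n)                       # e.g. a seismometer channel
      true_path = np.exp(-np.arange(taps) / 8.0)         # unknown coupling to the target
      coupled = np.convolve(witness, true_path)[:n]
      target = coupled + 0.1 * rng.normal(size=n)        # channel to be cleaned

      w = np.zeros(taps)                                 # adaptive FIR coefficients
      residual = np.zeros(n)
      for i in range(taps, n):
          x = witness[i - taps + 1:i + 1][::-1]          # most recent witness samples
          e = target[i] - w @ x                          # cancellation error
          w += mu * e * x                                # LMS weight update
          residual[i] = e

      print("rms before: %.3f  after: %.3f" % (target[taps:].std(), residual[taps:].std()))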

  3. High-efficiency integrated readout circuit for single photon avalanche diode arrays in fluorescence lifetime imaging.

    PubMed

    Acconcia, G; Cominelli, A; Rech, I; Ghioni, M

    2016-11-01

    In recent years, lifetime measurements by means of the Time Correlated Single Photon Counting (TCSPC) technique have led to a significant breakthrough in medical and biological fields. Unfortunately, the many advantages of TCSPC-based approaches come along with the major drawback of a relatively long acquisition time. The exploitation of multiple channels in parallel could in principle mitigate this issue, and at the same time it opens the way to a multi-parameter analysis of the optical signals, e.g., as a function of wavelength or spatial coordinates. The TCSPC multichannel solutions proposed so far, though, suffer from a tradeoff between number of channels and performance, and the overall measurement speed has not been increased according to the number of channels, thus reducing the advantages of having a multichannel system. In this paper, we present a novel readout architecture for bi-dimensional, high-density Single Photon Avalanche Diode (SPAD) arrays, specifically designed to maximize the throughput of the whole system and able to guarantee an efficient use of resources. The core of the system is a routing logic that can provide a dynamic connection between a large number of SPAD detectors and a much lower number of high-performance acquisition channels. A key feature of our smart router is its ability to guarantee high efficiency under any operating condition.

  4. 48 CFR 15.102 - Oral presentations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 CFR 15.102 - Oral presentations. Federal Acquisition Regulations System; FEDERAL ACQUISITION REGULATION; CONTRACTING METHODS AND CONTRACT TYPES; CONTRACTING BY NEGOTIATION; Source Selection Processes and Techniques; 15.102 Oral...

  5. Microscope mode secondary ion mass spectrometry imaging with a Timepix detector.

    PubMed

    Kiss, Andras; Jungmann, Julia H; Smith, Donald F; Heeren, Ron M A

    2013-01-01

    In-vacuum active pixel detectors enable high sensitivity, highly parallel time- and space-resolved detection of ions from complex surfaces. For the first time, a Timepix detector assembly was combined with a secondary ion mass spectrometer for microscope mode secondary ion mass spectrometry (SIMS) imaging. Time resolved images from various benchmark samples demonstrate the imaging capabilities of the detector system. The main advantages of the active pixel detector are the higher signal-to-noise ratio and parallel acquisition of arrival time and position. Microscope mode SIMS imaging of biomolecules is demonstrated from tissue sections with the Timepix detector.

  6. Psycholinguistic Techniques and Resources in Second Language Acquisition Research

    ERIC Educational Resources Information Center

    Roberts, Leah

    2012-01-01

    In this article, a survey of current psycholinguistic techniques relevant to second language acquisition (SLA) research is presented. I summarize many of the available methods and discuss their use with particular reference to two critical questions in current SLA research: (1) What does a learner's current knowledge of the second language (L2)…

  7. Spacing Techniques in Second Language Vocabulary Acquisition: Short-Term Gains vs. Long-Term Memory

    ERIC Educational Resources Information Center

    Schuetze, Ulf

    2015-01-01

    This article reports the results of two experiments using the spacing technique (Leitner, 1972; Landauer & Bjork, 1978) in second language vocabulary acquisition. In the past, studies in this area have produced mixed results attempting to differentiate between massed, uniform and expanded intervals of spacing (Balota, Duchek, & Logan,…

  8. Continuing Medical Education-Driven Skills Acquisition and Impact on Improved Patient Outcomes in Family Practice Setting.

    ERIC Educational Resources Information Center

    Bellamy, Nicholas; Goldstein, Laurence D.; Tekanoff, Rory A.

    2000-01-01

    Family practitioners (n=474) accompanied by their patients were trained in injection techniques to treat osteoarthritis. Pre- and postsession assessments showed that physicians felt comfortable with the new technique, skill acquisition occurred in a supportive setting for physicians and patients, and many patients experienced significant health…

  9. SU-F-J-220: Micro-CT Based Quantification of Mouse Brain Vasculature: The Effects of Acquisition Technique and Contrast Material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tipton, C; Lamba, M; Qi, Z

    Purpose: Cognitive impairment from radiation therapy to the brain may be linked to the loss of total blood volume in the brain. To account for brain injury, it is crucial to develop an understanding of blood volume loss as a result of radiation therapy. This study investigates µCT based quantification of mouse brain vasculature, focusing on the effect of acquisition technique and contrast material. Methods: Four mice were scanned on a µCT scanner (Siemens Inveon). The reconstructed voxel size was 18 µm³ and all protocols were Hounsfield Unit (HU) calibrated. The mice were injected with 40 mg of gold nanoparticles (MediLumine) or 100 µl of Exitron 12000 (Miltenyi Biotec). Two acquisition techniques were also performed. A single kVp technique scanned the mouse once using an x-ray beam of 80 kVp, and segmentation was completed based on a threshold of HU values. The dual kVp technique scanned the mouse twice, using 50 kVp and 80 kVp, and this segmentation was based on the ratio of the HU values at the two kVps. After image reconstruction and segmentation, the brain blood volume was determined as a percentage of the total brain volume. Results: For the single kVp acquisition at 80 kVp, the brain blood volume had an average of 3.5% for gold and 4.0% for Exitron 12000. Also at 80 kVp, the contrast-noise ratio was significantly better for images acquired with the gold nanoparticles (2.0) than for those acquired with the Exitron 12000 (1.4). The dual kVp acquisition shows improved separation of skull from vasculature, but increased image noise. Conclusion: In summary, the effects of acquisition technique and contrast material for quantification of mouse brain vasculature showed that gold nanoparticles produced more consistent segmentation of brain vasculature than Exitron 12000. Also, dual kVp acquisition may improve the accuracy of brain vasculature quantification, although the effect of noise amplification warrants further study.
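
    A toy numerical contrast of the two segmentation rules described: single-kVp HU thresholding versus a dual-kVp HU-ratio criterion, each reported as a blood-volume percentage of the brain. The HU values, noise levels, and thresholds are hypothetical and only illustrate the logic, not the study's calibration.

      import numpy as np

      rng = np.random.default_rng(3)
      shape = (64, 64, 64)

      # toy labels: 0 = brain tissue, 1 = contrast-filled vessel, 2 = skull
      labels = np.zeros(shape, dtype=int)
      labels[30:34, :, :] = 1
      labels[:2, :, :] = 2

      # hypothetical mean HU values at the two tube voltages, plus noise
      hu80 = np.select([labels == 0, labels == 1, labels == 2], [40, 600, 900]) + rng.normal(0, 30, shape)
      hu50 = np.select([labels == 0, labels == 1, labels == 2], [45, 900, 1000]) + rng.normal(0, 30, shape)

      brain = labels != 2                      # assume the skull was masked out already

      # single-kVp rule: simple HU threshold on the 80 kVp image
      vessel_single = brain & (hu80 > 300)

      # dual-kVp rule: the 50/80 kVp HU ratio flags contrast-enhanced voxels
      ratio = hu50 / np.clip(hu80, 1, None)
      vessel_dual = brain & (hu80 > 150) & (ratio > 1.2)

      for name, mask in [("single-kVp", vessel_single), ("dual-kVp", vessel_dual)]:
          print(name, "blood volume: %.1f %%" % (100 * mask.sum() / brain.sum()))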

  10. Calibrationless parallel magnetic resonance imaging: a joint sparsity model.

    PubMed

    Majumdar, Angshul; Chaudhury, Kunal Narayan; Ward, Rabab

    2013-12-05

    State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity maps for SENSE and SMASH, and the interpolation weights for GRAPPA and SPIRiT. Thus, all these techniques are sensitive to the calibration (parameter estimation) stage. In this work, we have proposed a parallel MRI technique that does not require any calibration but yields reconstruction results that are on par with (or even better than) state-of-the-art methods in parallel MRI. Our proposed method requires solving non-convex analysis and synthesis prior joint-sparsity problems. This work also derives the algorithms for solving them. Experimental validation was carried out on two datasets: an eight-channel brain and an eight-channel Shepp-Logan phantom. Two sampling methods were used: variable density random sampling and non-Cartesian radial sampling. For the brain data an acceleration factor of 4 was used, and for the other an acceleration factor of 6 was used. The reconstruction results were quantitatively evaluated based on the Normalised Mean Squared Error between the reconstructed image and the originals. The qualitative evaluation was based on the actual reconstructed images. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods (CS SENSE and l1SPIRiT) and two calibration-free techniques (Distributed CS and SAKE). Our method yields better reconstruction results than all of them.

  11. Highly accelerated cardiovascular MR imaging using many channel technology: concepts and clinical applications

    PubMed Central

    Sodickson, Daniel K.

    2010-01-01

    Cardiovascular magnetic resonance imaging (CVMRI) is of proven clinical value in the non-invasive imaging of cardiovascular diseases. CVMRI requires rapid image acquisition, but acquisition speed is fundamentally limited in conventional MRI. Parallel imaging provides a means for increasing acquisition speed and efficiency. However, signal-to-noise (SNR) limitations and the limited number of receiver channels available on most MR systems have in the past imposed practical constraints, which dictated the use of moderate accelerations in CVMRI. High levels of acceleration, which were unattainable previously, have become possible with many-receiver MR systems and many-element, cardiac-optimized RF-coil arrays. The resulting imaging speed improvements can be exploited in a number of ways, ranging from enhancement of spatial and temporal resolution to efficient whole heart coverage to streamlining of CVMRI work flow. In this review, examples of these strategies are provided, following an outline of the fundamentals of the highly accelerated imaging approaches employed in CVMRI. Topics discussed include basic principles of parallel imaging; key requirements for MR systems and RF-coil design; practical considerations of SNR management, supported by multi-dimensional accelerations, 3D noise averaging and high field imaging; highly accelerated clinical state-of-the art cardiovascular imaging applications spanning the range from SNR-rich to SNR-limited; and current trends and future directions. PMID:17562047

  12. Evaluation of dual-source parallel RF excitation for diffusion-weighted whole-body MR imaging with background body signal suppression at 3.0 T.

    PubMed

    Mürtz, Petra; Kaschner, Marius; Träber, Frank; Kukuk, Guido M; Büdenbender, Sarah M; Skowasch, Dirk; Gieseke, Jürgen; Schild, Hans H; Willinek, Winfried A

    2012-11-01

    To evaluate the use of dual-source parallel RF excitation (TX) for diffusion-weighted whole-body MRI with background body signal suppression (DWIBS) at 3.0 T. Forty consecutive patients were examined on a clinical 3.0-T MRI system using a diffusion-weighted (DW) spin-echo echo-planar imaging sequence with a combination of short TI inversion recovery and slice-selective gradient reversal fat suppression. DWIBS of the neck (n=5), thorax (n=8), abdomen (n=6) and pelvis (n=21) was performed both with TX (2:56 min) and with standard single-source RF excitation (4:37 min). The quality of DW images and reconstructed inverted maximum intensity projections was visually judged by two readers (blinded to acquisition technique). Signal homogeneity and fat suppression were scored as "improved", "equal", "worse" or "ambiguous". Moreover, the apparent diffusion coefficient (ADC) values were measured in muscles, urinary bladder, lymph nodes and lesions. By the use of TX, signal homogeneity was "improved" in 25/40 and "equal" in 15/40 cases. Fat suppression was "improved" in 17/40 and "equal" in 23/40 cases. These improvements were statistically significant (p<0.001, Wilcoxon signed-rank test). In five patients, fluid-related dielectric shading was present, which improved remarkably. The ADC values did not significantly differ for the two RF excitation methods (p=0.630 over all data, pairwise Student's t-test). Dual-source parallel RF excitation improved image quality of DWIBS at 3.0 T with respect to signal homogeneity and fat suppression, reduced scan time by approximately one-third, and did not influence the measured ADC values. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
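
    For reference, the ADC values compared above come from a mono-exponential fit to the diffusion-weighted signal; with two b-values the fit reduces to a closed form. The b-values and signal intensities below are illustrative, not taken from the study.

      import numpy as np

      b0, b1 = 0.0, 800.0                      # s/mm^2, a typical DWIBS b-value pair (assumed)
      S0 = np.array([420.0, 380.0, 510.0])     # signal at b = 0 in three example ROIs
      S1 = np.array([150.0, 90.0, 230.0])      # signal at b = 800 s/mm^2

      # mono-exponential model S(b) = S0 * exp(-b * ADC)  =>  ADC = ln(S0/S1) / (b1 - b0)
      adc = np.log(S0 / S1) / (b1 - b0)        # in mm^2/s
      print(adc * 1e3)                         # conventionally reported in 10^-3 mm^2/s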

  13. State transition storyboards: A tool for designing the Goldstone solar system radar data acquisition system user interface software

    NASA Technical Reports Server (NTRS)

    Howard, S. D.

    1987-01-01

    Effective user interface design in software systems is a complex task that takes place without adequate modeling tools. By combining state transition diagrams and the storyboard technique of filmmakers, State Transition Storyboards were developed to provide a detailed modeling technique for the Goldstone Solar System Radar Data Acquisition System human-machine interface. Illustrations are included with a description of the modeling technique.

  14. Separation of parallel encoded complex-valued slices (SPECS) from a single complex-valued aliased coil image.

    PubMed

    Rowe, Daniel B; Bruce, Iain P; Nencka, Andrew S; Hyde, James S; Kociuba, Mary C

    2016-04-01

    Achieving a reduction in scan time with minimal inter-slice signal leakage is one of the significant obstacles in parallel MR imaging. In fMRI, multiband-imaging techniques accelerate data acquisition by simultaneously magnetizing the spatial frequency spectrum of multiple slices. The SPECS model eliminates the consequential inter-slice signal leakage from the slice unaliasing, while maintaining an optimal reduction in scan time and activation statistics in fMRI studies. When the combined k-space array is inverse Fourier reconstructed, the resulting aliased image is separated into the un-aliased slices through a least squares estimator. Without the additional spatial information from a phased array of receiver coils, slice separation in SPECS is accomplished with acquired aliased images in shifted FOV aliasing pattern, and a bootstrapping approach of incorporating reference calibration images in an orthogonal Hadamard pattern. The aliased slices are effectively separated with minimal expense to the spatial and temporal resolution. Functional activation is observed in the motor cortex, as the number of aliased slices is increased, in a bilateral finger tapping fMRI experiment. The SPECS model incorporates calibration reference images together with coefficients of orthogonal polynomials into an un-aliasing estimator to achieve separated images, with virtually no residual artifacts and functional activation detection in separated images. Copyright © 2015 Elsevier Inc. All rights reserved.
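
    The slice-separation step can be sketched as a pixel-wise least-squares solve when each aliased acquisition is a known signed (e.g. Hadamard-style) combination of the slices. The toy slices and matrix below are illustrative and omit the polynomial calibration terms of the full SPECS estimator.

      import numpy as np

      rng = np.random.default_rng(5)
      n_slices, ny, nx = 4, 32, 32

      # ground-truth slices and a Hadamard-style encoding of the aliased acquisitions
      slices = rng.random((n_slices, ny, nx))
      H = np.array([[ 1,  1,  1,  1],
                    [ 1, -1,  1, -1],
                    [ 1,  1, -1, -1],
                    [ 1, -1, -1,  1]], dtype=float)   # rows = acquisitions, cols = slices

      # each acquired image is a signed sum of the slices, plus noise
      aliased = np.tensordot(H, slices, axes=1) + 0.01 * rng.normal(size=(n_slices, ny, nx))

      # pixel-wise least-squares un-aliasing: solve H @ s = y for every pixel at once
      y = aliased.reshape(n_slices, -1)
      s_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
      recovered = s_hat.reshape(n_slices, ny, nx)

      print("max error:", np.max(np.abs(recovered - slices)))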

  15. Simultaneous orthogonal plane imaging.

    PubMed

    Mickevicius, Nikolai J; Paulson, Eric S

    2017-11-01

    Intrafraction motion can result in a smearing of planned external beam radiation therapy dose distributions, resulting in an uncertainty in dose actually deposited in tissue. The purpose of this paper is to present a pulse sequence that is capable of imaging a moving target at a high frame rate in two orthogonal planes simultaneously for MR-guided radiotherapy. By balancing the zero gradient moment on all axes, slices in two orthogonal planes may be spatially encoded simultaneously. The orthogonal slice groups may be acquired with equal or nonequal echo times. A Cartesian spoiled gradient echo simultaneous orthogonal plane imaging (SOPI) sequence was tested in phantom and in vivo. Multiplexed SOPI acquisitions were performed in which two parallel slices were imaged along two orthogonal axes simultaneously. An autocalibrating phase-constrained 2D-SENSE-GRAPPA (generalized autocalibrating partially parallel acquisition) algorithm was implemented to reconstruct the multiplexed data. SOPI images without intraslice motion artifacts were reconstructed at a maximum frame rate of 8.16 Hz. The 2D-SENSE-GRAPPA reconstruction separated the parallel slices aliased along each orthogonal axis. The high spatiotemporal resolution provided by SOPI has the potential to be beneficial for intrafraction motion management during MR-guided radiation therapy or other MRI-guided interventions. Magn Reson Med 78:1700-1710, 2017. © 2016 International Society for Magnetic Resonance in Medicine. © 2016 International Society for Magnetic Resonance in Medicine.

  16. Graphics Processing Unit (GPU) implementation of image processing algorithms to improve system performance of the Control, Acquisition, Processing, and Image Display System (CAPIDS) of the Micro-Angiographic Fluoroscope (MAF).

    PubMed

    Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S

    2012-02-23

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same, and the operation on one pixel does not depend upon the result of the operation on another, allowing the entire image to be processed in parallel. GPU hardware was developed for exactly this kind of massively parallel processing. Thus, for an algorithm with a high degree of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat-field correction, temporal filtering, image subtraction, roadmap mask generation, and display windowing and leveling. A comparison between the previous and the upgraded version of CAPIDS has been presented to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements (with respect to timing or frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure and automatic image windowing and leveling during each frame.
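
    The reason these operations map so well onto a GPU is that each one is an element-wise (pixel-independent) array expression. The NumPy sketch below mirrors that processing chain with made-up detector frames and parameters; on a GPU the same expressions would run as per-pixel kernels (for example via a drop-in array library such as CuPy). It is an illustration of the idea, not the actual CAPIDS code.

      import numpy as np

      rng = np.random.default_rng(11)
      frames = rng.poisson(200, size=(30, 512, 512)).astype(np.float32)   # raw detector frames
      dark = np.full((512, 512), 10.0, dtype=np.float32)                  # dark/offset image
      flat = rng.normal(1.0, 0.05, size=(512, 512)).astype(np.float32)    # gain (flat-field) map

      # flat-field correction: identical, independent arithmetic at every pixel
      corrected = (frames - dark) / flat

      # recursive temporal filter: out[t] = a*in[t] + (1-a)*out[t-1], still pixel-independent
      a, filtered = 0.25, np.empty_like(corrected)
      filtered[0] = corrected[0]
      for t in range(1, len(corrected)):
          filtered[t] = a * corrected[t] + (1 - a) * filtered[t - 1]

      # image subtraction (e.g. DSA-style) and window/level scaling for display
      subtracted = filtered - filtered[0]
      window, level = 200.0, 100.0
      display = np.clip((subtracted - (level - window / 2)) / window, 0, 1) * 255
      print(display.dtype, display.shape)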

  17. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to the implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  18. Cloud parallel processing of tandem mass spectrometry based proteomics data.

    PubMed

    Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus

    2012-10-05

    Data analysis in mass spectrometry based proteomics struggles to keep pace with the advances in instrumentation and the increasing rate of data acquisition. Analyzing this data involves multiple steps requiring diverse software, using different algorithms and data formats. Speed and performance of the mass spectral search engines are continuously improving, although not necessarily as needed to face the challenges of acquired big data. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database as well as spectral library search using our data decomposition programs, X!Tandem and SpectraST.
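
    The decomposition strategy can be sketched generically: split the spectrum list into chunks, run an unchanged per-chunk "search" in parallel, and concatenate the partial results. The placeholder search function below stands in for X!Tandem or SpectraST, and a local process pool stands in for the cloud workflow engines described; all names are illustrative.

      from concurrent.futures import ProcessPoolExecutor
      from itertools import chain

      def search_chunk(spectra):
          """Placeholder for an unchanged search engine run on one data chunk."""
          return [(s["scan"], max(s["peaks"])) for s in spectra]   # dummy "identifications"

      def decompose(spectra, n_chunks):
          """Split the spectrum list into roughly equal chunks (the mzXML split step)."""
          size = -(-len(spectra) // n_chunks)   # ceiling division
          return [spectra[i:i + size] for i in range(0, len(spectra), size)]

      if __name__ == "__main__":
          spectra = [{"scan": i, "peaks": [i % 7, i % 13, i % 101]} for i in range(10_000)]
          chunks = decompose(spectra, n_chunks=8)
          with ProcessPoolExecutor(max_workers=8) as pool:
              partial_results = list(pool.map(search_chunk, chunks))
          results = list(chain.from_iterable(partial_results))     # the recompose step
          print(len(results), results[:3])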

  19. Improvement of the repeatability of parallel transmission at 7T using interleaved acquisition in the calibration scan.

    PubMed

    Kameda, Hiroyuki; Kudo, Kohsuke; Matsuda, Tsuyoshi; Harada, Taisuke; Iwadate, Yuji; Uwano, Ikuko; Yamashita, Fumio; Yoshioka, Kunihiro; Sasaki, Makoto; Shirato, Hiroki

    2017-12-04

    Respiration-induced phase shift affects B0/B1+ mapping repeatability in parallel transmission (pTx) calibration for 7T brain MRI, but is improved by breath-holding (BH). However, BH cannot be applied during long scans. To examine whether interleaved acquisition during calibration scanning could improve pTx repeatability and image homogeneity. Prospective. Nine healthy subjects. 7T MRI with a two-channel RF transmission system was used. Calibration scanning for B0/B1+ mapping was performed under sequential acquisition/free-breathing (Seq-FB), Seq-BH, and interleaved acquisition/FB (Int-FB) conditions. The B0 map was calculated with two echo times, and the B1+ map was obtained using the Bloch-Siegert method. Actual flip-angle imaging (AFI) and gradient echo (GRE) imaging were performed using pTx and quadrature-Tx (qTx). All scans were acquired in five sessions. Repeatability was evaluated using the intersession standard deviation (SD) or coefficient of variation (CV), and in-plane homogeneity was evaluated using the in-plane CV. A paired t-test with Bonferroni correction for multiple comparisons was used. The intersession CV/SDs for the B0/B1+ maps were significantly smaller in Int-FB than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The intersession CVs for the AFI and GRE images were also significantly smaller in Int-FB, Seq-BH, and qTx than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The in-plane CVs for the AFI and GRE images in Seq-FB, Int-FB, and Seq-BH were significantly smaller than in qTx (Bonferroni-corrected P < 0.01 for all). Using interleaved acquisition during calibration scans of pTx for 7T brain MRI improved the repeatability of B0/B1+ mapping, AFI, and GRE images, without BH. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  20. Evaluation of Parallel and Fan-Beam Data Acquisition Geometries and Strategies for Myocardial SPECT Imaging

    NASA Astrophysics Data System (ADS)

    Qi, Yujin; Tsui, B. M. W.; Gilland, K. L.; Frey, E. C.; Gullberg, G. T.

    2004-06-01

    This study evaluates myocardial SPECT images obtained from parallel-hole (PH) and fan-beam (FB) collimator geometries using both circular-orbit (CO) and noncircular-orbit (NCO) acquisitions. A newly developed 4-D NURBS-based cardiac-torso (NCAT) phantom was used to simulate the 99mTc-sestamibi uptakes in the human torso with myocardial defects in the left ventricular (LV) wall. Two phantoms were generated to simulate patients with thick and thin body builds. Projection data including the effects of attenuation, collimator-detector response and scatter were generated using SIMSET Monte Carlo simulations. A large number of photon histories were generated such that the projection data were close to noise free. Poisson noise fluctuations were then added to simulate the count densities found in clinical data. Noise-free and noisy projection data were reconstructed using the iterative OS-EM reconstruction algorithm with attenuation compensation. The reconstructed images from noisy projection data show that the noise levels are lower for the FB as compared to the PH collimator due to the increase in detected counts. The NCO acquisition method provides slightly better resolution and a small improvement in defect contrast as compared to the CO acquisition method in noise-free reconstructed images. Despite lower projection counts, the NCO shows the same noise level as the CO in the attenuation corrected reconstruction images. The results from the channelized Hotelling observer (CHO) study show that the FB collimator is superior to the PH collimator in myocardial defect detection, but the NCO shows no statistically significant difference from the CO for either the PH or FB collimator. In conclusion, our results indicate that data acquisition using NCO makes a very small improvement in the resolution over CO for myocardial SPECT imaging. This small improvement does not make a significant difference in myocardial defect detection. However, an FB collimator provides better defect detection than a PH collimator with similar spatial resolution for myocardial SPECT imaging.

  1. 48 CFR 15.100 - Scope of subpart.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 CFR 15.100 - Scope of subpart. Federal Acquisition Regulations System; FEDERAL ACQUISITION REGULATION; CONTRACTING METHODS AND CONTRACT TYPES; CONTRACTING BY NEGOTIATION; Source Selection Processes and Techniques; 15.100 Scope...

  2. Federal Library Programs for Acquisition of Foreign Materials.

    ERIC Educational Resources Information Center

    Cylke, Frank Kurt

    Sixteen libraries representing those agencies holding membership on the Federal Library Committee were surveyed to determine library foreign language or imprint holdings, acquisitions techniques, procedures and/or problems. Specific questions, relating to holdings, staff, budget and the acquisition, processing, reference and translation services…

  3. 48 CFR 15.101 - Best value continuum.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Section 15.101, Federal Acquisition Regulations System; FEDERAL ACQUISITION REGULATION; CONTRACTING METHODS AND CONTRACT TYPES; CONTRACTING BY NEGOTIATION; Source Selection Processes and Techniques; 15.101... cost or price may vary. For example, in acquisitions where the requirement is clearly definable and the...

  4. Acquisition Systems Protection Planning the Manhattan Project: A Case Study

    DTIC Science & Technology

    1994-06-03

    This study examines the counterintelligence and security programs of the Manhattan Project, the United States' acquisition of the atomic bomb, using...assessment methodology and counterintelligence techniques and procedures. Acquisition systems, Program protection, Manhattan Project, Atomic bomb, Technology protection, Counterintelligence, Security.

  5. Process optimization using combinatorial design principles: parallel synthesis and design of experiment methods.

    PubMed

    Gooding, Owen W

    2004-06-01

    The use of parallel synthesis techniques with statistical design of experiment (DoE) methods is a powerful combination for the optimization of chemical processes. Advances in parallel synthesis equipment and easy to use software for statistical DoE have fueled a growing acceptance of these techniques in the pharmaceutical industry. As drug candidate structures become more complex at the same time that development timelines are compressed, these enabling technologies promise to become more important in the future.

  6. Decoupling Principle Analysis and Development of a Parallel Three-Dimensional Force Sensor

    PubMed Central

    Zhao, Yanzhi; Jiao, Leihao; Weng, Dacheng; Zhang, Dan; Zheng, Rencheng

    2016-01-01

    In the development of multi-dimensional force sensors, dimension coupling is the ubiquitous factor restricting the improvement of measurement accuracy. To effectively reduce the influence of dimension coupling on the parallel multi-dimensional force sensor, a novel parallel three-dimensional force sensor is proposed using a mechanical decoupling principle, and the influence of friction on dimension coupling is effectively reduced by using rolling friction instead of sliding friction. In this paper, the mathematical model is established in combination with the structure model of the parallel three-dimensional force sensor, and the modeling and analysis of mechanical decoupling are carried out. The coupling degree (ε) of the designed sensor is defined and calculated, and the calculation results show that the mechanical decoupling parallel structure of the sensor possesses good decoupling performance. A prototype of the parallel three-dimensional force sensor was developed, and FEM analysis was carried out. A load calibration and data acquisition experiment system was built, and calibration experiments were then performed. According to the calibration experiments, the measurement accuracy is less than 2.86% and the coupling accuracy is less than 3.02%. The experimental results show that the sensor system possesses high measuring accuracy, which provides a basis for applied research on parallel multi-dimensional force sensors. PMID:27649194

  7. 3D acquisition and modeling for flint artefacts analysis

    NASA Astrophysics Data System (ADS)

    Loriot, B.; Fougerolle, Y.; Sestier, C.; Seulin, R.

    2007-07-01

    In this paper, we are interested in accurate acquisition and modeling of flint artefacts. Archaeologists need accurate geometry measurements to refine their understanding of the flint artefact manufacturing process. Current techniques require several operations. First, a copy of a flint artefact is reproduced. The copy is then sliced. A picture is taken of each slice. Eventually, geometric information is manually determined from the pictures. Such a technique is very time consuming, and the processing applied to the original, as well as to the reproduced object, induces several measurement errors (prototyping approximations, slicing, image acquisition, and measurement). By using 3D scanners, we significantly reduce the number of operations related to data acquisition and completely suppress the prototyping step to obtain an accurate 3D model. The 3D models are segmented into sliced parts that are then analyzed. Each slice is then automatically fitted with a mathematical representation. Such a representation offers several interesting properties: geometric features can be characterized (e.g. shapes, curvature, sharp edges, etc.), and the shape of the original piece of stone can be extrapolated. The contributions of this paper are an acquisition technique using 3D scanners that strongly reduces human intervention, acquisition time and measurement errors, and the representation of flint artefacts as mathematical 2D sections that enables accurate analysis.

  8. Limited angle tomographic breast imaging: A comparison of parallel beam and pinhole collimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wessell, D.E.; Kadrmas, D.J.; Frey, E.C.

    1996-12-31

    Results from clinical trials have suggested no improvement in lesion detection with parallel hole SPECT scintimammography (SM) with Tc-99m over parallel hole planar SM. In this initial investigation, we have elucidated some of the unique requirements of SPECT SM. With these requirements in mind, we have begun to develop practical data acquisition and reconstruction strategies that can reduce image artifacts and improve image quality. In this paper we investigate limited angle orbits for both parallel hole and pinhole SPECT SM. Singular Value Decomposition (SVD) is used to analyze the artifacts associated with the limited angle orbits. Maximum likelihood expectation maximization (MLEM) reconstructions are then used to examine the effects of attenuation compensation on the quality of the reconstructed image. All simulations are performed using the 3D-MCAT breast phantom. The results of these simulation studies demonstrate that limited angle SPECT SM is feasible, that attenuation correction is needed for accurate reconstructions, and that pinhole SPECT SM may have an advantage over parallel hole SPECT SM in terms of improved image quality and reduced image artifacts.
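
    For reference, the MLEM reconstruction named above is a multiplicative fixed-point update; a minimal dense-matrix version is sketched below with a random toy system matrix (attenuation effects would simply be folded into that matrix). It is not the SIMSET/3D-MCAT setup of the study.

      import numpy as np

      rng = np.random.default_rng(2)
      n_pix, n_bins = 64, 96

      A = rng.random((n_bins, n_pix))            # toy system (projection) matrix
      x_true = rng.random(n_pix) * 10
      y = rng.poisson(A @ x_true)                # noisy projection data

      x = np.ones(n_pix)                         # MLEM needs a positive initial estimate
      sens = A.T @ np.ones(n_bins)               # sensitivity image (column sums of A)
      for _ in range(50):
          ratio = y / np.clip(A @ x, 1e-12, None)
          x *= (A.T @ ratio) / sens              # multiplicative MLEM update

      print("relative error: %.3f" % (np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))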

  9. Domain decomposition methods in aerodynamics

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, V.; Saltz, Joel

    1990-01-01

    Compressible Euler equations are solved for two-dimensional problems by a preconditioned conjugate gradient-like technique. An approximate Riemann solver is used to compute the numerical fluxes to second order accuracy in space. Two ways to achieve parallelism are tested, one which makes use of the parallelism inherent in triangular solves and the other which employs domain decomposition techniques. The vectorization/parallelism in triangular solves is realized by the use of a reordering technique called wavefront ordering. This process involves the interpretation of the triangular matrix as a directed graph and the analysis of the data dependencies. It is noted that the factorization can also be done in parallel with the wavefront ordering. The performances of two ways of partitioning the domain, strips and slabs, are compared. Results on a Cray Y-MP are reported for an inviscid transonic test case. The performances of linear algebra kernels are also reported.

  10. DART -- Data acquisition for the next generation of Fermilab fixed target experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oleynik, G.; Anderson, J.; Appleton, L.

    1994-02-01

    DART is the name of the data acquisition effort for Fermilab experiments taking data in the '94--'95 time frame and beyond. Its charge is to provide a common system of hardware and software, which can be easily configured and extended to meet the wide range of data acquisition requirements of the experiments. Its strategy is to provide incrementally functional data acquisition systems to the experiments at frequent intervals to support the ongoing DA activities of the experiments. DART is a collaborative development effort between the experimenters and the Fermilab Computing Division. Experiments collaborating in DART cover a range of requirements from 400 Kbytes/sec event readout using a single DA processor, to 200 Mbytes/sec event readout involving 10 parallel readout streams, 10 VME event building planes and greater than 1,000 MIPs of event filter processing. The authors describe the requirements, architecture, and plans for the project and report on its current status.

  11. MR techniques for guiding high-intensity focused ultrasound (HIFU) treatments.

    PubMed

    Kuroda, Kagayaki

    2018-02-01

    To make full use of the ability of magnetic resonance (MR) to guide high-intensity focused ultrasound (HIFU) treatment, effort has been made to improve techniques for thermometry, motion tracking, and sound beam visualization. For monitoring rapid temperature elevation with proton resonance frequency (PRF) shift, data acquisition and processing can be accelerated with parallel imaging and/or sparse sampling in conjunction with appropriate signal processing methods. Thermometry should be robust against tissue motion, motion-induced magnetic field variation, and susceptibility change. Thus, multibaseline, referenceless, or hybrid techniques have become important. In cases with adipose or bony tissues, for which PRF shift cannot be used, thermometry with relaxation times or signal intensity may be utilized. Motion tracking is crucial not only for thermometry but also for targeting the focus of an ultrasound in moving organs such as the liver, kidney, or heart. Various techniques for motion tracking, such as those based on an anatomical image atlas with optical-flow displacement detection, a navigator echo to seize the diaphragm position, and/or rapid imaging to track vessel positions, have been proposed. Techniques for avoiding the ribcage and near-field heating have also been examined. MR acoustic radiation force imaging (MR-ARFI) is an alternative to thermometry that can identify the location and shape of the focal spot and sound beam path. This technique could be useful for treating heterogeneous tissue regions or performing transcranial therapy. All of these developments, which will be discussed further in this review, expand the applicability of HIFU treatments to a variety of clinical targets while maintaining safety and precision. Level of Evidence: 2. Technical Efficacy: Stage 4. J. Magn. Reson. Imaging 2018;47:316-331. © 2017 International Society for Magnetic Resonance in Medicine.
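
    The PRF-shift thermometry mentioned above converts a gradient-echo phase difference into a temperature change via dT = dphi / (2*pi*gamma*alpha*B0*TE). The sketch below uses nominal constants (alpha of about -0.01 ppm/degC) and invented scan parameters; it omits the multibaseline/referenceless corrections discussed in the review.

      import numpy as np

      GAMMA = 42.577e6        # 1H gyromagnetic ratio [Hz/T]
      ALPHA = -0.01e-6        # PRF thermal coefficient [1/degC], nominal value
      B0 = 3.0                # field strength [T] (assumed)
      TE = 12e-3              # echo time [s] (assumed)

      def prf_delta_temperature(phase_heated, phase_baseline):
          """Temperature change map from gradient-echo phase images (radians)."""
          dphi = np.angle(np.exp(1j * (phase_heated - phase_baseline)))  # wrap to (-pi, pi]
          return dphi / (2 * np.pi * GAMMA * ALPHA * B0 * TE)

      # a +10 degC hot spot produces this phase change at the settings above
      dphi_expected = 2 * np.pi * GAMMA * ALPHA * B0 * TE * 10.0
      print("phase change for +10 degC: %.3f rad" % dphi_expected)
      print("recovered dT: %.2f degC" % prf_delta_temperature(dphi_expected, 0.0))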

  12. Plasma potential and electron temperature evaluated by ball-pen and Langmuir probes in the COMPASS tokamak

    NASA Astrophysics Data System (ADS)

    Dimitrova, M.; Popov, Tsv K.; Adamek, J.; Kovačič, J.; Ivanova, P.; Hasan, E.; López-Bruna, D.; Seidl, J.; Vondráček, P.; Dejarnac, R.; Stöckel, J.; Imríšek, M.; Panek, R.; the COMPASS Team

    2017-12-01

    The radial distributions of the main plasma parameters in the scrape-off layer of the COMPASS tokamak are measured during L-mode and H-mode regimes by using both Langmuir and ball-pen probes mounted on a horizontal reciprocating manipulator. The radial profile of the plasma potential derived previously from Langmuir probe data by using the first derivative probe technique is compared with data derived using ball-pen probes. A good agreement can be seen between the data acquired by the two techniques during the L-mode discharge and during the H-mode regime within the inter-ELM periods. In contrast with the first derivative probe technique, the ball-pen probe technique does not require a swept voltage and, therefore, the temporal resolution is limited only by the data acquisition system. In the electron temperature evaluation, in the far scrape-off layer and in the limiter shadow, where the electron energy distribution is Maxwellian, the results from both techniques match well. In the vicinity of the last closed flux surface, where the electron energy distribution function is bi-Maxwellian, the ball-pen probe technique results are in agreement with the high-temperature components of the electron distribution only. We also discuss the application of relatively large Langmuir probes placed parallel and perpendicular to the magnetic field lines to studying the main plasma parameters. The results obtained by the two types of large probes agree well. They are compared with Thomson scattering data for electron temperatures and densities. The results for the electron densities are also compared with the results from ASTRA code calculations of the electron source due to the ionization of neutrals by fast electrons, and the origin of the bi-Maxwellian electron energy distribution function is briefly discussed.

  13. The Goddard Space Flight Center Program to develop parallel image processing systems

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.

    1972-01-01

    Parallel image processing, which is defined as image processing in which all points of an image are operated upon simultaneously, is discussed. Coherent optical, noncoherent optical, and electronic methods are considered parallel image processing techniques.

  14. 48 CFR 15.101-1 - Tradeoff process.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Tradeoff process. 15.101-1 Section 15.101-1 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Source Selection Processes and Techniques 15.101-1...

  15. Evaluation of usefulness of 3D views for clinical photography.

    PubMed

    Jinnin, Masatoshi; Fukushima, Satoshi; Masuguchi, Shinichi; Tanaka, Hiroki; Kawashita, Yoshio; Ishihara, Tsuyoshi; Ihn, Hironobu

    2011-01-01

    This is the first report investigating the usefulness of a 3D viewing technique (parallel viewing and cross-eyed viewing) for presenting clinical photography. Using the technique, the 3D structure of various lesions (e.g. tumors, wounds) or surgical procedures (e.g. lymph node dissection, flaps) can be grasped much more easily than with 2D photos, without any cost or optical aids. 3D cameras have recently become commercially available, but they may not be useful for presentation in scientific papers or poster sessions. To create a stereogram, two different pictures were taken from the right- and left-eye views using a digital camera. The two pictures were then placed next to one another. Using 9 stereograms, we performed a questionnaire-based survey. Our survey revealed that 57.7% of the doctors/students had acquired the 3D viewing technique and an additional 15.4% could learn parallel viewing with 10 minutes of training. Among the subjects capable of 3D views, 73.7% used the parallel view technique whereas only 26.3% chose the cross-eyed view. There was no significant difference in the results of the questionnaire about the efficiency and usefulness of 3D views between parallel view users and cross-eyed users. Almost all subjects (94.7%) answered that the technique is useful. Lesions with multiple undulations are a good application. 3D views, especially parallel viewing, are likely to be common and easy enough to be considered for practical use by doctors/students. The wide use of the technique may revolutionize the presentation of clinical pictures in meetings, educational lectures, or manuscripts.

  16. Development of Object Permanence in Visually Impaired Infants.

    ERIC Educational Resources Information Center

    Rogers, S. J.; Puchalski, C. B.

    1988-01-01

    Development of object permanence skills was examined longitudinally in 20 visually impaired infants (ages 4-25 months). Order of skill acquisition and span of time required to master skills paralleled that of sighted infants, but the visually impaired subjects were 8-12 months older than sighted counterparts when similar skills were acquired.…

  17. Suggestopedia and Soviet Sleep-Learning.

    ERIC Educational Resources Information Center

    Bancroft, W. Jane

    This paper examines the parallels between suggestopedia and Soviet sleep-learning for learning foreign languages. Both systems are based on the idea that the acquisition of information can occur in states below the optimal level of consciousness. Hypnopedia makes use of the period of paradoxical or light sleep that usually occurs just as one is…

  18. The Development of a Bilingual Vocabulary Measure for Armenian-English Children

    ERIC Educational Resources Information Center

    Hovsepian, Alice

    2017-01-01

    The purpose of this study was to develop a parallel bilingual vocabulary measure for the comparative study of receptive and expressive vocabulary growth in young Armenian-English bilinguals. The measure was comprised of four independent vocabulary lists equivalent on age of acquisition ratings. The lists were counterbalanced across four tasks,…

  19. Targeted proteomics coming of age - SRM, PRM and DIA performance evaluated from a core facility perspective.

    PubMed

    Kockmann, Tobias; Trachsel, Christian; Panse, Christian; Wahlander, Asa; Selevsek, Nathalie; Grossmann, Jonas; Wolski, Witold E; Schlapbach, Ralph

    2016-08-01

    Quantitative mass spectrometry is a rapidly evolving methodology applied in a large number of omics-type research projects. During the past years, new designs of mass spectrometers have been developed and launched as commercial systems while in parallel new data acquisition schemes and data analysis paradigms have been introduced. Core facilities provide access to such technologies, but also actively support the researchers in finding and applying the best-suited analytical approach. In order to implement a solid foundation for this decision making process, core facilities need to constantly compare and benchmark the various approaches. In this article we compare the quantitative accuracy and precision of the current state-of-the-art targeted proteomics approaches: single reaction monitoring (SRM), parallel reaction monitoring (PRM) and data-independent acquisition (DIA), across multiple liquid chromatography mass spectrometry (LC-MS) platforms, using a readily available commercial standard sample. All workflows are able to reproducibly generate accurate quantitative data. However, SRM and PRM workflows show higher accuracy and precision compared to DIA approaches, especially when analyzing analytes at low concentrations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Multiple channel data acquisition system

    DOEpatents

    Crawley, H. Bert; Rosenberg, Eli I.; Meyer, W. Thomas; Gorbics, Mark S.; Thomas, William D.; McKay, Roy L.; Homer, Jr., John F.

    1990-05-22

    A multiple channel data acquisition system for the transfer of large amounts of data from a multiplicity of data channels has a plurality of modules which operate in parallel to convert analog signals to digital data and transfer that data to a communications host via a FASTBUS. Each module has a plurality of submodules which include a front end buffer (FEB) connected to input circuitry having an analog to digital converter with cache memory for each of a plurality of channels. The submodules are interfaced with the FASTBUS via a FASTBUS coupler which controls a module bus and a module memory. The system is triggered to effect rapid parallel data samplings which are stored to the cache memories. The cache memories are uploaded to the FEBs during which zero suppression occurs. The data in the FEBs is reformatted and compressed by a local processor during transfer to the module memory. The FASTBUS coupler is used by the communications host to upload the compressed and formatted data from the module memory. The local processor executes programs which are downloaded to the module memory through the FASTBUS coupler.

  1. Multiple channel data acquisition system

    DOEpatents

    Crawley, H.B.; Rosenberg, E.I.; Meyer, W.T.; Gorbics, M.S.; Thomas, W.D.; McKay, R.L.; Homer, J.F. Jr.

    1990-05-22

    A multiple channel data acquisition system for the transfer of large amounts of data from a multiplicity of data channels has a plurality of modules which operate in parallel to convert analog signals to digital data and transfer that data to a communications host via a FASTBUS. Each module has a plurality of submodules which include a front end buffer (FEB) connected to input circuitry having an analog to digital converter with cache memory for each of a plurality of channels. The submodules are interfaced with the FASTBUS via a FASTBUS coupler which controls a module bus and a module memory. The system is triggered to effect rapid parallel data samplings which are stored to the cache memories. The cache memories are uploaded to the FEBs during which zero suppression occurs. The data in the FEBs is reformatted and compressed by a local processor during transfer to the module memory. The FASTBUS coupler is used by the communications host to upload the compressed and formatted data from the module memory. The local processor executes programs which are downloaded to the module memory through the FASTBUS coupler. 25 figs.

  2. Physics Structure Analysis of Parallel Waves Concept of Physics Teacher Candidate

    NASA Astrophysics Data System (ADS)

    Sarwi, S.; Supardi, K. I.; Linuwih, S.

    2017-04-01

    The aim of this research was to find the parallel structure of wave physics concepts and the factors that influence the formation of parallel conceptions among physics teacher candidates. The method used was qualitative research with a cross-sectional design. The subjects were five third-semester basic physics students and six fifth-semester wave course students. Data were collected using think-aloud protocols and written tests. Quantitative data were analysed with a descriptive percentage technique. The analysis of belief and awareness of answers used an explanatory analysis. Results of the research include: 1) the structure of the concept can be displayed through the illustration of a map containing the theoretical core, supplements to the theory, and phenomena that occur daily; 2) trends of parallel conceptions of wave physics were identified for stationary waves, resonance of sound, and the propagation of transverse electromagnetic waves; 3) the parallel conceptions were influenced by less comprehensive reading of textbooks and by partial understanding of the knowledge forming the structure of the theory.

  3. [QUIPS: quality improvement in postoperative pain management].

    PubMed

    Meissner, Winfried

    2011-01-01

    Despite the availability of high-quality guidelines and advanced pain management techniques, acute postoperative pain management is still far from satisfactory. The QUIPS (Quality Improvement in Postoperative Pain Management) project aims to improve treatment quality by means of standardised data acquisition, analysis of quality and process indicators, and feedback and benchmarking. During a pilot phase funded by the German Ministry of Health (BMG), a total of 12,389 data sets were collected from six participating hospitals. Outcome improved in four of the six hospitals. Process indicators, such as routine pain documentation, were only poorly correlated with outcomes. To date, more than 130 German hospitals use QUIPS as a routine quality management tool. An EC-funded parallel project disseminates the concept internationally. QUIPS demonstrates that patient-reported outcomes in postoperative pain management can be benchmarked in routine clinical practice. Quality improvement initiatives should use outcome instead of structural and process parameters. The concept is transferable to other fields of medicine. Copyright © 2011. Published by Elsevier GmbH.

  4. 3D-resolved fluorescence and phosphorescence lifetime imaging using temporal focusing wide-field two-photon excitation

    PubMed Central

    Choi, Heejin; Tzeranis, Dimitrios S.; Cha, Jae Won; Clémenceau, Philippe; de Jong, Sander J. G.; van Geest, Lambertus K.; Moon, Joong Ho; Yannas, Ioannis V.; So, Peter T. C.

    2012-01-01

    Fluorescence and phosphorescence lifetime imaging are powerful techniques for studying intracellular protein interactions and for diagnosing tissue pathophysiology. While lifetime-resolved microscopy has long been in the repertoire of the biophotonics community, current implementations fall short in terms of simultaneously providing 3D resolution, high throughput, and good tissue penetration. This report describes a new, highly efficient lifetime-resolved imaging method that combines temporal focusing wide-field multiphoton excitation with simultaneous acquisition of lifetime information in the frequency domain using a nanosecond gated imager from a 3D-resolved plane. This approach is scalable, allowing fast volumetric imaging limited only by the available laser peak power. The accuracy and performance of the proposed method are demonstrated in several imaging studies important for understanding peripheral nerve regeneration processes. Most importantly, the parallelism of this approach may enhance the imaging speed of long-lifetime processes such as phosphorescence by several orders of magnitude. PMID:23187477
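
    For context, in frequency-domain lifetime imaging the lifetime of a single-exponential emitter is commonly recovered from the measured phase shift phi and modulation depth m of the emission relative to the excitation at modulation frequency f. These standard relations are generic to the frequency-domain method and are not specific to the gated imager described above:

      \tau_{\phi} = \frac{\tan\phi}{2\pi f}, \qquad
      \tau_{m} = \frac{1}{2\pi f}\sqrt{\frac{1}{m^{2}} - 1}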

  5. Analysis on the use of Multi-Sequence MRI Series for Segmentation of Abdominal Organs

    NASA Astrophysics Data System (ADS)

    Selver, M. A.; Selvi, E.; Kavur, E.; Dicle, O.

    2015-01-01

    Segmentation of abdominal organs from MRI data sets is a challenging task due to various limitations and artefacts. During routine clinical practice, radiologists use multiple MR sequences in order to analyze different anatomical properties. These sequences have different characteristics in terms of acquisition parameters (such as contrast mechanisms and pulse sequence designs) and image properties (such as pixel spacing, slice thicknesses and dynamic range). For a complete understanding of the data, computational techniques should combine the information coming from these various MRI sequences. These sequences are not acquired in parallel but in a sequential manner (one after another). Therefore, patient movements and respiratory motion change the position and shape of the abdominal organs. In this study, the extent of these effects is measured using three different symmetric surface distance metrics applied to three-dimensional data acquired from various MRI sequences. The results are compared to intra- and inter-observer differences, and discussions on the use of multiple MRI sequences for segmentation and on the necessity of registration are presented.
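
    One common way to quantify the motion effects described above is the average symmetric surface distance between organ surfaces segmented from two sequences. The sketch below assumes the surfaces are available as point clouds in millimetres and uses a KD-tree; it is a generic implementation of such a metric, not necessarily the exact one used in the study.

      import numpy as np
      from scipy.spatial import cKDTree

      def average_symmetric_surface_distance(surf_a, surf_b):
          """Mean of nearest-neighbour distances from A to B and from B to A.

          surf_a, surf_b : (N, 3) arrays of surface points in millimetres.
          """
          d_ab, _ = cKDTree(surf_b).query(surf_a)   # A -> B distances
          d_ba, _ = cKDTree(surf_a).query(surf_b)   # B -> A distances
          return (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))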

  6. Turboprop+: enhanced Turboprop diffusion-weighted imaging with a new phase correction.

    PubMed

    Lee, Chu-Yu; Li, Zhiqiang; Pipe, James G; Debbins, Josef P

    2013-08-01

    Faster periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) diffusion-weighted imaging acquisitions, such as Turboprop and X-prop, remain subject to phase errors inherent to a gradient echo readout, which ultimately limits the applied turbo factor (number of gradient echoes between each pair of radiofrequency refocusing pulses) and, thus, scan time reductions. This study introduces a new phase correction to Turboprop, called Turboprop+. This technique employs calibration blades, which generate 2-D phase error maps and are rotated in accordance with the data blades, to correct phase errors arising from off-resonance and system imperfections. The results demonstrate that with a small increase in scan time for collecting calibration blades, Turboprop+ had a superior immunity to the off-resonance-related artifacts when compared to standard Turboprop and recently proposed X-prop with the high turbo factor (turbo factor = 7). Thus, low specific absorption rate and short scan time can be achieved in Turboprop+ using a high turbo factor, whereas off-resonance related artifacts are minimized. © 2012 Wiley Periodicals, Inc.

  7. Compaction trends of full stiffness tensor and fluid permeability in artificial shales

    NASA Astrophysics Data System (ADS)

    Beloborodov, Roman; Pervukhina, Marina; Lebedev, Maxim

    2018-03-01

    We present a methodology and describe a set-up that allows simultaneous acquisition of all five elastic coefficients of a transversely isotropic (TI) medium and its permeability in the direction parallel to the symmetry axis during mechanical compaction experiments. We apply the approach to synthetic shale samples and investigate the role of composition and applied stress on their elastic and transport properties. Compaction trends for the five elastic coefficients that fully characterize TI anisotropy of artificial shales are obtained for a porosity range from 40 per cent to 15 per cent. A linear increase of elastic coefficients with decreasing porosity is observed. The permeability acquired with the pressure-oscillation technique exhibits exponential decrease with decreasing porosity. Strong correlations are observed between an axial fluid permeability and seismic attributes, namely, VP/VS ratio and acoustic impedance, measured in the same direction. These correlations might be used to derive permeability of shales from seismic data given that their mineralogical composition is known.

  8. Three-dimensional laser microvision.

    PubMed

    Shimotahira, H; Iizuka, K; Chu, S C; Wah, C; Costen, F; Yoshikuni, Y

    2001-04-10

    A three-dimensional (3-D) optical imaging system offering high resolution in all three dimensions, requiring minimum manipulation and capable of real-time operation, is presented. The system derives its capabilities from use of the superstructure grating laser source in the implementation of a laser step frequency radar for depth information acquisition. A synthetic aperture radar technique was also used to further enhance its lateral resolution as well as extend the depth of focus. High-speed operation was made possible by a dual computer system consisting of a host and a remote microcomputer supported by a dual-channel Small Computer System Interface parallel data transfer system. The system is capable of operating near real time. The 3-D display of a tunneling diode, a microwave integrated circuit, and a see-through image taken by the system operating near real time are included. The depth resolution is 40 μm; lateral resolution with a synthetic aperture approach is a fraction of a micrometer and that without it is approximately 10 μm.
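
    In a step-frequency system the depth (range) resolution is set by the total swept bandwidth, roughly delta_z = c / (2 B). The snippet below simply inverts this relation to show the order of magnitude of optical sweep implied by a 40 μm depth resolution; the relation and numbers are illustrative and not taken from the paper's system parameters.

      C = 3.0e8  # speed of light in vacuum, m/s

      def bandwidth_for_resolution(delta_z_m):
          """Swept bandwidth (Hz) needed for a given range resolution (m), dz = c / (2B)."""
          return C / (2.0 * delta_z_m)

      print(bandwidth_for_resolution(40e-6) / 1e12)  # ~3.75 THz of optical sweep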

  9. CMOS Rad-Hard Front-End Electronics for Precise Sensors Measurements

    NASA Astrophysics Data System (ADS)

    Sordo-Ibáñez, Samuel; Piñero-García, Blanca; Muñoz-Díaz, Manuel; Ragel-Morales, Antonio; Ceballos-Cáceres, Joaquín; Carranza-González, Luis; Espejo-Meana, Servando; Arias-Drake, Alberto; Ramos-Martos, Juan; Mora-Gutiérrez, José Miguel; Lagos-Florido, Miguel Angel

    2016-08-01

    This paper reports a single-chip solution for the implementation of radiation-tolerant CMOS front-end electronics (FEE) for applications requiring the acquisition of base-band sensor signals. The FEE has been designed in a 0.35μm CMOS process, and implements a set of parallel conversion channels with high levels of configurability to adapt the resolution, conversion rate, and dynamic input range to the required application. Each conversion channel has been designed with a fully-differential implementation of a configurable-gain instrumentation amplifier, followed by a likewise configurable dual-slope ADC (DS ADC) with up to 16 bits. The ASIC also incorporates precise thermal monitoring, sensor conditioning and error detection functionalities to ensure proper operation in extreme environments. Experimental results confirm that the proposed topologies, in conjunction with the applied radiation-hardening techniques, are reliable enough to be used without loss of performance in environments with an extended temperature range (between -25 and 125 °C) and a total dose beyond 300 krad.

  10. Face Recognition in Humans and Machines

    NASA Astrophysics Data System (ADS)

    O'Toole, Alice; Tistarelli, Massimo

    The study of human face recognition by psychologists and neuroscientists has run parallel to the development of automatic face recognition technologies by computer scientists and engineers. In both cases, there are analogous steps of data acquisition, image processing, and the formation of representations that can support the complex and diverse tasks we accomplish with faces. These processes can be understood and compared in the context of their neural and computational implementations. In this chapter, we present the essential elements of face recognition by humans and machines, taking a perspective that spans psychological, neural, and computational approaches. From the human side, we overview the methods and techniques used in the neurobiology of face recognition, the underlying neural architecture of the system, the role of visual attention, and the nature of the representations that emerges. From the computational side, we discuss face recognition technologies and the strategies they use to overcome challenges to robust operation over viewing parameters. Finally, we conclude the chapter with a look at some recent studies that compare human and machine performances at face recognition.

  11. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution time for 500 realizations is reduced to 3% of that of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
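
    The batch execution of independent realizations maps naturally onto any task-parallel framework. The fragment below is a generic Python analogue of the workflow described above; run_realization is a hypothetical placeholder standing in for stochastic field generation plus a MODFLOW run, and it is not part of the Java Parallel Processing Framework used by the authors.

      from concurrent.futures import ProcessPoolExecutor

      def run_realization(seed):
          """Placeholder: generate one stochastic conductivity field, run the flow
          model, and return the simulated capture zone for this realization."""
          ...

      def run_batch(n_realizations=500, n_workers=50):
          # Each realization is independent, so they can be farmed out to workers.
          with ProcessPoolExecutor(max_workers=n_workers) as pool:
              return list(pool.map(run_realization, range(n_realizations)))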

  12. Laser-induced photo emission detection: data acquisition based on light intensity counting

    NASA Astrophysics Data System (ADS)

    Yulianto, N.; Yudasari, N.; Putri, K. Y.

    2017-04-01

    Laser-Induced Breakdown Detection (LIBD) is one of the quantification techniques for colloids. There are two ways of detection in LIBD: optical detection and acoustic detection. LIBD is based on the detection of plasma emission due to the interaction between a particle and the laser beam. In this research, the change in light intensity during plasma formation was detected by a photodiode sensor. A photo emission data acquisition system was built to collect these signals and transform them into digital counts. The real-time system used a National Instruments DAQ 6009 data acquisition device and LabVIEW software. The system was tested on distilled water and tap water samples. The result showed 99.8% accuracy for the counting technique in comparison to acoustic detection at a sample rate of 10 Hz; thus the acquisition system can be applied as an alternative to the existing LIBD acquisition system.
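
    The counting technique amounts to thresholding the photodiode signal on each laser shot and reporting the fraction of shots that produced a plasma (breakdown) event. A minimal sketch, assuming the DAQ delivers one intensity value per shot; the threshold value and units are hypothetical.

      import numpy as np

      def breakdown_probability(intensity_per_shot, threshold=0.5):
          """Fraction of laser shots whose photodiode intensity exceeds the
          plasma-emission threshold (assumed units and threshold)."""
          shots = np.asarray(intensity_per_shot, dtype=float)
          events = np.count_nonzero(shots > threshold)
          return events / shots.size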

  13. Metrological characterization of X-ray diffraction methods at different acquisition geometries for determination of crystallite size in nano-scale materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uvarov, Vladimir, E-mail: vladimiru@savion.huji.ac.il; Popov, Inna

    2013-11-15

    Crystallite size values were determined by X-ray diffraction methods for 183 powder samples. The tested size range was from a few to several hundred nanometers. Crystallite size was calculated with direct use of the Scherrer equation, the Williamson–Hall method and the Rietveld procedure via a series of commercial and free software packages. The results were statistically treated to estimate the significance of the difference in size resulting from these methods. We also estimated the effect of acquisition conditions (Bragg–Brentano and parallel-beam geometry, step size, counting time) and data processing on the calculated crystallite size values. On the basis of the obtained results it is possible to conclude that direct use of the Scherrer equation, the Williamson–Hall method and the Rietveld refinement employed by a series of software packages (EVA, PCW and TOPAS, respectively) yield very close results for crystallite sizes less than 60 nm for parallel-beam geometry and less than 100 nm for Bragg–Brentano geometry. However, we found that although the differences between the crystallite sizes calculated by the various methods are small in absolute value, they are statistically significant in some cases. The crystallite size values determined from XRD were compared with those obtained by imaging in transmission (TEM) and scanning electron microscopes (SEM). A good correlation in size was found only for crystallites smaller than 50-60 nm. Highlights: • The crystallite sizes for 183 nanopowders were calculated using different XRD methods • The results were subjected to statistical treatment • Results obtained with Bragg–Brentano and parallel-beam geometries were compared • The influence of XRD pattern acquisition conditions on the results was estimated • Crystallite sizes calculated from XRD were compared with those obtained by TEM and SEM.
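
    For reference, the direct Scherrer calculation mentioned above relates crystallite size to the broadening of a single diffraction peak. The helper below assumes Cu K-alpha radiation, a shape factor K = 0.9, and an instrument-corrected FWHM; these defaults are assumptions, not values from the study.

      import math

      def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
          """Crystallite size (nm) from the Scherrer equation D = K*lambda/(beta*cos(theta)).

          fwhm_deg      : peak FWHM in degrees 2-theta, instrument-corrected
          two_theta_deg : peak position in degrees 2-theta
          """
          beta = math.radians(fwhm_deg)             # peak breadth in radians
          theta = math.radians(two_theta_deg / 2)
          return k * wavelength_nm / (beta * math.cos(theta))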

  14. Highly accelerated cardiac cine parallel MRI using low-rank matrix completion and partial separability model

    NASA Astrophysics Data System (ADS)

    Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie

    2016-05-01

    This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and the partial separability (PS) model. In data acquisition, k-space data are moderately randomly undersampled at the central k-space navigator locations, but highly undersampled in the outer k-space for each temporal frame. In reconstruction, the navigator data are first recovered from the undersampled data using structured low-rank matrix completion. After all the unacquired navigator data are estimated, the partial separability model is used to obtain partial k-t data. Parallel imaging is then used to reconstruct the entire dynamic image series from the highly undersampled data. The proposed method has been shown to achieve high-quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, in cases where the conventional PS method fails.
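
    The partial separability model underlying this reconstruction represents the dynamic image as a low-rank combination of a few temporal basis functions, which makes the Casorati matrix of the image series low rank:

      x(\mathbf{r}, t) = \sum_{l=1}^{L} u_l(\mathbf{r})\, v_l(t), \qquad
      \operatorname{rank}(\mathbf{C}) \le L, \quad \mathbf{C}_{ij} = x(\mathbf{r}_i, t_j)

    Here the temporal functions v_l(t) are estimated from the (completed) navigator data, the spatial coefficients u_l(r) are then fitted to the undersampled outer k-space, and L is the model order.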

  15. Reengineering the Acquisition/Procurement Process: A Methodology for Requirements Collection

    NASA Technical Reports Server (NTRS)

    Taylor, Randall; Vanek, Thomas

    2011-01-01

    This paper captures the systematic approach taken by JPL's Acquisition Reengineering Project team, the methodology used, challenges faced, and lessons learned. It provides pragmatic "how-to" techniques and tools for collecting requirements and for identifying areas of improvement in an acquisition/procurement process or other core process of interest.

  16. 76 FR 35424 - Information Collection Requirement; Defense Federal Acquisition Regulation Supplement...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-17

    ...; Defense Federal Acquisition Regulation Supplement; Acquisition of Information Technology AGENCY: Defense... techniques or other forms of information technology. The Office of Management and Budget (OMB) has approved... Information Technology, and the associated clauses at DFARS 252.239-7000 and 252.239-7006; OMB Control Number...

  17. Double dissociation of pharmacologically induced deficits in visual recognition and visual discrimination learning

    PubMed Central

    Turchi, Janita; Buffalari, Deanne; Mishkin, Mortimer

    2008-01-01

    Monkeys trained in either one-trial recognition at 8- to 10-min delays or multi-trial discrimination habits with 24-h intertrial intervals received systemic cholinergic and dopaminergic antagonists, scopolamine and haloperidol, respectively, in separate sessions. Recognition memory was impaired markedly by scopolamine but not at all by haloperidol, whereas habit formation was impaired markedly by haloperidol but only minimally by scopolamine. These differential drug effects point to differences in synaptic modification induced by the two neuromodulators that parallel the contrasting properties of the two types of learning, namely, fast acquisition but weak retention of memories versus slow acquisition but durable retention of habits. PMID:18685146

  18. Double dissociation of pharmacologically induced deficits in visual recognition and visual discrimination learning.

    PubMed

    Turchi, Janita; Buffalari, Deanne; Mishkin, Mortimer

    2008-08-01

    Monkeys trained in either one-trial recognition at 8- to 10-min delays or multi-trial discrimination habits with 24-h intertrial intervals received systemic cholinergic and dopaminergic antagonists, scopolamine and haloperidol, respectively, in separate sessions. Recognition memory was impaired markedly by scopolamine but not at all by haloperidol, whereas habit formation was impaired markedly by haloperidol but only minimally by scopolamine. These differential drug effects point to differences in synaptic modification induced by the two neuromodulators that parallel the contrasting properties of the two types of learning, namely, fast acquisition but weak retention of memories versus slow acquisition but durable retention of habits.

  19. Data Acquisition System for Silicon Ultra Fast Cameras for Electron and Gamma Sources in Medical Applications (sucima Imager)

    NASA Astrophysics Data System (ADS)

    Czermak, A.; Zalewska, A.; Dulny, B.; Sowicki, B.; Jastrząb, M.; Nowak, L.

    2004-07-01

    The need for real-time monitoring of hadrontherapy beam intensity and profile, as well as the requirements of fast dosimetry using Monolithic Active Pixel Sensors (MAPS), led the SUCIMA collaboration to design a unique Data Acquisition System (DAQ SUCIMA Imager). The DAQ system has been developed on one of the most advanced Xilinx Field Programmable Gate Array chips, the Virtex-II. A dedicated multifunctional electronic board for capturing the detector's analogue signals, processing them digitally in parallel, compressing the data and transmitting it through a high-speed USB 2.0 port has been prototyped and tested.

  20. Multiplex mass spectrometry imaging for latent fingerprints.

    PubMed

    Yagnik, Gargey B; Korte, Andrew R; Lee, Young Jin

    2013-01-01

    We have previously developed in-parallel data acquisition of orbitrap mass spectrometry (MS) and ion trap MS and/or MS/MS scans for matrix-assisted laser desorption/ionization MS imaging (MSI) to obtain rich chemical information in less data acquisition time. In the present study, we demonstrate a novel application of this multiplex MSI methodology for latent fingerprints. In a single imaging experiment, we could obtain chemical images of various endogenous and exogenous compounds, along with simultaneous MS/MS images of a few selected compounds. This work confirms the usefulness of multiplex MSI to explore chemical markers when the sample specimen is very limited. Copyright © 2013 John Wiley & Sons, Ltd.

  1. Videogrammetric Model Deformation Measurement Technique

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Liu, Tian-Shu

    2001-01-01

    The theory, methods, and applications of the videogrammetric model deformation (VMD) measurement technique used at NASA for wind tunnel testing are presented. The VMD technique, based on non-topographic photogrammetry, can determine static and dynamic aeroelastic deformation and attitude of a wind-tunnel model. Hardware of the system includes a video-rate CCD camera, a computer with an image acquisition frame grabber board, illumination lights, and retroreflective or painted targets on a wind tunnel model. Custom software includes routines for image acquisition, target-tracking/identification, target centroid calculation, camera calibration, and deformation calculations. Applications of the VMD technique at five large NASA wind tunnels are discussed.

  2. Fast-Acquisition/Weak-Signal-Tracking GPS Receiver for HEO

    NASA Technical Reports Server (NTRS)

    Winternitz, Luke; Boegner, Greg; Sirotzky, Steve

    2004-01-01

    A report discusses the technical background and design of the Navigator Global Positioning System (GPS) receiver, a radiation-hardened receiver intended for use aboard spacecraft. Navigator is capable of weak-signal acquisition and tracking as well as much faster acquisition of strong or weak signals with no a priori knowledge or external aiding. Weak-signal acquisition and tracking enables GPS use in high Earth orbits (HEO), and fast acquisition allows the receiver to remain without power until needed in any orbit. Signal acquisition and signal tracking are, respectively, the processes of finding and demodulating a signal. Acquisition is the more computationally difficult process. Previous GPS receivers employ the method of sequentially searching the two-dimensional signal parameter space (code phase and Doppler). Navigator exploits properties of the Fourier transform in a massively parallel search for the GPS signal. This method results in far faster acquisition times [in the lab, 12 GPS satellites have been acquired with no a priori knowledge in a Low-Earth-Orbit (LEO) scenario in less than one second]. Modeling has shown that Navigator will be capable of acquiring signals down to 25 dB-Hz, appropriate for HEO missions. Navigator is built using the radiation-hardened ColdFire microprocessor and houses the most computationally intense functions in dedicated field-programmable gate arrays. The high performance of the algorithm and of the receiver as a whole are made possible by optimizing computational efficiency and carefully weighing tradeoffs among the sampling rate, data format, and data-path bit width.
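
    The "massively parallel search" exploits the fact that circular correlation over all code phases can be evaluated at once with FFTs, so only the Doppler dimension is searched serially. The sketch below illustrates that idea in Python; the sampling rate, the Doppler grid, and the absence of a real C/A code generator are simplifying assumptions, and this is not the Navigator flight code.

      import numpy as np

      def acquire(samples, ca_code, fs, doppler_bins):
          """FFT-based search: returns (best_doppler_Hz, best_code_phase_sample, peak)."""
          n = len(ca_code)
          code_fft = np.conj(np.fft.fft(ca_code))
          t = np.arange(n) / fs
          best = (None, None, 0.0)
          for fd in doppler_bins:                                      # serial search in Doppler only
              wiped = samples[:n] * np.exp(-2j * np.pi * fd * t)       # carrier wipe-off
              corr = np.abs(np.fft.ifft(np.fft.fft(wiped) * code_fft)) # all code phases at once
              k = int(np.argmax(corr))
              if corr[k] > best[2]:
                  best = (fd, k, float(corr[k]))
          return best

      # usage sketch: acquire(iq, local_code, fs=4.092e6, doppler_bins=np.arange(-5000, 5001, 500))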

  3. Rapid viscosity measurements of powdered thermosetting resins

    NASA Technical Reports Server (NTRS)

    Price, H. L.; Burks, H. D.; Dalal, S. K.

    1978-01-01

    A rapid and inexpensive method of obtaining processing-related data on powdered thermosetting resins has been investigated. The method involved viscosity measurements obtained with a small-specimen (less than 100 mg) parallel plate plastometer. A data acquisition and reduction system was developed which provided a value of viscosity and strain rate at about 12- to 13-second intervals during a test. The effects of specimen compaction pressure and of reducing the adhesion between the specimen and the parallel plates were examined. The plastometer was used to measure some processing-related viscosity changes of an addition polyimide resin, including changes caused by pre-test heat treatment, test temperature, and strain rate.
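
    Parallel-plate (squeeze-flow) plastometer data are often reduced with the classical Stefan relation for a Newtonian fluid between circular plates, F = 3*pi*eta*R^4*(-dh/dt)/(2*h^3). The helper below simply inverts that textbook relation; the actual data reduction used in the study may differ (for example for constant specimen volume or non-Newtonian behaviour).

      import math

      def stefan_viscosity(force_N, radius_m, gap_m, gap_rate_m_per_s):
          """Apparent Newtonian viscosity (Pa.s) from squeeze flow between parallel
          plates: F = 3*pi*eta*R**4*(-dh/dt) / (2*h**3). Assumes no slip at the plates."""
          return 2.0 * force_N * gap_m**3 / (3.0 * math.pi * radius_m**4 * (-gap_rate_m_per_s))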

  4. Advances in Parallelization for Large Scale Oct-Tree Mesh Generation

    NASA Technical Reports Server (NTRS)

    O'Connell, Matthew; Karman, Steve L.

    2015-01-01

    Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.

  5. a Spatiotemporal Aggregation Query Method Using Multi-Thread Parallel Technique Based on Regional Division

    NASA Astrophysics Data System (ADS)

    Liao, S.; Chen, L.; Li, J.; Xiong, W.; Wu, Q.

    2015-07-01

    Existing spatiotemporal databases support spatiotemporal aggregation queries over massive moving-object datasets. Due to the large amounts of data and single-threaded processing, the query speed cannot meet application requirements. Moreover, query efficiency is more sensitive to spatial variation than to temporal variation. In this paper, we propose a spatiotemporal aggregation query method using a multi-thread parallel technique based on regional division and implement it on the server. Concretely, we divide the spatiotemporal domain into several spatiotemporal cubes, compute the spatiotemporal aggregation on all cubes using multi-thread parallel processing, and then integrate the query results. Tests and analysis on real datasets show that this method improves query speed significantly.
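
    A minimal sketch of the regional-division idea: partition the spatiotemporal domain into cubes, aggregate each chunk of records in its own worker, then merge the partial results. The cube key and the aggregation (a simple count per cube) are placeholders for the paper's actual query; in CPython a process pool would be substituted for true CPU parallelism.

      from collections import Counter
      from concurrent.futures import ThreadPoolExecutor

      def cube_key(point, cell=1.0, step=3600.0):
          """Map an (x, y, t) observation to a spatiotemporal cube index."""
          x, y, t = point
          return (int(x // cell), int(y // cell), int(t // step))

      def aggregate_chunk(points):
          """Aggregate one chunk of moving-object records (here: simple counts per cube)."""
          return Counter(cube_key(p) for p in points)

      def parallel_aggregate(points, n_threads=8):
          chunks = [points[i::n_threads] for i in range(n_threads)]
          with ThreadPoolExecutor(max_workers=n_threads) as pool:
              partials = pool.map(aggregate_chunk, chunks)
          total = Counter()
          for part in partials:
              total.update(part)          # merge the per-thread results
          return total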

  6. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  7. Myocardial tagging by Cardiovascular Magnetic Resonance: evolution of techniques--pulse sequences, analysis algorithms, and applications

    PubMed Central

    2011-01-01

    Cardiovascular magnetic resonance (CMR) tagging has been established as an essential technique for measuring regional myocardial function. It allows quantification of local intramyocardial motion measures, e.g. strain and strain rate. The invention of CMR tagging came in the late eighties, when the technique allowed, for the first time, visualization of transmural myocardial movement without having to implant physical markers. This new idea opened the door for a series of developments and improvements that continue up to the present time. Different tagging techniques are currently available that are more extensive, improved, and sophisticated than they were twenty years ago. Each of these techniques has different versions for improved resolution, signal-to-noise ratio (SNR), scan time, anatomical coverage, three-dimensional capability, and image quality. The tagging techniques covered in this article can be broadly divided into two main categories: 1) Basic techniques, which include magnetization saturation, spatial modulation of magnetization (SPAMM), delay alternating with nutations for tailored excitation (DANTE), and complementary SPAMM (CSPAMM); and 2) Advanced techniques, which include harmonic phase (HARP), displacement encoding with stimulated echoes (DENSE), and strain encoding (SENC). Although most of these techniques were developed by separate groups and evolved from different backgrounds, they are in fact closely related to each other, and they can be interpreted from more than one perspective. Some of these techniques even followed parallel paths of development, as illustrated in the article. As each technique has its own advantages, some efforts have been made to combine different techniques for improved image quality or composite information acquisition. In this review, different developments in pulse sequences and related image processing techniques are described along with the necessities that led to their invention, which makes this article easy to read and the covered techniques easy to follow. Major studies that applied CMR tagging for studying myocardial mechanics are also summarized. Finally, the current article includes a plethora of ideas and techniques with over 300 references that motivate the reader to think about the future of CMR tagging. PMID:21798021

  8. "One-Stop Shop": Free-Breathing Dynamic Contrast-Enhanced Magnetic Resonance Imaging of the Kidney Using Iterative Reconstruction and Continuous Golden-Angle Radial Sampling.

    PubMed

    Riffel, Philipp; Zoellner, Frank G; Budjan, Johannes; Grimm, Robert; Block, Tobias K; Schoenberg, Stefan O; Hausmann, Daniel

    2016-11-01

    The purpose of the present study was to evaluate a recently introduced technique for free-breathing dynamic contrast-enhanced renal magnetic resonance imaging (MRI) applying a combination of radial k-space sampling, parallel imaging, and compressed sensing. The technique allows retrospective reconstruction of 2 motion-suppressed sets of images from the same acquisition: one with lower temporal resolution but improved image quality for subjective image analysis, and one with high temporal resolution for quantitative perfusion analysis. In this study, 25 patients underwent a kidney examination, including a prototypical fat-suppressed, golden-angle radial stack-of-stars T1-weighted 3-dimensional spoiled gradient-echo examination (GRASP) performed after contrast agent administration during free breathing. Images were reconstructed at temporal resolutions of 55 spokes per frame (6.2 seconds) and 13 spokes per frame (1.5 seconds). The GRASP images were evaluated by 2 blinded radiologists. First, the reconstructions with low temporal resolution underwent subjective image analysis: the radiologists assessed the best arterial phase and the best renal phase and rated image quality score for each patient on a 5-point Likert-type scale. In addition, the diagnostic confidence was rated according to a 3-point Likert-type scale. Similarly, respiratory motion artifacts and streak artifacts were rated according to a 3-point Likert-type scale. Then, the reconstructions with high temporal resolution were analyzed with a voxel-by-voxel deconvolution approach to determine the renal plasma flow, and the results were compared with values reported in previous literature. Reader 1 and reader 2 rated the overall image quality score for the best arterial phase and the best renal phase with a median image quality score of 4 (good image quality) for both phases, respectively. A high diagnostic confidence (median score of 3) was observed. There were no respiratory motion artifacts in any of the patients. Streak artifacts were present in all of the patients, but did not compromise diagnostic image quality. The estimated renal plasma flow was slightly higher (295 ± 78 mL/100 mL per minute) than reported in previous MRI-based studies, but also closer to the physiologically expected value. Dynamic, motion-suppressed contrast-enhanced renal MRI can be performed in high diagnostic quality during free breathing using a combination of golden-angle radial sampling, parallel imaging, and compressed sensing. Both morphologic and quantitative functional information can be acquired within a single acquisition.
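
    In golden-angle stack-of-stars sampling each successive spoke is rotated by about 111.25 degrees, so any consecutive run of spokes covers k-space nearly uniformly and can be regrouped retrospectively into frames of arbitrary length (55 or 13 spokes above). The short sketch below only illustrates that regrouping; the total spoke count in the usage comment is hypothetical, while the frame sizes are those quoted in the abstract.

      import numpy as np

      GOLDEN_ANGLE_DEG = 180.0 * (np.sqrt(5.0) - 1.0) / 2.0   # ~111.246 degrees

      def spoke_angles(n_spokes):
          """In-plane rotation angle of each radial spoke, in degrees."""
          return (np.arange(n_spokes) * GOLDEN_ANGLE_DEG) % 180.0

      def group_into_frames(n_spokes, spokes_per_frame):
          """Retrospectively assign consecutive spokes to temporal frames."""
          angles = spoke_angles(n_spokes)
          n_frames = n_spokes // spokes_per_frame
          return [angles[i * spokes_per_frame:(i + 1) * spokes_per_frame]
                  for i in range(n_frames)]

      # e.g. group_into_frames(1100, 55) for image quality, group_into_frames(1100, 13) for perfusion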

  9. Localized high-resolution DTI of the human midbrain using single-shot EPI, parallel imaging, and outer-volume suppression at 7 T

    PubMed Central

    Wargo, Christopher J.; Gore, John C.

    2013-01-01

    Localized high-resolution diffusion tensor images (DTI) from the midbrain were obtained using reduced field-of-view (rFOV) methods combined with SENSE parallel imaging and single-shot echo planar (EPI) acquisitions at 7 T. This combination aimed to diminish sensitivities of DTI to motion, susceptibility variations, and EPI artifacts at ultra-high field. Outer-volume suppression (OVS) was applied in DTI acquisitions at 2- and 1-mm2 resolutions, b=1000 s/mm2, and six diffusion directions, resulting in scans of 7- and 14-min durations. Mean apparent diffusion coefficient (ADC) and fractional anisotropy (FA) values were measured in various fiber tract locations at the two resolutions and compared. Geometric distortion and signal-to-noise ratio (SNR) were additionally measured and compared for reduced-FOV and full-FOV DTI scans. Up to an eight-fold data reduction was achieved using DTI-OVS with SENSE at 1 mm2, and geometric distortion was halved. The localization of fiber tracts was improved, enabling targeted FA and ADC measurements. Significant differences in diffusion properties were observed between resolutions for a number of regions suggesting that FA values are impacted by partial volume effects even at a 2-mm2 resolution. The combined SENSE DTI-OVS approach allows large reductions in DTI data acquisition and provides improved quality for high-resolution diffusion studies of the human brain. PMID:23541390

  10. A Data Acquisition Parallel Bus for Wind Tunnels at ARL (Aeronautical Research Laboratory).

    DTIC Science & Technology

    1989-08-01

    I’TV F.E AROPY62 ARL-FLIGHT-MECH-TM-412 AR-005-629 NN 0 ( N1 DEPARTMENT OF DEFENCE I DEFENCE SCIENCE AND TECHNOLOGY ORGANISATION AERONAUTICAL RESEARCH...Library SPARES (10 copies) TOTAL (73 copies) AL~ 140 DEPRTENT OF DEEC P-AGE CLASSIFICATION DOCUMENT CONTROL DATA UNCLASSIFIED PRIVACY MARING 1.. AR

  11. The Acquisition of Pronouns by French Children: A Parallel Study of Production and Comprehension

    ERIC Educational Resources Information Center

    Zesiger, Pascal; Zesiger, Laurence Chillier; Arabatzi, Marina; Baranzini, Lara; Cronel-Ohayon, Stephany; Franck, Julie; Frauenfelder, Ulrich Hans; Hamann, Cornelia; Rizzi, Luigi

    2010-01-01

    This study examines syntactic and morphological aspects of the production and comprehension of pronouns by 99 typically developing French-speaking children aged 3 years, 5 months to 6 years, 5 months. A fine structural analysis of subject, object, and reflexive clitics suggests that whereas the object clitic chain crosses the subject chain, the…

  12. Neural Changes Associated with Nonspeech Auditory Category Learning Parallel Those of Speech Category Acquisition

    ERIC Educational Resources Information Center

    Liu, Ran; Holt, Lori L.

    2011-01-01

    Native language experience plays a critical role in shaping speech categorization, but the exact mechanisms by which it does so are not well understood. Investigating category learning of nonspeech sounds with which listeners have no prior experience allows their experience to be systematically controlled in a way that is impossible to achieve by…

  13. The Effect of Science Activities on Concept Acquisition of Age 5-6 Children Groups

    ERIC Educational Resources Information Center

    Dogru, Mustafa; Seker, Fatih

    2012-01-01

    The present research aims to determine the effect of science activities on the concept development of preschool children in the 5-6 age group. In line with the research objective, a qualitative research design was selected. The study group comprises 48 children aged 5-6 attending a private education institution in the city…

  14. Closely Spaced Independent Parallel Runway Simulation.

    DTIC Science & Technology

    1984-10-01

    ... facility consists of the Central Computer Facility, the Controller Laboratory, and the Simulator Pilot Complex. The Central Computer Facility consists of a group of mainframes, minicomputers, and associated peripherals which host the operational and data acquisition ... in the Controller Laboratory and convert their verbal directives into a keyboard entry which is transmitted to the Central Computer Complex, where ...

  15. Kalman Filter Tracking on Parallel Architectures

    NASA Astrophysics Data System (ADS)

    Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; McDermott, Kevin; Riley, Daniel; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi

    2016-11-01

    Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on a Kalman filter approach. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. Given the utility of the Kalman filter in track finding, we have begun to port these algorithms to parallel architectures, namely Intel Xeon and Xeon Phi. We report here on our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a simplified experimental environment.
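
    For orientation, the arithmetic that dominates Kalman-filter track building is a predict/update pair applied once per candidate track and detector layer; in the parallelized version the same small matrix operations are applied to many track candidates simultaneously by the vector units mentioned above. The NumPy sketch below shows a single generic predict/update step with illustrative matrix shapes; it is not the authors' vectorized implementation.

      import numpy as np

      def kf_predict(x, P, F, Q):
          """Propagate state x (n,) and covariance P (n,n) to the next detector layer."""
          return F @ x, F @ P @ F.T + Q

      def kf_update(x, P, z, H, R):
          """Update the predicted state with a measured hit z."""
          S = H @ P @ H.T + R                    # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
          x_new = x + K @ (z - H @ x)
          P_new = (np.eye(len(x)) - K @ H) @ P
          return x_new, P_new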

  16. Accelerating Sequences in the Presence of Metal by Exploiting the Spatial Distribution of Off-Resonance

    PubMed Central

    Smith, Matthew R.; Artz, Nathan S.; Koch, Kevin M.; Samsonov, Alexey; Reeder, Scott B.

    2014-01-01

    Purpose To demonstrate feasibility of exploiting the spatial distribution of off-resonance surrounding metallic implants for accelerating multispectral imaging techniques. Theory Multispectral imaging (MSI) techniques perform time-consuming independent 3D acquisitions with varying RF frequency offsets to address the extreme off-resonance from metallic implants. Each off-resonance bin provides a unique spatial sensitivity that is analogous to the sensitivity of a receiver coil, and therefore provides a unique opportunity for acceleration. Methods Fully sampled MSI was performed to demonstrate retrospective acceleration. A uniform sampling pattern across off-resonance bins was compared to several adaptive sampling strategies using a total hip replacement phantom. Monte Carlo simulations were performed to compare noise propagation of two of these strategies. With a total knee replacement phantom, positive and negative off-resonance bins were strategically sampled with respect to the B0 field to minimize aliasing. Reconstructions were performed with a parallel imaging framework to demonstrate retrospective acceleration. Results An adaptive sampling scheme dramatically improved reconstruction quality, which was supported by the noise propagation analysis. Independent acceleration of negative and positive off-resonance bins demonstrated reduced overlapping of aliased signal to improve the reconstruction. Conclusion This work presents the feasibility of acceleration in the presence of metal by exploiting the spatial sensitivities of off-resonance bins. PMID:24431210

  17. The fusion of large scale classified side-scan sonar image mosaics.

    PubMed

    Reed, Scott; Tena, Ruiz Ioseba; Capus, Chris; Petillot, Yvan

    2006-07-01

    This paper presents a unified framework for the creation of classified maps of the seafloor from sonar imagery. Significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed. The techniques described are directly applicable to a range of remote sensing problems. Recent advances in side-scan data correction are incorporated to compensate for the sonar beam pattern and motion of the acquisition platform. The corrected images are segmented using pixel-based textural features and standard classifiers. In parallel, the navigation of the sonar device is processed using Kalman filtering techniques. A simultaneous localization and mapping framework is adopted to improve the navigation accuracy and produce georeferenced mosaics of the segmented side-scan data. These are fused within a Markovian framework and two fusion models are presented. The first uses a voting scheme regularized by an isotropic Markov random field and is applicable when the reliability of each information source is unknown. The Markov model is also used to inpaint regions where no final classification decision can be reached using pixel level fusion. The second model formally introduces the reliability of each information source into a probabilistic model. Evaluation of the two models using both synthetic images and real data from a large scale survey shows significant quantitative and qualitative improvement using the fusion approach.

  18. Compressed Sensing for Body MRI

    PubMed Central

    Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh

    2016-01-01

    The introduction of compressed sensing for increasing imaging speed in MRI has raised significant interest among researchers and clinicians, and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This paper presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notion of sparsity, incoherence, and non-linear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the paper discusses current challenges and future opportunities. PMID:27981664
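
    The reconstruction problem sketched above is usually posed as a regularized least-squares fit to the undersampled multicoil k-space data y, with an l1 penalty on a sparsifying transform enforcing compressibility and the encoding operator E carrying both the undersampled Fourier transform and the coil sensitivities of parallel imaging:

      \hat{x} = \arg\min_{x} \; \tfrac{1}{2}\,\lVert E x - y \rVert_2^2 + \lambda\,\lVert \Psi x \rVert_1

    Here Psi is a sparsifying transform such as temporal differences or wavelets and lambda is the regularization weight; the specific formulation varies between the methods reviewed.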

  19. A unified framework for building high performance DVEs

    NASA Astrophysics Data System (ADS)

    Lei, Kaibin; Ma, Zhixia; Xiong, Hua

    2011-10-01

    A unified framework for integrating PC cluster based parallel rendering with distributed virtual environments (DVEs) is presented in this paper. While various scene graphs have been proposed in DVEs, it is difficult to enable collaboration of different scene graphs. This paper proposes a technique for non-distributed scene graphs with the capability of object and event distribution. With the increase of graphics data, DVEs require more powerful rendering ability. But general scene graphs are inefficient in parallel rendering. The paper also proposes a technique to connect a DVE and a PC cluster based parallel rendering environment. A distributed multi-player video game is developed to show the interaction of different scene graphs and the parallel rendering performance on a large tiled display wall.

  20. Switch for serial or parallel communication networks

    DOEpatents

    Crosette, D.B.

    1994-07-19

    A communication switch apparatus and a method for use in a geographically extensive serial, parallel or hybrid communication network linking a multi-processor or parallel processing system has a very low software processing overhead in order to accommodate random bursts of high-density data. Associated with each processor is a communication switch. A data source and a data destination, a sensor suite or robot for example, may also be associated with a switch. The configuration of the switches in the network is coordinated through a master processor node and depends on the operational phase of the multi-processor network: data acquisition, data processing, and data exchange. The master processor node passes information on the state to be assumed by each switch to the processor node associated with the switch. The processor node then operates a series of multi-state switches internal to each communication switch. The communication switch does not parse and interpret communication protocol and message routing information. During a data acquisition phase, the communication switch couples sensors producing data to the processor node associated with the switch, to a downlink destination on the communications network, or to both. It also may couple an uplink data source to its processor node. During the data exchange phase, the switch couples its processor node or an uplink data source to a downlink destination (which may include a processor node or a robot), or couples an uplink source to its processor node and its processor node to a downlink destination. 9 figs.

  1. Switch for serial or parallel communication networks

    DOEpatents

    Crosette, Dario B.

    1994-01-01

    A communication switch apparatus and a method for use in a geographically extensive serial, parallel or hybrid communication network linking a multi-processor or parallel processing system has a very low software processing overhead in order to accommodate random bursts of high-density data. Associated with each processor is a communication switch. A data source and a data destination, a sensor suite or robot for example, may also be associated with a switch. The configuration of the switches in the network is coordinated through a master processor node and depends on the operational phase of the multi-processor network: data acquisition, data processing, and data exchange. The master processor node passes information on the state to be assumed by each switch to the processor node associated with the switch. The processor node then operates a series of multi-state switches internal to each communication switch. The communication switch does not parse and interpret communication protocol and message routing information. During a data acquisition phase, the communication switch couples sensors producing data to the processor node associated with the switch, to a downlink destination on the communications network, or to both. It also may couple an uplink data source to its processor node. During the data exchange phase, the switch couples its processor node or an uplink data source to a downlink destination (which may include a processor node or a robot), or couples an uplink source to its processor node and its processor node to a downlink destination.

  2. Fast experiments for structure elucidation of small molecules: Hadamard NMR with multiple receivers.

    PubMed

    Gierth, Peter; Codina, Anna; Schumann, Frank; Kovacs, Helena; Kupče, Ēriks

    2015-11-01

    We propose several significant improvements to the PANSY (Parallel NMR SpectroscopY) experiments, PANSY COSY and PANSY-TOCSY. The improved versions of these experiments provide sufficient spectral information for structure elucidation of small organic molecules from just two 2D experiments. The PANSY-TOCSY-Q experiment has been modified to allow simultaneous acquisition of three different types of NMR spectra: 1D C-13 of non-protonated carbon sites, 2D TOCSY, and multiplicity-edited 2D HETCOR. In addition, the J-filtered 2D PANSY-gCOSY experiment records a 2D HH gCOSY spectrum in parallel with a (1)J-filtered HC long-range HETCOR spectrum and offers simplified data processing. In addition to parallel acquisition, further time savings are feasible because of the significantly smaller F1 spectral windows as compared to indirect-detection experiments. Use of cryoprobes and multiple receivers can significantly alleviate the sensitivity issues usually associated with the so-called direct-detection experiments. In cases where experiments are sampling-limited rather than sensitivity-limited, further reduction of experiment time is achieved by using Hadamard encoding. In favorable cases the total recording time for the two PANSY experiments can be reduced to just 40 s. The proposed PANSY experiments provide sufficient information to allow the CMCse software package (Bruker) to solve structures of small organic molecules. Copyright © 2015 John Wiley & Sons, Ltd.

  3. High temporal and high spatial resolution MR angiography (4D-MRA).

    PubMed

    Hadizadeh, D R; Marx, C; Gieseke, J; Schild, H H; Willinek, W A

    2014-09-01

    In the first decade of the twenty-first century, whole-body magnetic resonance scanners with high field strengths (and thus potentially better signal-to-noise ratios) were developed. At the same time, parallel imaging and "echo-sharing" techniques were refined to allow for increasingly high spatial and temporal resolution in dynamic magnetic resonance angiography ("time-resolved" = TR-MRA). This technological progress facilitated tracking the passage of intravenously administered contrast agent boluses as well as the acquisition of volume data sets at high image refresh rates ("4D-MRA"). This opened doors for many new applications in non-invasive vascular imaging, including simultaneous anatomic and functional analysis of many vascular pathologies including arterio-venous malformations. Different methods were established to acquire 4D-MRA using various strategies to acquire k-space trajectories over time in order to optimize imaging according to clinical needs. These include "keyhole"-based techniques (e.g. 4D-TRAK), TRICKS - both with and without projection - and HYPR-reconstruction, TREAT, and TWIST. Some of these techniques were first introduced in the 1980s and 1990s, were later enhanced and modified, and finally implemented in the products of major vendors. In the last decade, a large number of studies on the clinical applications of TR-MRA were published. This manuscript provides an overview of the development of TR-MRA methods and the 4D-MRA techniques as they are currently used in the diagnosis, treatment and follow-up of vascular diseases in various parts of the body. © Georg Thieme Verlag KG Stuttgart · New York.

  4. Comparison of two structured illumination techniques based on different 3D illumination patterns

    NASA Astrophysics Data System (ADS)

    Shabani, H.; Patwary, N.; Doblas, A.; Saavedra, G.; Preza, C.

    2017-02-01

    Manipulating the excitation pattern in optical microscopy has led to several super-resolution techniques. Among different patterns, the lateral sinusoidal excitation was used for the first demonstration of structured illumination microscopy (SIM), which provides the fastest SIM acquisition system (based on the number of raw images required) compared to the multi-spot illumination approach. Moreover, 3D patterns that include lateral and axial variations in the illumination have attracted more attention recently as they address resolution enhancement in three dimensions. A three-wave (3W) interference technique based on coherent illumination has already been shown to provide super-resolution and optical sectioning in 3D-SIM. In this paper, we investigate a novel tunable technique that creates a 3D pattern from a set of multiple incoherently illuminated parallel slits that act as light sources for a Fresnel biprism. This setup is able to modulate the illumination pattern in the object space both axially and laterally with adjustable modulation frequencies. The 3D forward model for the new system is developed here to consider the effect of the axial modulation due to the 3D patterned illumination. The performance of 3D-SIM based on 3W interference and the tunable system are investigated in simulation and compared based on two different criteria. First, restored images obtained for both 3D-SIM systems using a generalized Wiener filter are compared to determine the effect of the illumination pattern on the reconstruction. Second, the effective frequency response of both systems is studied to determine the axial and lateral resolution enhancement that is obtained in each case.

  5. High-efficiency dynamic routing architecture for the readout of single photon avalanche diode arrays in time-correlated measurements

    NASA Astrophysics Data System (ADS)

    Cominelli, A.; Acconcia, G.; Peronio, P.; Rech, I.; Ghioni, M.

    2017-05-01

    In recent years, the Time-Correlated Single Photon Counting (TCSPC) technique has gained a prominent role in many fields where the analysis of extremely fast and faint luminous signals is required. In the life sciences, for instance, the estimation of fluorescence time-constants with picosecond accuracy has been leading to a deeper insight into many biological processes. Despite the many advantages provided by TCSPC-based techniques, their intrinsically repetitive nature leads to a relatively long acquisition time, especially when time-resolved images are obtained by means of a single detector, along with a scanning point system. In the last decade, TCSPC acquisition systems have followed a fast trend towards the parallelization of many independent channels, in order to speed up the measurement. On one hand, some high-performance multi-module systems have already been made commercially available, but the high area and power consumption of each module have limited the number of channels to only a few units. On the other hand, many compact systems based on Single Photon Avalanche Diodes (SPAD) have been proposed in the literature, featuring thousands of independent acquisition chains on a single chip. The integration of both detectors and conversion electronics in the same pixel area, though, has imposed tight constraints on power dissipation and area occupation of the electronics, resulting in a tradeoff with performance, both in terms of differential nonlinearity and timing jitter. Furthermore, in the ideal case of simultaneous readout of a huge number of channels, the overall data rate can be as high as 100 Gbit/s, which is nowadays too high to be easily processed in real time by a PC. Typically adopted solutions involve an arbitrary dwell time, followed by a sequential readout of the converters, thus limiting the maximum operating frequency of each channel and impairing the measurement speed, which still lies well below the limit imposed by the saturation of the transfer rate towards the elaboration unit. We developed a novel readout architecture, starting from a completely different perspective: considering the maximum data rate we can manage with a PC, a limited set of conversion data is selected and transferred to the elaboration unit during each excitation period, in order to take full advantage of the bus bandwidth toward the PC. In particular, we introduce a smart routing logic, able to dynamically connect a large number of SPAD detectors to a limited set of high-performance external acquisition chains, paving the way for a more efficient use of resources and allowing us to effectively break the tradeoff between integration and performance, which affects the solutions proposed so far. The routing electronics feature a pixelated architecture, while 3D-stacking techniques are exploited to connect each SPAD to its dedicated electronics, leading to a minimization of the overall number of interconnections crossing the integrated system, which is one of the main issues in high-density arrays.

  6. Increasing the perceptual salience of relationships in parallel coordinate plots.

    PubMed

    Harter, Jonathan M; Wu, Xunlei; Alabi, Oluwafemi S; Phadke, Madhura; Pinto, Lifford; Dougherty, Daniel; Petersen, Hannah; Bass, Steffen; Taylor, Russell M

    2012-01-01

    We present three extensions to parallel coordinates that increase the perceptual salience of relationships between axes in multivariate data sets: (1) luminance modulation maintains the ability to preattentively detect patterns in the presence of overplotting, (2) adding a one-vs.-all variable display highlights relationships between one variable and all others, and (3) adding a scatter plot within the parallel-coordinates display preattentively highlights clusters and spatial layouts without strongly interfering with the parallel-coordinates display. These techniques can be combined with one another and with existing extensions to parallel coordinates, and two of them generalize beyond cases with known-important axes. We applied these techniques to two real-world data sets (relativistic heavy-ion collision hydrodynamics and weather observations with statistical principal component analysis) as well as the popular car data set. We present relationships discovered in the data sets using these methods.

  7. A parallel graded-mesh FDTD algorithm for human-antenna interaction problems.

    PubMed

    Catarinucci, Luca; Tarricone, Luciano

    2009-01-01

    The finite difference time domain (FDTD) method is frequently used for the numerical solution of a wide variety of electromagnetic (EM) problems and, among them, those concerning human exposure to EM fields. In many practical cases related to the assessment of occupational EM exposure, large simulation domains are modeled and high spatial resolution is adopted, so that strong memory and central processing unit power requirements have to be satisfied. To better handle the computational effort, the use of parallel computing is a winning approach; alternatively, subgridding techniques are often implemented. However, the simultaneous use of subgridding schemes and parallel algorithms is very new. In this paper, an easy-to-implement and highly efficient parallel graded-mesh (GM) FDTD scheme is proposed and applied to human-antenna interaction problems, demonstrating its appropriateness in dealing with complex occupational tasks and showing its capability to guarantee the advantages of a traditional subgridding technique without affecting the parallel FDTD performance.

  8. Comparison of diffusion-weighted MRI acquisition techniques for normal pancreas at 3.0 Tesla.

    PubMed

    Yao, Xiu-Zhong; Kuang, Tiantao; Wu, Li; Feng, Hao; Liu, Hao; Cheng, Wei-Zhong; Rao, Sheng-Xiang; Wang, He; Zeng, Meng-Su

    2014-01-01

    We aimed to optimize diffusion-weighted imaging (DWI) acquisitions for normal pancreas at 3.0 Tesla. Thirty healthy volunteers were examined using four DWI acquisition techniques with b values of 0 and 600 s/mm2 at 3.0 Tesla, including breath-hold DWI, respiratory-triggered DWI, respiratory-triggered DWI with inversion recovery (IR), and free-breathing DWI with IR. Artifacts, signal-to-noise ratio (SNR) and apparent diffusion coefficient (ADC) of normal pancreas were statistically evaluated among different DWI acquisitions. Statistical differences were noticed in artifacts, SNR, and ADC values of normal pancreas among different DWI acquisitions by ANOVA (P <0.001). Normal pancreas imaging had the lowest artifact in respiratory-triggered DWI with IR, the highest SNR in respiratory-triggered DWI, and the highest ADC value in free-breathing DWI with IR. The head, body, and tail of normal pancreas had statistically different ADC values on each DWI acquisition by ANOVA (P < 0.05). The highest image quality for normal pancreas was obtained using respiratory-triggered DWI with IR. Normal pancreas displayed inhomogeneous ADC values along the head, body, and tail structures.
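    For readers unfamiliar with the ADC metric compared above, the following sketch shows the standard two-point ADC computation for b values of 0 and 600 s/mm2; the signal values are hypothetical, not data from the study.

```python
# Illustrative computation of the apparent diffusion coefficient (ADC) from a
# two-point DWI acquisition (b = 0 and 600 s/mm^2). The signals are made up.
import numpy as np

def adc_two_point(s_b0, s_b, b=600.0):
    """ADC in mm^2/s from signals at b = 0 and b (s/mm^2)."""
    return np.log(s_b0 / s_b) / b

s0, s600 = 1000.0, 420.0          # hypothetical pancreatic ROI signals
print(f"ADC = {adc_two_point(s0, s600):.2e} mm^2/s")   # about 1.4e-3 mm^2/s
```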

  9. Geopotential Error Analysis from Satellite Gradiometer and Global Positioning System Observables on Parallel Architecture

    NASA Technical Reports Server (NTRS)

    Schutz, Bob E.; Baker, Gregory A.

    1997-01-01

    The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory resident and out-of-core parallel linear algebra techniques along with data parallel batch algorithms form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory resident gradiometer application exhibits an overall application performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun-synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.
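    The degree-variance quantity mentioned above can be illustrated with a short sketch; the spherical-harmonic coefficients used here are random placeholders, not the actual geopotential solution.

```python
# A minimal sketch of the degree-variance computation: for spherical-harmonic
# coefficients C_lm, S_lm, the variance contributed by degree l is the sum of
# squared coefficients over order m. The coefficient arrays are placeholders.
import numpy as np

def degree_variances(C, S):
    """C, S: (L+1, L+1) arrays with C[l, m], S[l, m]; returns sigma_l^2 per degree."""
    L = C.shape[0] - 1
    return np.array([np.sum(C[l, :l + 1] ** 2 + S[l, :l + 1] ** 2)
                     for l in range(L + 1)])

rng = np.random.default_rng(0)
L = 110                                   # degree/order of the solution above
C = rng.standard_normal((L + 1, L + 1)) * 1e-9
S = rng.standard_normal((L + 1, L + 1)) * 1e-9
print(degree_variances(C, S)[2:6])        # a few low-degree variances
```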

  10. Parallel computing in experimental mechanics and optical measurement: A review (II)

    NASA Astrophysics Data System (ADS)

    Wang, Tianyi; Kemao, Qian

    2018-05-01

    With advantages such as non-destructiveness, high sensitivity and high accuracy, optical techniques have been successfully applied to the measurement of various important physical quantities in experimental mechanics (EM) and optical measurement (OM). However, in pursuit of higher image resolutions for higher accuracy, the computational burden of optical techniques has become much heavier. Therefore, in recent years, heterogeneous platforms composed of hardware such as CPUs and GPUs have been widely employed to accelerate these techniques due to their cost-effectiveness, short development cycle, easy portability, and high scalability. In this paper, we analyze various works by first illustrating their different architectures, followed by introducing their various parallel patterns for high speed computation. Next, we review the effects of CPU and GPU parallel computing specifically in EM & OM applications in a broad scope, which include digital image/volume correlation, fringe pattern analysis, tomography, hyperspectral imaging, computer-generated holograms, and integral imaging. In our survey, we have found that high parallelism can always be exploited in such applications for the development of high-performance systems.

  11. Geopotential error analysis from satellite gradiometer and global positioning system observables on parallel architectures

    NASA Astrophysics Data System (ADS)

    Baker, Gregory Allen

    The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory resident and out-of-core parallel linear algebra techniques along with data parallel batch algorithms form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory resident gradiometer application exhibits an overall application performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun-synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.

  12. Recent Advances in 3D Time-Resolved Contrast-Enhanced MR Angiography

    PubMed Central

    Riederer, Stephen J.; Haider, Clifton R.; Borisch, Eric A.; Weavers, Paul T.; Young, Phillip M.

    2015-01-01

    Contrast-enhanced MR angiography (CE-MRA) was first introduced for clinical studies approximately 20 years ago. Early work provided 3 to 4 mm spatial resolution with acquisition times in the 30 sec range. Since that time there has been a continuing effort to provide improved spatial resolution with reduced acquisition time, allowing high resolution three-dimensional (3D) time-resolved studies. The purpose of this work is to describe how this has been accomplished. Specific technical enablers have been: improved gradients allowing reduced repetition times, improved k-space sampling and reconstruction methods, parallel acquisition particularly in two directions, and improved, higher-count receiver coil arrays. These have collectively made high resolution time-resolved studies readily available for many anatomic regions. Depending on the application, approximately 1 mm isotropic resolution is now possible with frame times of several seconds. Clinical applications of time-resolved CE-MRA are briefly reviewed. PMID:26032598

  13. MRI diffusion tensor reconstruction with PROPELLER data acquisition.

    PubMed

    Cheryauka, Arvidas B; Lee, James N; Samsonov, Alexei A; Defrise, Michel; Gullberg, Grant T

    2004-02-01

    MRI diffusion imaging is effective in measuring the diffusion tensor in brain, cardiac, liver, and spinal tissue. The diffusion tensor tomography MRI (DTT MRI) method is based on reconstructing the diffusion tensor field from measurements of projections of the tensor field. Projections are obtained by appropriate application of rotated diffusion gradients. In the present paper, the potential of a novel data acquisition scheme, PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction), is examined in combination with DTT MRI for its capability and sufficiency for diffusion imaging. An iterative reconstruction algorithm is used to reconstruct the diffusion tensor field from diffusion-weighted blades acquired with appropriately rotated diffusion gradients. DTT MRI with PROPELLER data acquisition shows significant potential to reduce the number of weighted measurements, avoid ambiguity in reconstructing diffusion tensor parameters, increase signal-to-noise ratio, and decrease the influence of signal distortion.

  14. Data consistency criterion for selecting parameters for k-space-based reconstruction in parallel imaging.

    PubMed

    Nana, Roger; Hu, Xiaoping

    2010-01-01

    k-space-based reconstruction in parallel imaging depends on the reconstruction kernel setting, including its support. An optimal choice of the kernel depends on the calibration data, coil geometry and signal-to-noise ratio, as well as the criterion used. In this work, data consistency, imposed by the shift invariance requirement of the kernel, is introduced as a goodness measure of k-space-based reconstruction in parallel imaging and demonstrated. Data consistency error (DCE) is calculated as the sum of squared difference between the acquired signals and their estimates obtained based on the interpolation of the estimated missing data. A resemblance between DCE and the mean square error in the reconstructed image was found, demonstrating DCE's potential as a metric for comparing or choosing reconstructions. When used for selecting the kernel support for generalized autocalibrating partially parallel acquisition (GRAPPA) reconstruction and the set of frames for calibration as well as the kernel support in temporal GRAPPA reconstruction, DCE led to improved images over existing methods. Data consistency error is efficient to evaluate, robust for selecting reconstruction parameters and suitable for characterizing and optimizing k-space-based reconstruction in parallel imaging.
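    A schematic sketch of the DCE computation described above is given below; the kernel-application step is represented by a hypothetical callable, since the GRAPPA implementation itself is not reproduced here.

```python
# Schematic data consistency error (DCE): sum of squared differences between
# the measured k-space samples and estimates of those same samples obtained by
# re-applying the reconstruction kernel to the interpolated (missing) data.
# `reconstruct_with_support` is a hypothetical, user-supplied callable.
import numpy as np

def data_consistency_error(acquired, estimated):
    """Sum of squared differences between measured samples and their estimates."""
    diff = acquired - estimated
    return float(np.sum(np.abs(diff) ** 2))

def select_kernel_support(acquired, reconstruct_with_support, supports):
    """Pick the kernel support whose reconstruction gives the smallest DCE."""
    scores = {s: data_consistency_error(acquired, reconstruct_with_support(s))
              for s in supports}
    return min(scores, key=scores.get), scores
```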

  15. Parallel processing for nonlinear dynamics simulations of structures including rotating bladed-disk assemblies

    NASA Technical Reports Server (NTRS)

    Hsieh, Shang-Hsien

    1993-01-01

    The principal objective of this research is to develop, test, and implement coarse-grained, parallel-processing strategies for nonlinear dynamic simulations of practical structural problems. There are contributions to four main areas: finite element modeling and analysis of rotational dynamics, numerical algorithms for parallel nonlinear solutions, automatic partitioning techniques to effect load-balancing among processors, and an integrated parallel analysis system.

  16. Parallel Logic Programming and Parallel Systems Software and Hardware

    DTIC Science & Technology

    1989-07-29

    Tools were provided for software development using artificial intelligence techniques. AI software for massively parallel architectures was started.

  17. 76 FR 21076 - Self-Regulatory Organizations; EDGA Exchange, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-14

    ... on the Exchange's Internet Web site at http://www.directedge.com . \\3\\ A Member is any registered... Parallel D or Parallel 2D with the DRT (Dark routing technique) option on BZX. BZX charges $0.0020 per... the DRT (Dark routing technique) option on BZX or SCAN/STGY on Nasdaq OMX Exchange (``Nasdaq.'') BATS...

  18. Modeling Cooperative Threads to Project GPU Performance for Adaptive Parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Jiayuan; Uram, Thomas; Morozov, Vitali A.

    Most accelerators, such as graphics processing units (GPUs) and vector processors, are particularly suitable for accelerating massively parallel workloads. On the other hand, conventional workloads are developed for multi-core parallelism, which often scale to only a few dozen OpenMP threads. When hardware threads significantly outnumber the degree of parallelism in the outer loop, programmers are challenged with efficient hardware utilization. A common solution is to further exploit the parallelism hidden deep in the code structure. Such parallelism is less structured: parallel and sequential loops may be imperfectly nested within each other, neighboring inner loops may exhibit different concurrency patterns (e.g. Reduction vs. Forall), yet have to be parallelized in the same parallel section. Many input-dependent transformations have to be explored. A programmer often employs a larger group of hardware threads to cooperatively walk through a smaller outer loop partition and adaptively exploit any encountered parallelism. This process is time-consuming and error-prone, yet the risk of gaining little or no performance remains high for such workloads. To reduce risk and guide implementation, we propose a technique to model workloads with limited parallelism that can automatically explore and evaluate transformations involving cooperative threads. Eventually, our framework projects the best achievable performance and the most promising transformations without implementing GPU code or using physical hardware. We envision our technique to be integrated into future compilers or optimization frameworks for autotuning.

  19. Real-time digital filtering, event triggering, and tomographic reconstruction of JET soft x-ray data (abstract)

    NASA Astrophysics Data System (ADS)

    Edwards, A. W.; Blackler, K.; Gill, R. D.; van der Goot, E.; Holm, J.

    1990-10-01

    Based upon the experience gained with the present soft x-ray data acquisition system, new techniques are being developed which make extensive use of digital signal processors (DSPs). Digital filters make 13 further frequencies available in real time from the input sampling frequency of 200 kHz. In parallel, various algorithms running on further DSPs generate triggers in response to a range of events in the plasma. The sawtooth crash can be detected, for example, with a delay of only 50 μs from the onset of the collapse. The trigger processor interacts with the digital filter boards to ensure data of the appropriate frequency is recorded throughout a plasma discharge. An independent link is used to pass 780 and 24 Hz filtered data to a network of transputers. A full tomographic inversion and display of the 24 Hz data is carried out in real time using this 15 transputer array. The 780 Hz data are stored for immediate detailed playback following the pulse. Such a system could considerably improve the quality of present plasma diagnostic data which is, in general, sampled at one fixed frequency throughout a discharge. Further, it should provide valuable information towards designing diagnostic data acquisition systems for future long pulse operation machines when a high degree of real-time processing will be required, while retaining the ability to detect, record, and analyze events of interest within such long plasma discharges.
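    As a software-only illustration of the cascaded filtering and decimation idea (not the DSP firmware used at JET), the following sketch derives successively lower-rate, anti-alias-filtered streams from a 200 kHz input; the decimation factors are illustrative rather than the 780 Hz and 24 Hz rates quoted above.

```python
# Cascaded decimation of a 200 kHz input stream using SciPy's anti-aliased
# decimator; the test signal and decimation factors are illustrative only.
import numpy as np
from scipy.signal import decimate

fs = 200_000                       # input sampling frequency, Hz
t = np.arange(0, 0.2, 1 / fs)
x = np.sin(2 * np.pi * 300 * t) + 0.1 * np.random.randn(t.size)  # test signal

x_20k = decimate(x, 10)            # 200 kHz -> 20 kHz
x_2k = decimate(x_20k, 10)         # 20 kHz  -> 2 kHz
x_200 = decimate(x_2k, 10)         # 2 kHz   -> 200 Hz
print(len(x), len(x_20k), len(x_2k), len(x_200))
```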

  20. Highly efficient router-based readout algorithm for single-photon-avalanche-diode imagers for time-correlated experiments

    NASA Astrophysics Data System (ADS)

    Cominelli, A.; Acconcia, G.; Caldi, F.; Peronio, P.; Ghioni, M.; Rech, I.

    2018-02-01

    Time-Correlated Single Photon Counting (TCSPC) is a powerful tool that makes it possible to record extremely fast optical signals with a precision down to a few picoseconds. On the other hand, it is recognized as a relatively slow technique, especially when a large time-resolved image is acquired exploiting a single acquisition channel and a scanning system. During the last years, much effort has been made towards the parallelization of many acquisition and conversion chains. In particular, the exploitation of Single-Photon Avalanche Diodes in standard CMOS technology has paved the way for the integration of thousands of independent channels on the same chip. Unfortunately, the presence of a large number of detectors can give rise to a huge rate of events, which can easily lead to the saturation of the transfer rate toward the elaboration unit. As a result, a smart readout approach is needed to guarantee an efficient exploitation of the limited transfer bandwidth. We recently introduced a novel readout architecture, aimed at maximizing the counting efficiency of the system in typical TCSPC measurements. It features a limited number of high-performance converters, which are shared by a much larger array, while a smart routing logic provides a dynamic multiplexing between the two parts. Here we propose a novel routing algorithm, which exploits standard digital gates distributed among a large 32x32 array to ensure a dynamic connection between detectors and external time-measurement circuits.

  1. Post-Correlation Semi-Coherent Integration for High-Dynamic and Weak GPS Signal Acquisition (Preprint)

    DTIC Science & Technology

    2008-06-01

    provide the coverage. To enable weak GPS signal acquisition, one known technique at the receiver end is to extend the signal integration time...Han, “Block Accumulating Coherent Integration Over Extended Interval (BACIX) for Weak GPS Signal Acquisition,” Proc. of ION-GNSS’06, Ft. Worth, TX...AFRL-RY-WP-TP-2008-1158 POST-CORRELATION SEMI-COHERENT INTEGRATION FOR HIGH-DYNAMIC AND WEAK GPS SIGNAL ACQUISITION (PREPRINT) Chun Yang

  2. Complexity of parallel implementation of domain decomposition techniques for elliptic partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gropp, W.D.; Keyes, D.E.

    1988-03-01

    The authors discuss the parallel implementation of preconditioned conjugate gradient (PCG)-based domain decomposition techniques for self-adjoint elliptic partial differential equations in two dimensions on several architectures. The complexity of these methods is described on a variety of message-passing parallel computers as a function of the size of the problem, number of processors and relative communication speeds of the processors. They show that communication startups are very important, and that even the small amount of global communication in these methods can significantly reduce the performance of many message-passing architectures.

  3. Portable parallel portfolio optimization in the Aurora Financial Management System

    NASA Astrophysics Data System (ADS)

    Laure, Erwin; Moritsch, Hans

    2001-07-01

    Financial planning problems are formulated as large scale, stochastic, multiperiod, tree structured optimization problems. An efficient technique for solving this kind of problem is the nested Benders decomposition method. In this paper we present a parallel, portable, asynchronous implementation of this technique. To achieve our portability goals we selected the programming language Java for our implementation and used a high level Java based framework, called OpusJava, for expressing the parallelism potential as well as synchronization constraints. Our implementation is embedded within a modular decision support tool for portfolio and asset liability management, the Aurora Financial Management System.

  4. Optimization technique for problems with an inequality constraint

    NASA Technical Reports Server (NTRS)

    Russell, K. J.

    1972-01-01

    The general technique uses a modified version of an existing method termed the pattern search technique. A new procedure, called the parallel move strategy, permits the pattern search technique to be used with problems involving a constraint.

  5. Parallel independent evolution of pathogenicity within the genus Yersinia

    PubMed Central

    Reuter, Sandra; Connor, Thomas R.; Barquist, Lars; Walker, Danielle; Feltwell, Theresa; Harris, Simon R.; Fookes, Maria; Hall, Miquette E.; Petty, Nicola K.; Fuchs, Thilo M.; Corander, Jukka; Dufour, Muriel; Ringwood, Tamara; Savin, Cyril; Bouchier, Christiane; Martin, Liliane; Miettinen, Minna; Shubin, Mikhail; Riehm, Julia M.; Laukkanen-Ninios, Riikka; Sihvonen, Leila M.; Siitonen, Anja; Skurnik, Mikael; Falcão, Juliana Pfrimer; Fukushima, Hiroshi; Scholz, Holger C.; Prentice, Michael B.; Wren, Brendan W.; Parkhill, Julian; Carniel, Elisabeth; Achtman, Mark; McNally, Alan; Thomson, Nicholas R.

    2014-01-01

    The genus Yersinia has been used as a model system to study pathogen evolution. Using whole-genome sequencing of all Yersinia species, we delineate the gene complement of the whole genus and define patterns of virulence evolution. Multiple distinct ecological specializations appear to have split pathogenic strains from environmental, nonpathogenic lineages. This split demonstrates that contrary to hypotheses that all pathogenic Yersinia species share a recent common pathogenic ancestor, they have evolved independently but followed parallel evolutionary paths in acquiring the same virulence determinants as well as becoming progressively more limited metabolically. Shared virulence determinants are limited to the virulence plasmid pYV and the attachment invasion locus ail. These acquisitions, together with genomic variations in metabolic pathways, have resulted in the parallel emergence of related pathogens displaying an increasingly specialized lifestyle with a spectrum of virulence potential, an emerging theme in the evolution of other important human pathogens. PMID:24753568

  6. Use of software engineering techniques in the design of the ALEPH data acquisition system

    NASA Astrophysics Data System (ADS)

    Charity, T.; McClatchey, R.; Harvey, J.

    1987-08-01

    The SASD methodology is being used to provide a rigorous design framework for various components of the ALEPH data acquisition system. The Entity-Relationship data model is used to describe the layout and configuration of the control and acquisition systems and detector components. State Transition Diagrams are used to specify control applications such as run control and resource management and Data Flow Diagrams assist in decomposing software tasks and defining interfaces between processes. These techniques encourage rigorous software design leading to enhanced functionality and reliability. Improved documentation and communication ensures continuity over the system life-cycle and simplifies project management.

  7. Commodity cluster and hardware-based massively parallel implementations of hyperspectral imaging algorithms

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Chang, Chein-I.; Plaza, Javier; Valencia, David

    2006-05-01

    The incorporation of hyperspectral sensors aboard airborne/satellite platforms is currently producing a nearly continual stream of multidimensional image data, and this high data volume has quickly introduced new processing challenges. The price paid for the wealth of spatial and spectral information available from hyperspectral sensors is the enormous amount of data that they generate. Several applications exist, however, where having the desired information calculated quickly enough for practical use is highly desirable. High computing performance of algorithm analysis is particularly important in homeland defense and security applications, in which swift decisions often involve detection of (sub-pixel) military targets (including hostile weaponry, camouflage, concealment, and decoys) or chemical/biological agents. In order to speed up the computational performance of hyperspectral imaging algorithms, this paper develops several fast parallel data processing techniques. The techniques cover four classes of algorithms: (1) unsupervised classification, (2) spectral unmixing, (3) automatic target recognition, and (4) onboard data compression. A massively parallel Beowulf cluster (Thunderhead) at NASA's Goddard Space Flight Center in Maryland is used to measure the parallel performance of the proposed algorithms. In order to explore the viability of developing onboard, real-time hyperspectral data compression algorithms, a Xilinx Virtex-II field programmable gate array (FPGA) is also used in experiments. Our quantitative and comparative assessment of parallel techniques and strategies may help image analysts in the selection of parallel hyperspectral algorithms for specific applications.

  8. Execution models for mapping programs onto distributed memory parallel computers

    NASA Technical Reports Server (NTRS)

    Sussman, Alan

    1992-01-01

    The problem of exploiting the parallelism available in a program to efficiently employ the resources of the target machine is addressed. The problem is discussed in the context of building a mapping compiler for a distributed memory parallel machine. The paper describes using execution models to drive the process of mapping a program in the most efficient way onto a particular machine. Through analysis of the execution models for several mapping techniques for one class of programs, we show that the selection of the best technique for a particular program instance can make a significant difference in performance. On the other hand, the results of benchmarks from an implementation of a mapping compiler show that our execution models are accurate enough to select the best mapping technique for a given program.

  9. Program Correctness, Verification and Testing for Exascale (Corvette)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Koushik; Iancu, Costin; Demmel, James W

    The goal of this project is to provide tools to assess the correctness of parallel programs written using hybrid parallelism. There is a dire lack of both theoretical and engineering know-how in the area of finding bugs in hybrid or large scale parallel programs, which our research aims to change. In the project we have demonstrated novel approaches in several areas: 1. Low overhead automated and precise detection of concurrency bugs at scale. 2. Using low overhead bug detection tools to guide speculative program transformations for performance. 3. Techniques to reduce the concurrency required to reproduce a bug using partial program restart/replay. 4. Techniques to provide reproducible execution of floating point programs. 5. Techniques for tuning the floating point precision used in codes.

  10. Optical techniques: using coarse and detailed scans for the preventive acquisition of fingerprints with chromatic white-light sensors

    NASA Astrophysics Data System (ADS)

    Hildebrandt, Mario; Dittmann, Jana; Vielhauer, Claus; Leich, Marcus

    2011-11-01

    The preventive application of automated latent fingerprint acquisition devices can enhance Homeland Defence, e.g. by improving border security. Here, contact-less optical acquisition techniques for the capture of traces are the subject of research; chromatic white light sensors allow for multi-mode operation using coarse or detailed scans. The presence of potential fingerprints can be detected using fast coarse scans. These regions of interest can then be acquired with high-resolution detailed scans to allow for a verification or identification of individuals. The acquisition and analysis of fingerprint traces on different objects that are imported or pass borders might be a great enhancement for security. Additionally, if suspicious objects require further investigation, an initial securing of potential fingerprints could be very useful. In this paper we show current research results for the coarse detection of fingerprints to prepare the detailed acquisition from various surface materials that are relevant for preventive applications.

  11. MR CAT scan: a modular approach for hybrid imaging.

    PubMed

    Hillenbrand, C; Hahn, D; Haase, A; Jakob, P M

    2000-07-01

    In this study, a modular concept for NMR hybrid imaging is presented. This concept essentially integrates different imaging modules in a sequential fashion and is therefore called CAT (combined acquisition technique). CAT is not a single specific measurement sequence, but rather a sequence design concept whereby distinct acquisition techniques with varying imaging parameters are employed in rapid succession in order to cover k-space. The power of the CAT approach is that it provides a high flexibility toward the acquisition optimization with respect to the available imaging time and the desired image quality. Important CAT sequence optimization steps include the appropriate choice of the k-space coverage ratio and the application of mixed bandwidth technology. Details of both the CAT methodology and possible CAT acquisition strategies, such as FLASH/EPI-, RARE/EPI- and FLASH/BURST-CAT are provided. Examples from imaging experiments in phantoms and healthy volunteers including mixed bandwidth acquisitions are provided to demonstrate the feasibility of the proposed CAT concept.

  12. Real-time detection and data acquisition system for the left ventricular outline. Ph.D. Thesis - Stanford Univ.

    NASA Technical Reports Server (NTRS)

    Reiber, J. H. C.

    1976-01-01

    To automate the data acquisition procedure, a real-time contour detection and data acquisition system for the left ventricular outline was developed using video techniques. The X-ray image of the contrast-filled left ventricle is stored for subsequent processing on film (cineangiogram), video tape or disc. The cineangiogram is converted into video format using a television camera. The video signal from either the TV camera, video tape or disc is the input signal to the system. The contour detection is based on a dynamic thresholding technique. Since the left ventricular outline is a smooth continuous function, for each contour side a narrow expectation window is defined in which the next borderpoint will be detected. A computer interface was designed and built for the online acquisition of the coordinates using a PDP-12 computer. The advantage of this system over other available systems is its potential for online, real-time acquisition of the left ventricular size and shape during angiocardiography.
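    A simplified, purely illustrative software analogue of the detection principle described above is sketched below: the border is traced line by line inside a narrow expectation window around the previous border point, using a dynamically computed threshold. The window size and threshold rule are assumptions, not the original hardware design.

```python
# Trace a contour through a grayscale image (2D NumPy array), line by line,
# searching only inside a narrow expectation window centred on the border
# position found on the previous line; the threshold is updated dynamically
# from the local intensity range. All parameter values are illustrative.
import numpy as np

def trace_border(image, start_col, half_window=5, frac=0.5):
    rows, cols = image.shape
    border = np.zeros(rows, dtype=int)
    prev = start_col
    for r in range(rows):
        lo = max(prev - half_window, 0)
        hi = min(prev + half_window + 1, cols)
        segment = image[r, lo:hi]
        # dynamic threshold between local minimum and maximum intensity
        thr = segment.min() + frac * (segment.max() - segment.min())
        crossings = np.nonzero(segment >= thr)[0]
        prev = lo + (crossings[0] if crossings.size else half_window)
        border[r] = prev
    return border
```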

  13. Free-breathing volumetric fat/water separation by combining radial sampling, compressed sensing, and parallel imaging.

    PubMed

    Benkert, Thomas; Feng, Li; Sodickson, Daniel K; Chandarana, Hersh; Block, Kai Tobias

    2017-08-01

    Conventional fat/water separation techniques require that patients hold breath during abdominal acquisitions, which often fails and limits the achievable spatial resolution and anatomic coverage. This work presents a novel approach for free-breathing volumetric fat/water separation. Multiecho data are acquired using a motion-robust radial stack-of-stars three-dimensional GRE sequence with bipolar readout. To obtain fat/water maps, a model-based reconstruction is used that accounts for the off-resonant blurring of fat and integrates both compressed sensing and parallel imaging. The approach additionally enables generation of respiration-resolved fat/water maps by detecting motion from k-space data and reconstructing different respiration states. Furthermore, an extension is described for dynamic contrast-enhanced fat-water-separated measurements. Uniform and robust fat/water separation is demonstrated in several clinical applications, including free-breathing noncontrast abdominal examination of adults and a pediatric subject with both motion-averaged and motion-resolved reconstructions, as well as in a noncontrast breast exam. Furthermore, dynamic contrast-enhanced fat/water imaging with high temporal resolution is demonstrated in the abdomen and breast. The described framework provides a viable approach for motion-robust fat/water separation and promises particular value for clinical applications that are currently limited by the breath-holding capacity or cooperation of patients. Magn Reson Med 78:565-576, 2017. © 2016 International Society for Magnetic Resonance in Medicine. © 2016 International Society for Magnetic Resonance in Medicine.

  14. Fast implementation for compressive recovery of highly accelerated cardiac cine MRI using the balanced sparse model.

    PubMed

    Ting, Samuel T; Ahmad, Rizwan; Jin, Ning; Craft, Jason; Serafim da Silveira, Juliana; Xue, Hui; Simonetti, Orlando P

    2017-04-01

    Sparsity-promoting regularizers can enable stable recovery of highly undersampled magnetic resonance imaging (MRI) data, promising to improve the clinical utility of challenging applications. However, lengthy computation time limits the clinical use of these methods, especially for dynamic MRI with its large corpus of spatiotemporal data. Here, we present a holistic framework that utilizes the balanced sparse model for compressive sensing and parallel computing to reduce the computation time of cardiac MRI recovery methods. We propose a fast, iterative soft-thresholding method to solve the resulting ℓ1-regularized least squares problem. In addition, our approach utilizes a parallel computing environment that is fully integrated with the MRI acquisition software. The methodology is applied to two formulations of the multichannel MRI problem: image-based recovery and k-space-based recovery. Using measured MRI data, we show that, for a 224 × 144 image series with 48 frames, the proposed k-space-based approach achieves a mean reconstruction time of 2.35 min, a 24-fold improvement compared with a reconstruction time of 55.5 min for the nonlinear conjugate gradient method, and the proposed image-based approach achieves a mean reconstruction time of 13.8 s. Our approach can be utilized to achieve fast reconstruction of large MRI datasets, thereby increasing the clinical utility of reconstruction techniques based on compressed sensing. Magn Reson Med 77:1505-1515, 2017. © 2016 International Society for Magnetic Resonance in Medicine. © 2016 International Society for Magnetic Resonance in Medicine.
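    As a generic illustration of the solver family named above (not the paper's balanced-model, parallelized implementation), the following sketch applies plain iterative soft-thresholding (ISTA) to an ℓ1-regularized least-squares problem with synthetic data; the operator, data, and regularization weight are placeholders.

```python
# Plain ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1 on synthetic data.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, lam, step, n_iter=200):
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)              # gradient of the data term
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((128, 256))
x_true = np.zeros(256)
x_true[rng.choice(256, 10, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(128)
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = Lipschitz constant of the data term
x_hat = ista(A, y, lam=0.05, step=step)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # relative error
```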

  15. Parallel machine architecture and compiler design facilities

    NASA Technical Reports Server (NTRS)

    Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex

    1990-01-01

    The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility for rapid prototyping of parallelizing compilers that can target different machine architectures) is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

  16. Parallel Implementation of a High Order Implicit Collocation Method for the Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Halem, Milton (Technical Monitor)

    2000-01-01

    We combine a high order compact finite difference approximation and collocation techniques to numerically solve the two dimensional heat equation. The resulting method is implicit and can be parallelized with a strategy that allows parallelization across both time and space. We compare the parallel implementation of the new method with a classical implicit method, namely the Crank-Nicolson method, where the parallelization is done across space only. Numerical experiments are carried out on the SGI Origin 2000.
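    For reference, a minimal one-dimensional Crank-Nicolson sketch for the heat equation is given below, illustrating only the classical implicit baseline mentioned above; the paper's high-order compact collocation scheme and its time-space parallelization are not shown, and grid sizes and time step are illustrative.

```python
# 1D Crank-Nicolson for u_t = alpha * u_xx with homogeneous Dirichlet boundaries.
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu

alpha, L, nx, dt, nt = 1.0, 1.0, 101, 1e-4, 500
dx = L / (nx - 1)
r = alpha * dt / dx**2

D = diags([1, -2, 1], [-1, 0, 1], shape=(nx - 2, nx - 2))  # interior second difference
I = identity(nx - 2)
A = (I - 0.5 * r * D).tocsc()          # implicit (left-hand) operator
B = (I + 0.5 * r * D).tocsc()          # explicit (right-hand) operator
solver = splu(A)                       # factor once, reuse every time step

x = np.linspace(0, L, nx)
u = np.sin(np.pi * x)                  # initial condition
for _ in range(nt):
    u[1:-1] = solver.solve(B @ u[1:-1])
print(u.max(), np.exp(-np.pi**2 * alpha * nt * dt))  # numerical vs exact decay factor
```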

  17. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    PubMed Central

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-01-01

    Background Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Methods Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. Conclusion The observer independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes. PMID:18627634

  18. Extended Field Laser Confocal Microscopy (EFLCM): combining automated Gigapixel image capture with in silico virtual microscopy.

    PubMed

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-07-16

    Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. The observer independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes.

  19. Research on control law accelerator of digital signal process chip TMS320F28035 for real-time data acquisition and processing

    NASA Astrophysics Data System (ADS)

    Zhao, Shuangle; Zhang, Xueyi; Sun, Shengli; Wang, Xudong

    2017-08-01

    The TI C2000 series of digital signal processing (DSP) chips has been widely used in electrical engineering, measurement and control, communications and other professional fields, and the TMS320F28035 is one of the most representative of its kind. DSP applications typically require both data acquisition and data processing; if ordinary C or assembly language programming is used, the program runs sequentially, the analogue-to-digital (AD) converter cannot acquire data in real time, and a large amount of data is often missed. The control law accelerator (CLA) coprocessor can run in parallel with the main central processing unit (CPU), operates at the same clock frequency as the main CPU, and supports floating-point operations. Therefore, the CLA coprocessor is used in the program: the CLA kernel is responsible for data processing, while the main CPU is responsible for the AD conversion. The advantage of this method is that it reduces the data processing time and achieves real-time data acquisition.

  20. [Real time 3D echocardiography]

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is displayed in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurements with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to improve its ease of use. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.

  1. GET: A generic electronics system for TPCs and nuclear physics instrumentation

    NASA Astrophysics Data System (ADS)

    Pollacco, E. C.; Grinyer, G. F.; Abu-Nimeh, F.; Ahn, T.; Anvar, S.; Arokiaraj, A.; Ayyad, Y.; Baba, H.; Babo, M.; Baron, P.; Bazin, D.; Beceiro-Novo, S.; Belkhiria, C.; Blaizot, M.; Blank, B.; Bradt, J.; Cardella, G.; Carpenter, L.; Ceruti, S.; De Filippo, E.; Delagnes, E.; De Luca, S.; De Witte, H.; Druillole, F.; Duclos, B.; Favela, F.; Fritsch, A.; Giovinazzo, J.; Gueye, C.; Isobe, T.; Hellmuth, P.; Huss, C.; Lachacinski, B.; Laffoley, A. T.; Lebertre, G.; Legeard, L.; Lynch, W. G.; Marchi, T.; Martina, L.; Maugeais, C.; Mittig, W.; Nalpas, L.; Pagano, E. V.; Pancin, J.; Poleshchuk, O.; Pedroza, J. L.; Pibernat, J.; Primault, S.; Raabe, R.; Raine, B.; Rebii, A.; Renaud, M.; Roger, T.; Roussel-Chomaz, P.; Russotto, P.; Saccà, G.; Saillant, F.; Sizun, P.; Suzuki, D.; Swartz, J. A.; Tizon, A.; Usher, N.; Wittwer, G.; Yang, J. C.

    2018-04-01

    General Electronics for TPCs (GET) is a generic, reconfigurable and comprehensive electronics and data-acquisition system for nuclear physics instrumentation of up to 33792 channels. The system consists of a custom-designed ASIC for signal processing, front-end cards that each house 4 ASIC chips and digitize the data in parallel through 12-bit ADCs, concentration boards to read and process the digital data from up to 16 ASICs, a 3-level trigger and master clock module to trigger the system and synchronize the data, as well as all of the associated firmware, communication and data-acquisition software. An overview of the system including its specifications and measured performances are presented.

  2. Microcomputer data acquisition and control.

    PubMed

    East, T D

    1986-01-01

    In medicine and biology there are many tasks that involve routine well defined procedures. These tasks are ideal candidates for computerized data acquisition and control. As the performance of microcomputers rapidly increases and cost continues to go down the temptation to automate the laboratory becomes great. To the novice computer user the choices of hardware and software are overwhelming and sadly most of the computer sales persons are not at all familiar with real-time applications. If you want to bill your patients you have hundreds of packaged systems to choose from; however, if you want to do real-time data acquisition the choices are very limited and confusing. The purpose of this chapter is to provide the novice computer user with the basics needed to set up a real-time data acquisition system with the common microcomputers. This chapter will cover the following issues necessary to establish a real time data acquisition and control system: Analysis of the research problem: Definition of the problem; Description of data and sampling requirements; Cost/benefit analysis. Choice of Microcomputer hardware and software: Choice of microprocessor and bus structure; Choice of operating system; Choice of layered software. Digital Data Acquisition: Parallel Data Transmission; Serial Data Transmission; Hardware and software available. Analog Data Acquisition: Description of amplitude and frequency characteristics of the input signals; Sampling theorem; Specification of the analog to digital converter; Hardware and software available; Interface to the microcomputer. Microcomputer Control: Analog output; Digital output; Closed-Loop Control. Microcomputer data acquisition and control in the 21st Century--What is in the future? High speed digital medical equipment networks; Medical decision making and artificial intelligence.

  3. Decomposition method for fast computation of gigapixel-sized Fresnel holograms on a graphics processing unit cluster.

    PubMed

    Jackin, Boaz Jessie; Watanabe, Shinpei; Ootsu, Kanemitsu; Ohkawa, Takeshi; Yokota, Takashi; Hayasaki, Yoshio; Yatagai, Toyohiko; Baba, Takanobu

    2018-04-20

    A parallel computation method for large-size Fresnel computer-generated hologram (CGH) is reported. The method was introduced by us in an earlier report as a technique for calculating Fourier CGH from 2D object data. In this paper we extend the method to compute Fresnel CGH from 3D object data. The scale of the computation problem is also expanded to 2 gigapixels, making it closer to real application requirements. The significant feature of the reported method is its ability to avoid communication overhead and thereby fully utilize the computing power of parallel devices. The method exhibits three layers of parallelism that favor small to large scale parallel computing machines. Simulation and optical experiments were conducted to demonstrate the workability and to evaluate the efficiency of the proposed technique. A two-times improvement in computation speed has been achieved compared to the conventional method, on a 16-node cluster (one GPU per node) utilizing only one layer of parallelism. A 20-times improvement in computation speed has been estimated utilizing two layers of parallelism on a very large-scale parallel machine with 16 nodes, where each node has 16 GPUs.
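    A minimal, CPU-only sketch of the FFT-based Fresnel propagation at the core of such hologram computations is shown below; the tiling/decomposition and GPU-cluster distribution described above are not reproduced, and the wavelength, pixel pitch, and propagation distance are illustrative values.

```python
# Single-plane Fresnel propagation using the FFT-based transfer-function method.
import numpy as np

def fresnel_propagate(field, wavelength, pitch, z):
    """Propagate a complex field by distance z using the Fresnel transfer function."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(1j * 2 * np.pi * z / wavelength) * \
        np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

obj = np.zeros((1024, 1024), dtype=complex)
obj[480:545, 480:545] = 1.0                     # simple test aperture
holo_field = fresnel_propagate(obj, wavelength=633e-9, pitch=8e-6, z=0.2)
hologram = np.angle(holo_field)                  # e.g. a phase-only hologram pattern
```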

  4. Living in a digital world: features and applications of FPGA in photon detection

    NASA Astrophysics Data System (ADS)

    Arnesano, Cosimo

    Optical spectroscopy and imaging outcomes rely upon many factors; one of the most critical is the photon acquisition and processing method employed. For some types of measurements it may be crucial to acquire every single photon quickly with temporal resolution, but in other cases it is important to acquire as many photons as possible, regardless of the time information about each of them. Fluorescence Lifetime Imaging Microscopy belongs to the first case, where the information on the time of arrival of every single photon in every single pixel is fundamental in obtaining the desired information. Spectral tissue imaging belongs to the second case, where high photon density is needed in order to calculate the optical parameters necessary to build the spectral image. In both cases, the current instrumentation suffers from limitations in terms of acquisition time, duty cycle, cost, and radio-frequency interference and emission. We developed the Digital Frequency-Domain approach for photon acquisition and processing purposes using new digital technology. This approach is based on the use of photon detectors in photon counting mode, and the digital heterodyning method to acquire data which is analyzed in the frequency domain to provide the information on the time of arrival of the photons. In conjunction with the use of pulsed laser sources, this method allows the determination of the time of arrival of the photons using the harmonic content of the frequency domain analysis. The parallel digital FD design is a powerful approach that offers the possibility to implement a variety of different applications in fluorescence spectroscopy and microscopy. It can be applied to fluorometry, Fluorescence Lifetime Imaging (FLIM), and Fluorescence Correlation Spectroscopy (FCS), as well as multi-frequency and multi-wavelength tissue imaging in compact portable medical devices. It dramatically reduces the acquisition time from the several-minutes scale to the seconds scale, performs signal processing in a digital fashion avoiding RF emission, and is extremely inexpensive. This development is the result of a systematic study carried out on a previous design known as the FLIMBox, developed as part of the thesis of another graduate student. The extensive work done in maximizing the performance of the original FLIMBox led us to develop a new hardware solution with exciting and promising results and potential that were not possible in the previous hardware realization, where the signal harmonic content was limited by the FPGA technology. The new design permits acquisition of a much larger harmonic content of the sample response when it is excited with a pulsed light source in one single measurement, using the digital mixing principle that was developed in the original design. Furthermore, we used the parallel digital FD principle to perform tissue imaging through Diffuse Optical Spectroscopy (DOS) measurements. We integrated the FLIMBox in a new system that uses a supercontinuum white laser with high brightness as a single light source and photomultipliers with large detection area, both allowing a high penetration depth with extremely low power at the sample. The parallel acquisition, achieved by using the FLIMBox, decreases the time required by standard serial systems that scan through all modulation frequencies.
Furthermore, the all-digital acquisition avoids analog noise, removes the analog mixer of the conventional frequency-domain approach, and does not generate the radio-frequency interference normally present in current analog systems. We are able to obtain a very sensitive acquisition due to the high signal-to-noise ratio (S/N). The successful results obtained by utilizing digital technology in photon acquisition and processing prompted us to extend the use of FPGAs to other applications, such as phosphorescence detection. Using the FPGA concept we proposed possible solutions to outstanding problems with the current technology. In this thesis I discuss new possible scenarios where new FPGA chips are applied to spectral tissue imaging.
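    As a small numerical illustration of the harmonic-content analysis described above (not the FPGA heterodyning implementation), the following sketch estimates a fluorescence lifetime from the first few harmonics of simulated photon arrival times; the repetition rate, lifetime, and photon count are assumed values.

```python
# Phasor-style harmonic analysis of simulated photon arrival times for a
# mono-exponential decay; the lifetime is recovered from each harmonic via
# tau = s / (g * omega). All numerical values are illustrative.
import numpy as np

f_rep = 80e6                         # laser repetition frequency, Hz
tau = 2.5e-9                         # hypothetical fluorescence lifetime, s
rng = np.random.default_rng(0)
# photon arrival times folded into one excitation period
t = rng.exponential(tau, size=200_000) % (1 / f_rep)

for n in (1, 2, 3):                  # first three harmonics
    w = 2 * np.pi * n * f_rep
    g, s = np.mean(np.cos(w * t)), np.mean(np.sin(w * t))
    # single-exponential decay: g = 1/(1+(w*tau)^2), s = w*tau/(1+(w*tau)^2)
    print(f"harmonic {n}: g={g:.3f}, s={s:.3f}, "
          f"tau_est={s / (g * w) * 1e9:.2f} ns")
```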

  5. Bayer image parallel decoding based on GPU

    NASA Astrophysics Data System (ADS)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In photoelectric tracking systems, the Bayer image is traditionally decoded by a CPU-based method. However, this is too slow when the images become large, for example, 2K×2K×16 bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA's Graphics Processing Unit (GPU), which supports the CUDA architecture. The decoding procedure can be divided into three parts: the first is a serial part, the second is a task-parallelism part, and the last is a data-parallelism part including inverse quantization, the inverse discrete wavelet transform (IDWT) as well as image post-processing. For reducing the execution time, the task-parallelism part is optimized by OpenMP techniques. The data-parallelism part advances its efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization and texture memory optimization. In particular, the IDWT can be significantly sped up by rewriting the 2D (two-dimensional) serial IDWT as a 1D parallel IDWT. In experiments with a 1K×1K×16 bit Bayer image, the data-parallelism part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speed increase compared to the CPU serial method.
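    The separability argument above, i.e. that a 2D IDWT level can be expressed as independent 1D passes along one axis and then the other, can be illustrated with a short NumPy sketch using a Haar wavelet for brevity; this is not the paper's CUDA kernel or its actual wavelet filter.

```python
# One level of a separable 2D inverse Haar transform built from independent 1D
# inverse steps, so each row/column could be processed by its own thread.
# Subband naming and the Haar filter are illustrative choices.
import numpy as np

def ihaar_1d(approx, detail):
    """Inverse 1D Haar step, interleaving results along the first axis."""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    out = np.empty((2 * approx.shape[0],) + approx.shape[1:])
    out[0::2], out[1::2] = even, odd
    return out

def ihaar_2d(ll, lh, hl, hh):
    """One inverse 2D Haar level from its four subbands (real-valued inputs)."""
    low = ihaar_1d(ll, lh)            # 1D passes along one axis
    high = ihaar_1d(hl, hh)
    return ihaar_1d(low.T, high.T).T  # 1D passes along the other axis

rng = np.random.default_rng(0)
subbands = [rng.standard_normal((512, 512)) for _ in range(4)]
tile = ihaar_2d(*subbands)
print(tile.shape)                     # (1024, 1024)
```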

  6. 78 FR 59859 - Defense Federal Acquisition Regulation Supplement: Allowability of Legal Costs for Whistleblower...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-30

    ... considered in the formation of a final rule. ADDRESSES: Submit comments identified by DFARS Case 2013-D022... (for DoD only) and two FAR cases (for title 41 agencies), which are independent, but parallel... response to this interim rule in the formation of the final rule. List of Subjects in 48 CFR Parts 216 and...

  7. 75 FR 42444 - Change in Bank Control Notices; Acquisition of Shares of Bank or Bank Holding Companies

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-21

    ... Governors. Interested persons may express their views in writing to the Reserve Bank indicated for that...; WLR Recovery Fund IV, L.P.; WLR IV Parallel ESC, L.P.; Invesco North America Holdings, Inc.; Invesco WLR IV Associates LLC; WLR Recovery Associates IV LLC; WL Ross Group L.P.; and EL Vedado LLC, all of...

  8. Design and test of data acquisition systems for the Medipix2 chip based on PC standard interfaces

    NASA Astrophysics Data System (ADS)

    Fanti, Viviana; Marzeddu, Roberto; Piredda, Giuseppina; Randaccio, Paolo

    2005-07-01

    We describe two readout systems for hybrid detectors using the Medipix2 single photon counting chip, developed within the Medipix Collaboration. The Medipix2 chip (256×256 pixels, 55 μm pitch) has an active area of about 2 cm² and is bump-bonded to a pixel semiconductor array of silicon or another semiconductor material. The readout systems we are developing are based on two widespread standard PC interfaces: the parallel port and USB (Universal Serial Bus) version 1.1. The parallel port is the simplest PC interface, even if slow, and USB is a serial bus interface present nowadays on all PCs that offers good performance.

  9. TH-A-BRF-09: Integration of High-Resolution MRSI Into Glioblastoma Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreibmann, E; Cordova, J; Shu, H

    2014-06-15

    Purpose: Identification of a metabolite signature that shows significant tumor cell infiltration into normal brain in regions that do not appear abnormal on standard MRI scans would be extremely useful for radiation oncologists to choose optimal regions of brain to treat, and to quantify response beyond the MacDonald criteria. We report on integration of high-resolution magnetic resonance spectroscopic imaging (HR-MRSI) with radiation dose escalation treatment planning to define and target regions at high risk for recurrence. Methods: We propose to supplement standard MRI with a special technique performed on an MRI scanner to measure the metabolite levels within defined volumes. Metabolite imaging was acquired using an advanced MRSI technique combining 3D echo-planar spectroscopic imaging (EPSI) with parallel acquisition (GRAPPA) using a multichannel head coil that allows acquisition of whole brain metabolite maps with 108 μl resolution in 12 minutes implemented on a 3T MR scanner. Elevation in the ratio of two metabolites, choline (Cho, elevated in proliferating high-grade gliomas) and N-acetyl aspartate (NAA, a normal neuronal metabolite), was used to image infiltrating high-grade glioma cells in vivo. Results: The metabolite images were co-registered with standard contrast-enhanced T1-weighted MR images using in-house registration software and imported into the treatment-planning system. Regions with tumor infiltration are identified on the metabolic images and used to create adaptive IMRT plans that deliver a standard dose of 60 Gy to the standard target volume and an escalated dose of 75 Gy (or higher) to the most suspicious regions, identified as areas with elevated Cho/NAA ratio. Conclusion: We have implemented a state-of-the-art HR-MRSI technology that can generate metabolite maps of the entire brain in a clinically acceptable scan time, coupled with introduction of an imaging co-registration/analysis program that combines MRSI data with standard imaging studies in a clinically useful fashion.

  10. Advances in combined endoscopic fluorescence confocal microscopy and optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Risi, Matthew D.

    Confocal microendoscopy provides real-time high resolution cellular level images via a minimally invasive procedure. Results from an ongoing clinical study to detect ovarian cancer with a novel confocal fluorescent microendoscope are presented. As an imaging modality, confocal fluorescence microendoscopy typically requires exogenous fluorophores, has a relatively limited penetration depth (100 μm), and often employs specialized aperture configurations to achieve real-time imaging in vivo. Two primary research directions designed to overcome these limitations and improve diagnostic capability are presented. Ideal confocal imaging performance is obtained with a scanning point illumination and confocal aperture, but this approach is often unsuitable for real-time, in vivo biomedical imaging. By scanning a slit aperture in one direction, image acquisition speeds are greatly increased, but at the cost of a reduction in image quality. The design, implementation, and experimental verification of a custom multi-point-scanning modification to a slit-scanning multi-spectral confocal microendoscope is presented. This new design improves the axial resolution while maintaining real-time imaging rates. In addition, the multi-point aperture geometry greatly reduces the effects of tissue scatter on imaging performance. Optical coherence tomography (OCT) has seen wide acceptance and FDA approval as a technique for ophthalmic retinal imaging, and has been adapted for endoscopic use. As a minimally invasive imaging technique, it provides morphological characteristics of tissues at a cellular level without requiring the use of exogenous fluorophores. OCT is capable of imaging deeper into biological tissue (~1-2 mm) than confocal fluorescence microscopy. A theoretical analysis of the use of a fiber-bundle in spectral-domain OCT systems is presented. The fiber-bundle enables a flexible endoscopic design and provides fast, parallelized acquisition of the optical coherence tomography data. However, the multi-mode characteristic of the fibers in the fiber-bundle affects the depth sensitivity of the imaging system. A description of light interference in a multi-mode fiber is presented along with numerical simulations and experimental studies to illustrate the theoretical analysis.

  11. 3D motion picture of transparent gas flow by parallel phase-shifting digital holography

    NASA Astrophysics Data System (ADS)

    Awatsuji, Yasuhiro; Fukuda, Takahito; Wang, Yexin; Xia, Peng; Kakue, Takashi; Nishio, Kenzo; Matoba, Osamu

    2018-03-01

    Parallel phase-shifting digital holography is a technique capable of quantitatively recording a three-dimensional (3D) motion picture of a dynamic object. This technique records a single hologram of an object with an image sensor having a phase-shift array device and reconstructs the instantaneous 3D image of the object with a computer. In this technique, a single hologram is recorded in which the multiple holograms required for phase-shifting digital holography are multiplexed pixel by pixel using a space-division multiplexing technique. We demonstrate a 3D motion picture of dynamic and transparent gas flow recorded and reconstructed by the technique. A compressed air duster was used to generate the gas flow. A motion picture of the hologram of the gas flow was recorded at 180,000 frames/s by parallel phase-shifting digital holography. The phase motion picture of the gas flow was reconstructed from the motion picture of the hologram. The Abel inversion was applied to the phase motion picture, and the 3D motion picture of the gas flow was then obtained.

  12. Parallel tiled Nussinov RNA folding loop nest generated using both dependence graph transitive closure and loop skewing.

    PubMed

    Palkowski, Marek; Bielecki, Wlodzimierz

    2017-06-02

    RNA secondary structure prediction is a compute-intensive task that lies at the core of several search algorithms in bioinformatics. Fortunately, RNA folding approaches such as Nussinov base pair maximization involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. Polyhedral compilation techniques have proven to be a powerful tool for optimization of dense array codes. However, classical affine loop nest transformations used with these techniques do not effectively optimize dynamic programming codes for RNA structure prediction. The purpose of this paper is to present a novel approach for generating a parallel tiled Nussinov RNA loop nest that achieves significantly higher performance than known related codes. This effect is achieved by improving code locality and parallelizing calculations. To improve code locality, we apply our previously published technique of automatic loop nest tiling to all three loops of the Nussinov loop nest. This approach first forms original rectangular 3D tiles and then corrects them to establish their validity by applying the transitive closure of a dependence graph. To produce parallel code, we apply the loop skewing technique to the tiled Nussinov loop nest. The technique is implemented as part of the publicly available polyhedral source-to-source TRACO compiler. The generated code was run on modern Intel multi-core processors and coprocessors. We present the speed-up factor of the generated parallel Nussinov RNA code and demonstrate that it is considerably faster than related codes in which only the two outer loops of the Nussinov loop nest are tiled.
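
    For reference, the sketch below spells out the triple loop nest that the paper tiles and skews: the classic Nussinov base-pair maximization recurrence, written as straightforward Python. It is only the unoptimized recurrence; the polyhedral tiling, transitive-closure correction, and skewing performed by TRACO are not shown.

    # Hedged sketch of the loop nest the paper optimizes: the classic Nussinov
    # base-pair maximization recurrence, written as plain Python.
    def can_pair(a, b):
        return {a, b} in ({'A', 'U'}, {'G', 'C'}, {'G', 'U'})

    def nussinov(seq):
        n = len(seq)
        N = [[0] * n for _ in range(n)]
        for length in range(1, n):            # diagonal-by-diagonal sweep
            for i in range(n - length):
                j = i + length
                best = N[i + 1][j - 1] + (1 if can_pair(seq[i], seq[j]) else 0)
                best = max(best, N[i + 1][j], N[i][j - 1])
                for k in range(i + 1, j):     # bifurcation term
                    best = max(best, N[i][k] + N[k + 1][j])
                N[i][j] = best
        return N[0][n - 1]

    print(nussinov("GGGAAAUCC"))  # maximum number of base pairs for a toy sequence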

  13. Multiline 3D beamforming using micro-beamformed datasets for pediatric transesophageal echocardiography

    NASA Astrophysics Data System (ADS)

    Bera, D.; Raghunathan, S. B.; Chen, C.; Chen, Z.; Pertijs, M. A. P.; Verweij, M. D.; Daeichin, V.; Vos, H. J.; van der Steen, A. F. W.; de Jong, N.; Bosch, J. G.

    2018-04-01

    Until now, no matrix transducer has been realized for 3D transesophageal echocardiography (TEE) in pediatric patients. In 3D TEE with a matrix transducer, the biggest challenges are to connect a large number of elements to a standard ultrasound system, and to achieve a high volume rate (>200 Hz). To address these issues, we have recently developed a prototype miniaturized matrix transducer for pediatric patients with micro-beamforming and a small central transmitter. In this paper we propose two multiline parallel 3D beamforming techniques (µBF25 and µBF169) using the micro-beamformed datasets from 25 and 169 transmit events to achieve volume rates of 300 Hz and 44 Hz, respectively. Both realizations use an angle-weighted combination of the neighboring overlapping sub-volumes to avoid artifacts due to sharp intensity changes introduced by parallel beamforming. In simulation, the image quality in terms of the width of the point spread function (PSF), lateral shift invariance and mean clutter level for volumes produced by µBF25 and µBF169 is similar to idealized beamforming using a conventional single-line acquisition with a fully-sampled matrix transducer (FS4k, 4225 transmit events). For completeness, we also investigated a 9-transmit scheme (3 × 3) that allows even higher frame rates but found worse B-mode image quality with our probe. The simulations were experimentally verified by acquiring the µBF datasets from the prototype using a Verasonics V1 research ultrasound system. For both µBF169 and µBF25, the experimental PSFs were similar to the simulated PSFs, but in the experimental PSFs, the clutter level was ~10 dB higher. Results indicate that the proposed multiline 3D beamforming techniques with the prototype matrix transducer are promising candidates for real-time pediatric 3D TEE.

  14. Designed a web crawler which oriented network public opinion data acquisition

    NASA Astrophysics Data System (ADS)

    Lu, Shan; Ma, Hui; Gao, Ying

    2015-12-01

    The paper describes the meaning of network public opinion and data acquisition techniques for network public opinion research. We designed and implemented a web crawler oriented toward network public opinion data acquisition. After analyzing the shortcomings of generic web crawlers, we use asynchronous sockets, a DNS cache, and download queues to improve the underlying framework and increase collection speed.
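
    As a rough illustration of two of the ideas named in the abstract, the sketch below builds a DNS cache and a queued multi-worker downloader using only the Python standard library. Threads stand in for the asynchronous sockets of the original design, and the function names are hypothetical; urllib still performs its own resolution internally, so the cache only demonstrates the concept.

    # Hedged sketch: DNS cache + queued multi-worker downloads (stdlib only).
    import socket, queue, threading, urllib.request
    from urllib.parse import urlparse

    dns_cache = {}
    dns_lock = threading.Lock()

    def cached_resolve(host):
        """Resolve a hostname once and reuse the answer for later requests."""
        with dns_lock:
            if host not in dns_cache:
                dns_cache[host] = socket.gethostbyname(host)
            return dns_cache[host]

    def worker(url_queue, results):
        while True:
            url = url_queue.get()
            if url is None:                      # poison pill -> stop
                break
            try:
                cached_resolve(urlparse(url).hostname)   # illustrative DNS cache hit
                with urllib.request.urlopen(url, timeout=10) as resp:
                    results.append((url, resp.status, len(resp.read())))
            except Exception as exc:
                results.append((url, "error", str(exc)))
            finally:
                url_queue.task_done()

    def crawl(urls, n_workers=4):
        q, results = queue.Queue(), []
        threads = [threading.Thread(target=worker, args=(q, results))
                   for _ in range(n_workers)]
        for t in threads:
            t.start()
        for u in urls:
            q.put(u)
        q.join()                                 # wait for the download queue to drain
        for _ in threads:
            q.put(None)
        for t in threads:
            t.join()
        return results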

  15. Acquisition of Dental Skills in Preclinical Technique Courses: Influence of Spatial and Manual Abilities

    ERIC Educational Resources Information Center

    Schwibbe, Anja; Kothe, Christian; Hampe, Wolfgang; Konradt, Udo

    2016-01-01

    Sixty years of research have not added up to a concordant evaluation of the influence of spatial and manual abilities on dental skill acquisition. We used Ackerman's theory of ability determinants of skill acquisition to explain the influence of spatial visualization and manual dexterity on the task performance of dental students in two…

  16. The Making of a Government LSI - From Warfare Capability to Operational System

    DTIC Science & Technology

    2015-04-30

    continues to evolve and implement Lead System Integrator (LSI) acquisition strategies, they have started to define numerous program initiatives that...employ more integrated engineering and management processes and techniques. These initiatives are developing varying acquisition approaches that define (1...government LSI transformation. Navy Systems Commands have begun adding a higher level of integration into their acquisition process with the

  17. Automatic recognition of vector and parallel operations in a higher level language

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.

    1971-01-01

    A compiler for recognizing statements of a FORTRAN program which are suited for fast execution on a parallel or pipeline machine such as Illiac-4, Star or ASC is described. The technique employs interval analysis to provide flow information to the vector/parallel recognizer. Where profitable the compiler changes scalar variables to subscripted variables. The output of the compiler is an extension to FORTRAN which shows parallel and vector operations explicitly.

  18. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms

    NASA Astrophysics Data System (ADS)

    Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian

    2018-01-01

    We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
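
    One simple form of the device-level load balancing mentioned above is to split the photon budget across heterogeneous devices in proportion to their measured throughput. The sketch below shows that arithmetic only; device names and rates are made up, and the actual Monte Carlo kernel, OpenCL dispatch, and thread-level balancing of the paper are not represented.

    # Hedged sketch: divide a photon budget in proportion to measured
    # per-device throughput so faster devices receive more work.
    def split_photon_budget(total_photons, throughput_by_device):
        """throughput_by_device: dict of device name -> measured photons/second."""
        total_rate = sum(throughput_by_device.values())
        shares = {name: int(total_photons * rate / total_rate)
                  for name, rate in throughput_by_device.items()}
        # Give any rounding remainder to the fastest device.
        fastest = max(throughput_by_device, key=throughput_by_device.get)
        shares[fastest] += total_photons - sum(shares.values())
        return shares

    print(split_photon_budget(10_000_000,
                              {"gpu0": 5.0e6, "gpu1": 4.2e6, "cpu": 0.8e6}))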

  19. Conceptual Knowledge Acquisition in Biomedicine: A Methodological Review

    PubMed Central

    Payne, Philip R.O.; Mendonça, Eneida A.; Johnson, Stephen B.; Starren, Justin B.

    2007-01-01

    The use of conceptual knowledge collections or structures within the biomedical domain is pervasive, spanning a variety of applications including controlled terminologies, semantic networks, ontologies, and database schemas. A number of theoretical constructs and practical methods or techniques support the development and evaluation of conceptual knowledge collections. This review will provide an overview of the current state of knowledge concerning conceptual knowledge acquisition, drawing from multiple contributing academic disciplines such as biomedicine, computer science, cognitive science, education, linguistics, semiotics, and psychology. In addition, multiple taxonomic approaches to the description and selection of conceptual knowledge acquisition and evaluation techniques will be proposed in order to partially address the apparent fragmentation of the current literature concerning this domain. PMID:17482521

  20. Generation of High-Quality SWATH® Acquisition Data for Label-free Quantitative Proteomics Studies Using TripleTOF® Mass Spectrometers

    PubMed Central

    Schilling, Birgit; Gibson, Bradford W.; Hunter, Christie L.

    2017-01-01

    Data-independent acquisition is a powerful mass spectrometry technique that enables comprehensive MS and MS/MS analysis of all detectable species, providing an information rich data file that can be mined deeply. Here, we describe how to acquire high-quality SWATH® Acquisition data to be used for large quantitative proteomic studies. We specifically focus on using variable sized Q1 windows for acquisition of MS/MS data for generating higher specificity quantitative data. PMID:28188533
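
    As a hedged illustration of how variable-sized Q1 windows can be derived, the sketch below places window boundaries at quantiles of observed precursor m/z values so that each window contains a similar number of precursors. This is only one plausible density-based scheme; the actual window-building procedure used with SWATH Acquisition on TripleTOF instruments may differ, and all parameter values here are placeholders.

    # Hedged sketch: quantile-based variable Q1 isolation windows so each
    # window covers roughly the same number of observed precursors.
    import numpy as np

    def variable_q1_windows(precursor_mz, n_windows=64, mz_min=400.0, mz_max=1250.0):
        mz = np.clip(np.asarray(precursor_mz, dtype=float), mz_min, mz_max)
        edges = np.quantile(mz, np.linspace(0.0, 1.0, n_windows + 1))
        edges[0], edges[-1] = mz_min, mz_max          # cover the full mass range
        return list(zip(edges[:-1], edges[1:]))       # (low, high) per window

    # Example with synthetic precursor masses concentrated around 500-800 m/z.
    rng = np.random.default_rng(0)
    windows = variable_q1_windows(rng.normal(650, 120, size=20000), n_windows=8)
    for low, high in windows:
        print(f"{low:7.1f} - {high:7.1f}  (width {high - low:5.1f})")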

  1. Research into the development of a knowledge acquisition taxonomy

    NASA Technical Reports Server (NTRS)

    Fink, Pamela K.

    1991-01-01

    Monthly progress reports for September 1990 to January 1991 are given. Topics that are briefly covered include problem solving and learning taxonomies, knowledge acquisition techniques, software design, air traffic control, and space shuttle flight control.

  2. Hybrid cardiac imaging with MR-CAT scan: a feasibility study.

    PubMed

    Hillenbrand, C; Sandstede, J; Pabst, T; Hahn, D; Haase, A; Jakob, P M

    2000-06-01

    We demonstrate the feasibility of a new versatile hybrid imaging concept, the combined acquisition technique (CAT), for cardiac imaging. The cardiac CAT approach, which combines new methodology with existing technology, essentially integrates fast low-angle shot (FLASH) and echoplanar imaging (EPI) modules in a sequential fashion, whereby each acquisition module is employed with independently optimized imaging parameters. One important CAT sequence optimization feature is the ability to use different bandwidths for different acquisition modules. Twelve healthy subjects were imaged using three cardiac CAT acquisition strategies: a) CAT was used to reduce breath-hold duration times while maintaining constant spatial resolution; b) CAT was used to increase spatial resolution in a given breath-hold time; and c) single-heart beat CAT imaging was performed. The results obtained demonstrate the feasibility of cardiac imaging using the CAT approach and the potential of this technique to accelerate the imaging process with almost conserved image quality. Copyright 2000 Wiley-Liss, Inc.

  3. Parallel SOR methods with a parabolic-diffusion acceleration technique for solving an unstructured-grid Poisson equation on 3D arbitrary geometries

    NASA Astrophysics Data System (ADS)

    Zapata, M. A. Uh; Van Bang, D. Pham; Nguyen, K. D.

    2016-05-01

    This paper presents a parallel algorithm for the finite-volume discretisation of the Poisson equation on three-dimensional arbitrary geometries. The proposed method is formulated using a 2D horizontal block domain decomposition and interprocessor data communication techniques with the message passing interface. The horizontal unstructured-grid cells are reordered according to the neighbouring relations and decomposed into blocks using a load-balanced distribution that gives all processors an equal number of elements. In this algorithm, two parallel successive over-relaxation methods are presented: a multi-colour ordering technique for unstructured grids based on distributed memory, and a block method using a reordering index following similar ideas to the partitioning for structured grids. In all cases, the parallel algorithms are combined with an accelerated iterative solver. This solver is based on a parabolic-diffusion equation introduced to obtain faster solutions of the linear systems arising from the discretisation. Numerical results are given to evaluate the performance of the methods, showing speedups better than linear.
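
    Red-black ordering on a structured grid is the simplest instance of the multi-colour idea used above: points of one colour have no same-colour neighbours, so an entire colour class can be updated in parallel. The sketch below shows such a red-black SOR sweep for a 2D Poisson problem in NumPy; it is illustrative only and does not reproduce the paper's unstructured-grid colouring, MPI decomposition, or parabolic-diffusion acceleration.

    # Hedged sketch: red-black SOR for Laplacian(u) = f on a structured grid,
    # with u = 0 on the boundary. All points of one colour update together.
    import numpy as np

    def sor_red_black(f, h, omega=1.7, iterations=500):
        u = np.zeros_like(f)
        ny, nx = f.shape
        colours = np.add.outer(np.arange(ny), np.arange(nx)) % 2   # checkerboard
        for _ in range(iterations):
            for colour in (0, 1):                 # red sweep, then black sweep
                mask = (colours == colour)
                mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False
                gs = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                             np.roll(u, 1, 1) + np.roll(u, -1, 1) - h * h * f)
                u[mask] = (1 - omega) * u[mask] + omega * gs[mask]
        return u

    f = np.ones((65, 65))
    u = sor_red_black(f, h=1.0 / 64)
    print(float(u.min()))                         # interior solution is negative here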

  4. Generating unstructured nuclear reactor core meshes in parallel

    DOE PAGES

    Jain, Rajeev; Tautges, Timothy J.

    2014-10-24

    Recent advances in supercomputers and parallel solver techniques have enabled users to run large simulation problems using millions of processors. Techniques for multiphysics nuclear reactor core simulations are under active development in several countries. Most of these techniques require large unstructured meshes that can be hard to generate on standalone desktop computers because of high memory requirements, limited processing power, and other complexities. We have previously reported on a hierarchical lattice-based approach for generating reactor core meshes. Here, we describe efforts to exploit coarse-grained parallelism during reactor assembly and reactor core mesh generation processes. We highlight several reactor core examples including a very high temperature reactor, a full-core model of the Korean MONJU reactor, a ¼ pressurized water reactor core, the fast reactor Experimental Breeder Reactor-II core with a XX09 assembly, and an advanced breeder test reactor core. The times required to generate large mesh models, along with speedups obtained from running these problems in parallel, are reported. A graphical user interface to the tools described here has also been developed.

  5. Parallel k-means++

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
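
    For context, the sketch below shows the serial k-means++ seed selection step itself in NumPy; the distance computation inside the loop is the part the record parallelizes on GPU, multicore CPU, and the Cray XMT. Function and variable names are illustrative.

    # Hedged sketch: serial k-means++ D^2-weighted seed selection.
    import numpy as np

    def kmeanspp_seeds(points, k, rng=None):
        rng = np.random.default_rng(rng)
        n = points.shape[0]
        seeds = [points[rng.integers(n)]]            # first seed: uniform choice
        d2 = np.full(n, np.inf)
        for _ in range(k - 1):
            # Distance of every point to its nearest chosen seed; this is the
            # embarrassingly parallel part in the GPU/OpenMP/XMT versions.
            d2 = np.minimum(d2, np.sum((points - seeds[-1]) ** 2, axis=1))
            probs = d2 / d2.sum()
            seeds.append(points[rng.choice(n, p=probs)])
        return np.array(seeds)

    pts = np.random.default_rng(1).normal(size=(1000, 2))
    print(kmeanspp_seeds(pts, k=5).shape)            # (5, 2)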

  6. Hardware Implementation of Multiple Fan Beam Projection Technique in Optical Fibre Process Tomography

    PubMed Central

    Rahim, Ruzairi Abdul; Fazalul Rahiman, Mohd Hafiz; Leong, Lai Chen; Chan, Kok San; Pang, Jon Fea

    2008-01-01

    The main objective of this project is to implement the multiple fan beam projection technique using optical fibre sensors, with the aim of achieving a high data acquisition rate. The multiple fan beam projection technique here is defined as allowing more than one emitter to transmit light at the same time using the switch-mode fan beam method. For the thirty-two pairs of sensors used, the 2-projection technique and 4-projection technique are investigated. Sixteen sets of projections complete one frame of light emission for the 2-projection technique, while eight sets of projections complete one frame of light emission for the 4-projection technique. To facilitate the data acquisition process, a PIC microcontroller and a sample-and-hold circuit are used. This paper summarizes the hardware configuration and design for this project.

  7. Optimisation of radiation dose and image quality in mobile neonatal chest radiography.

    PubMed

    Hinojos-Armendáriz, V I; Mejía-Rosales, S J; Franco-Cabrera, M C

    2018-05-01

    To optimise the radiation dose and image quality for chest radiography in the neonatal intensive care unit (NICU) by increasing the mean beam energy. Two techniques for the acquisition of NICU AP chest X-ray images were compared for image quality and radiation dose. 73 images were acquired using a standard technique (56 kV, 3.2 mAs and no additional filtration) and 90 images with a new technique (62 kV, 2 mAs and 2 mm Al filtration). The entrance surface air kerma (ESAK) was measured using a phantom and compared between the techniques and against established diagnostic reference levels (DRL). Images were evaluated using seven image quality criteria independently by three radiologists. Image quality and radiation dose were compared statistically between the standard and new techniques. The maximum ESAK for the new technique was 40.20 μGy, 43.7% of the ESAK of the standard technique. Statistical evaluation demonstrated no significant differences in image quality between the two acquisition techniques. Based on the techniques and acquisition factors investigated within this study, it is possible to lower the radiation dose without any significant effects on image quality by adding filtration (2 mm Al) and increasing the tube potential. Such steps are relatively simple to undertake and as such, other departments should consider testing and implementing this dose reduction strategy within clinical practice where appropriate. Copyright © 2017 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.

  8. MR imaging of ore for heap bioleaching studies using pure phase encode acquisition methods

    NASA Astrophysics Data System (ADS)

    Fagan, Marijke A.; Sederman, Andrew J.; Johns, Michael L.

    2012-03-01

    Various MRI techniques were considered with respect to imaging of aqueous flow fields in low grade copper ore. Spin echo frequency encoded techniques were shown to produce unacceptable image distortions which led to pure phase encoded techniques being considered. Single point imaging multiple point acquisition (SPI-MPA) and spin echo single point imaging (SESPI) techniques were applied. By direct comparison with X-ray tomographic images, both techniques were found to be able to produce distortion-free images of the ore packings at 2 T. The signal to noise ratios (SNRs) of the SESPI images were found to be superior to SPI-MPA for equal total acquisition times; this was explained based on NMR relaxation measurements. SESPI was also found to produce suitable images for a range of particle sizes, whereas SPI-MPA SNR deteriorated markedly as particle size was reduced. Comparisons on a 4.7 T magnet showed significant signal loss from the SPI-MPA images, the effect of which was accentuated in the case of unsaturated flowing systems. Hence it was concluded that SESPI was the most robust imaging method for the study of copper ore heap leaching hydrology.

  9. Technical advances in proteomics: new developments in data-independent acquisition.

    PubMed

    Hu, Alex; Noble, William S; Wolf-Yadlin, Alejandro

    2016-01-01

    The ultimate aim of proteomics is to fully identify and quantify the entire complement of proteins and post-translational modifications in biological samples of interest. For the last 15 years, liquid chromatography-tandem mass spectrometry (LC-MS/MS) in data-dependent acquisition (DDA) mode has been the standard for proteomics when sampling breadth and discovery were the main objectives; multiple reaction monitoring (MRM) LC-MS/MS has been the standard for targeted proteomics when precise quantification, reproducibility, and validation were the main objectives. Recently, improvements in mass spectrometer design and bioinformatics algorithms have resulted in the rediscovery and development of another sampling method: data-independent acquisition (DIA). DIA comprehensively and repeatedly samples every peptide in a protein digest, producing a complex set of mass spectra that is difficult to interpret without external spectral libraries. Currently, DIA approaches the identification breadth of DDA while achieving the reproducible quantification characteristic of MRM or its newest version, parallel reaction monitoring (PRM). In comparative de novo identification and quantification studies in human cell lysates, DIA identified up to 89% of the proteins detected in a comparable DDA experiment while providing reproducible quantification of over 85% of them. DIA analysis aided by spectral libraries derived from prior DIA experiments or auxiliary DDA data produces identification and quantification as reproducible and precise as that achieved by MRM/PRM, except on low-abundance peptides that are obscured by stronger signals. DIA is still a work in progress toward the goal of sensitive, reproducible, and precise quantification without external spectral libraries. New software tools applied to DIA analysis have to deal with deconvolution of complex spectra as well as proper filtering of false positives and false negatives. However, the future outlook is positive, and various researchers are working on novel bioinformatics techniques to address these issues and increase the reproducibility, fidelity, and identification breadth of DIA.

  10. Six-minute magnetic resonance imaging protocol for evaluation of acute ischemic stroke: pushing the boundaries.

    PubMed

    Nael, Kambiz; Khan, Rihan; Choudhary, Gagandeep; Meshksar, Arash; Villablanca, Pablo; Tay, Jennifer; Drake, Kendra; Coull, Bruce M; Kidwell, Chelsea S

    2014-07-01

    If magnetic resonance imaging (MRI) is to compete with computed tomography for evaluation of patients with acute ischemic stroke, there is a need for further improvements in acquisition speed. Inclusion criteria for this prospective, single institutional study were symptoms of acute ischemic stroke within 24 hours onset, National Institutes of Health Stroke Scale ≥3, and absence of MRI contraindications. A combination of echo-planar imaging (EPI) and a parallel acquisition technique were used on a 3T magnetic resonance (MR) scanner to accelerate the acquisition time. Image analysis was performed independently by 2 neuroradiologists. A total of 62 patients met inclusion criteria. A repeat MRI scan was performed in 22 patients resulting in a total of 84 MRIs available for analysis. Diagnostic image quality was achieved in 100% of diffusion-weighted imaging, 100% EPI-fluid attenuation inversion recovery imaging, 98% EPI-gradient recalled echo, 90% neck MR angiography and 96% of brain MR angiography, and 94% of dynamic susceptibility contrast perfusion scans with interobserver agreements (k) ranging from 0.64 to 0.84. Fifty-nine patients (95%) had acute infarction. There was good interobserver agreement for EPI-fluid attenuation inversion recovery imaging findings (k=0.78; 95% confidence interval, 0.66-0.87) and for detection of mismatch classification using dynamic susceptibility contrast-Tmax (k=0.92; 95% confidence interval, 0.87-0.94). Thirteen acute intracranial hemorrhages were detected on EPI-gradient recalled echo by both observers. A total of 68 and 72 segmental arterial stenoses were detected on contrast-enhanced MR angiography of the neck and brain with k=0.93, 95% confidence interval, 0.84 to 0.96 and 0.87, 95% confidence interval, 0.80 to 0.90, respectively. A 6-minute multimodal MR protocol with good diagnostic quality is feasible for the evaluation of patients with acute ischemic stroke and can result in significant reduction in scan time rivaling that of the multimodal computed tomographic protocol. © 2014 American Heart Association, Inc.

  11. Highly accelerated intracranial 4D flow MRI: evaluation of healthy volunteers and patients with intracranial aneurysms.

    PubMed

    Liu, Jing; Koskas, Louise; Faraji, Farshid; Kao, Evan; Wang, Yan; Haraldsson, Henrik; Kefayati, Sarah; Zhu, Chengcheng; Ahn, Sinyeob; Laub, Gerhard; Saloner, David

    2018-04-01

    To evaluate an accelerated 4D flow MRI method that provides high temporal resolution in a clinically feasible acquisition time for intracranial velocity imaging. Accelerated 4D flow MRI was developed by using a pseudo-random variable-density Cartesian undersampling strategy (CIRCUS) with the combination of k-t, parallel imaging and compressed sensing image reconstruction techniques (k-t SPARSE-SENSE). Four-dimensional flow data were acquired on five healthy volunteers and eight patients with intracranial aneurysms using CIRCUS (acceleration factor of R = 4, termed CIRCUS4) and GRAPPA (R = 2, termed GRAPPA2) as the reference method. Images with three times higher temporal resolution (R = 12, CIRCUS12) were also reconstructed from the same acquisition as CIRCUS4. Qualitative and quantitative image assessment was performed on the images acquired with different methods, and complex flow patterns in the aneurysms were identified and compared. Four-dimensional flow MRI with CIRCUS was achieved in 5 min and allowed further improved temporal resolution of <30 ms. Volunteer studies showed similar qualitative and quantitative evaluation obtained with the proposed approach compared to the reference (overall image scores: GRAPPA2 3.2 ± 0.6; CIRCUS4 3.1 ± 0.7; CIRCUS12 3.3 ± 0.4; difference of the peak velocities: -3.83 ± 7.72 cm/s between CIRCUS4 and GRAPPA2, -1.72 ± 8.41 cm/s between CIRCUS12 and GRAPPA2). In patients with intracranial aneurysms, the higher temporal resolution improved capturing of the flow features in intracranial aneurysms (pathline visualization scores: GRAPPA2 2.2 ± 0.2; CIRCUS4 2.5 ± 0.5; CIRCUS12 2.7 ± 0.6). The proposed rapid 4D flow MRI with a high temporal resolution is a promising tool for evaluating intracranial aneurysms in a clinically feasible acquisition time.
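
    The sketch below illustrates the general variable-density Cartesian undersampling idea that such acquisitions build on: sampling the ky-kz phase-encoding plane densely near the k-space centre and sparsely at the periphery. It is explicitly not the CIRCUS trajectory or the k-t SPARSE-SENSE reconstruction, only a toy mask generator with assumed parameters.

    # Hedged sketch: generic variable-density Cartesian undersampling mask
    # for the ky-kz plane (denser at the k-space centre).
    import numpy as np

    def variable_density_mask(ny=128, nz=128, accel=4, power=2.0, seed=0):
        rng = np.random.default_rng(seed)
        ky = np.linspace(-1, 1, ny)[:, None]
        kz = np.linspace(-1, 1, nz)[None, :]
        r = np.sqrt(ky ** 2 + kz ** 2) / np.sqrt(2)       # 0 at centre, 1 at corner
        prob = (1 - r) ** power                            # sample more near the centre
        prob *= (ny * nz / accel) / prob.sum()             # aim for the target acceleration
        return rng.random((ny, nz)) < np.clip(prob, 0, 1)

    mask = variable_density_mask()
    print(mask.mean())   # roughly 1/accel of the ky-kz plane is sampled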

  12. 48 CFR 2015.303 - Responsibilities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Responsibilities. 2015.303 Section 2015.303 Federal Acquisition Regulations System NUCLEAR REGULATORY COMMISSION CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Source Selection Processes and Techniques 2015.303...

  13. The use of artificial intelligence techniques to improve the multiple payload integration process

    NASA Technical Reports Server (NTRS)

    Cutts, Dannie E.; Widgren, Brian K.

    1992-01-01

    A maximum return of science and products with a minimum expenditure of time and resources is a major goal of mission payload integration. A critical component then, in successful mission payload integration is the acquisition and analysis of experiment requirements from the principal investigator and payload element developer teams. One effort to use artificial intelligence techniques to improve the acquisition and analysis of experiment requirements within the payload integration process is described.

  14. Scan line graphics generation on the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1988-01-01

    Described here is how researchers implemented a scan line graphics generation algorithm on the Massively Parallel Processor (MPP). Pixels are computed in parallel and their results are applied to the Z buffer in large groups. To perform pixel value calculations, facilitate load balancing across the processors and apply the results to the Z buffer efficiently in parallel requires special virtual routing (sort computation) techniques developed by the author especially for use on single-instruction multiple-data (SIMD) architectures.
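
    The sketch below illustrates the batched Z-buffer update at the heart of the idea: many candidate pixels (fragments) are evaluated together and applied to the depth and frame buffers as a group, keeping the nearest fragment per pixel. NumPy array operations stand in for the MPP's SIMD processors; the virtual routing (sort computation) techniques of the original are not shown.

    # Hedged sketch: apply a batch of fragments to a Z-buffer in one group,
    # keeping the nearest (smallest z) fragment at each pixel.
    import numpy as np

    def apply_fragments(zbuffer, framebuffer, xs, ys, zs, colours):
        order = np.argsort(-zs)                  # far-to-near: nearest lands last
        xs, ys, zs, colours = xs[order], ys[order], zs[order], colours[order]
        closer = zs < zbuffer[ys, xs]            # depth test against current buffer
        zbuffer[ys[closer], xs[closer]] = zs[closer]
        framebuffer[ys[closer], xs[closer]] = colours[closer]

    h, w = 64, 64
    zbuf = np.full((h, w), np.inf)
    fbuf = np.zeros((h, w), dtype=np.uint8)
    rng = np.random.default_rng(0)
    n = 500
    apply_fragments(zbuf, fbuf,
                    rng.integers(0, w, n), rng.integers(0, h, n),
                    rng.random(n) * 10,
                    rng.integers(1, 255, n).astype(np.uint8))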

  15. WFIRST: Science from the Guest Investigator and Parallel Observation Programs

    NASA Astrophysics Data System (ADS)

    Postman, Marc; Nataf, David; Furlanetto, Steve; Milam, Stephanie; Robertson, Brant; Williams, Ben; Teplitz, Harry; Moustakas, Leonidas; Geha, Marla; Gilbert, Karoline; Dickinson, Mark; Scolnic, Daniel; Ravindranath, Swara; Strolger, Louis; Peek, Joshua; Marc Postman

    2018-01-01

    The Wide Field InfraRed Survey Telescope (WFIRST) mission will provide an extremely rich archival dataset that will enable a broad range of scientific investigations beyond the initial objectives of the proposed key survey programs. The scientific impact of WFIRST will thus be significantly expanded by a robust Guest Investigator (GI) archival research program. We will present examples of GI research opportunities ranging from studies of the properties of a variety of Solar System objects, surveys of the outer Milky Way halo, comprehensive studies of cluster galaxies, to unique and new constraints on the epoch of cosmic re-ionization and the assembly of galaxies in the early universe.WFIRST will also support the acquisition of deep wide-field imaging and slitless spectroscopic data obtained in parallel during campaigns with the coronagraphic instrument (CGI). These parallel wide-field imager (WFI) datasets can provide deep imaging data covering several square degrees at no impact to the scheduling of the CGI program. A competitively selected program of well-designed parallel WFI observation programs will, like the GI science above, maximize the overall scientific impact of WFIRST. We will give two examples of parallel observations that could be conducted during a proposed CGI program centered on a dozen nearby stars.

  16. Selecting and Acquiring Art Materials in the Academic Library: Meeting the Needs of the Studio Artist

    ERIC Educational Resources Information Center

    Lorenzen, Elizabeth A.

    2004-01-01

    As technology is shaping today's art world, parallel changes are happening in the ways art book collections are identified and acquired. The purpose of this article is to identify the changes transpiring in the worlds of the artist and library acquisitions, and to evaluate the ways in which the changes effected by technological applications in the…

  17. Desorption Electrospray Ionization Mass Spectrometry (DESI-MS) Analysis of Organophosphorus Chemical Warfare Agents: Rapid Acquisition of Time-Aligned Parallel (TAP) Fragmentation Data

    DTIC Science & Technology

    2010-06-01

    phase microextraction. Anal. Chem., 69, 1866-72. [39] Sng, M.T. and Ng, W.F. (1999). In-situ derivatisation of degradation products of chemical warfare...Chromatogr. A., 1141, 151-157. [46] Lee, H.S.N., Sng, M.T., Basheer, C., and Lee, H.K. (2007). Determination of degradation products of chemical

  18. DataForge: Modular platform for data storage and analysis

    NASA Astrophysics Data System (ADS)

    Nozik, Alexander

    2018-04-01

    DataForge is a framework for automated data acquisition, storage, and analysis based on modern applied-programming practice. The aim of DataForge is to automate standard tasks such as parallel data processing, logging, output sorting, and distributed computing. The framework also makes extensive use of declarative programming principles through a metadata concept, which allows a certain degree of meta-programming and improves the reproducibility of results.

  19. Polarization lidar for atmospheric monitoring

    NASA Astrophysics Data System (ADS)

    Liu, Qiaojun; Wu, Chengxuan; Yuk Sun Cheng, Andrew; Wang, Zhangjun; Meng, Xiangqian; Chen, Chao; Li, Xianxin; Liu, Xingtao; Zhang, Hao; Zong, Fangyi

    2018-04-01

    Aerosols play an important role in global climate and weather changes. Polarization lidar captures parallel- and perpendicular-polarized signals from the atmosphere to study aerosols. The lidar system we used has three emission wavelengths and can obtain the atmospheric aerosol extinction coefficient, backscattering coefficient, and depolarization ratio. In this paper, the design of the lidar is described, the methods of data acquisition and inversion are given, and some recent results are presented.

  20. High density event-related potential data acquisition in cognitive neuroscience.

    PubMed

    Slotnick, Scott D

    2010-04-16

    Functional magnetic resonance imaging (fMRI) is currently the standard method of evaluating brain function in the field of Cognitive Neuroscience, in part because fMRI data acquisition and analysis techniques are readily available. Because fMRI has excellent spatial resolution but poor temporal resolution, this method can only be used to identify the spatial location of brain activity associated with a given cognitive process (and reveals virtually nothing about the time course of brain activity). By contrast, event-related potential (ERP) recording, a method that is used much less frequently than fMRI, has excellent temporal resolution and thus can track rapid temporal modulations in neural activity. Unfortunately, ERPs are underutilized in Cognitive Neuroscience because data acquisition techniques are not readily available and low-density ERP recording has poor spatial resolution. In an effort to foster the increased use of ERPs in Cognitive Neuroscience, the present article details key techniques involved in high density ERP data acquisition. Critically, high density ERPs offer the promise of excellent temporal resolution and good spatial resolution (or excellent spatial resolution if coupled with fMRI), which is necessary to capture the spatial-temporal dynamics of human brain function.

  1. Improving both imaging speed and spatial resolution in MR-guided neurosurgery

    NASA Astrophysics Data System (ADS)

    Liu, Haiying; Hall, Walter A.; Truwit, Charles L.

    2002-05-01

    A robust near real-time MRI-based surgical guidance scheme has been developed and used in neurosurgical procedures performed in our combined 1.5 Tesla MR operating room. Because of the increased susceptibility difference in the area of the surgical site during surgery, the preferred real-time imaging technique is a single-shot imaging sequence based on the concept of half acquisition with turbo spin echoes (HASTE). In order to maintain sufficient spatial resolution for visualizing surgical devices, such as a biopsy needle and catheter, we used a focused field of view (FOV) in the phase-encoding (PE) direction coupled with an out-volume signal suppression (OVS) technique. The key concept of the method is to minimize the total number of required phase-encoding steps and the effective echo time (TE), as well as the longest TE for the high spatial encoding step. The concept was first demonstrated with a phantom experiment, which showed that when the water was doped with Gd-DTPA to match the relaxation rates of brain tissue, there was significant spatial blurring, primarily along the phase-encoding direction, with the conventional HASTE technique, and that the new scheme indeed minimized the spatial blur in the resulting image and improved needle visualization as anticipated. Using the new scheme in a typical MR-guided neurobiopsy procedure, the brain biopsy needle was easily seen against the tissue background with minimal blurring due to the inevitable T2 signal decay, even when the PE direction was set parallel to the needle axis. This MR-based guidance technique has allowed neurosurgeons to visualize the biopsy needle and to monitor its insertion with better certainty at a near real-time pace.

  2. PROPELLER technique to improve image quality of MRI of the shoulder.

    PubMed

    Dietrich, Tobias J; Ulbrich, Erika J; Zanetti, Marco; Fucentese, Sandro F; Pfirrmann, Christian W A

    2011-12-01

    The purpose of this article is to evaluate the use of the periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) technique for artifact reduction and overall image quality improvement for intermediate-weighted and T2-weighted MRI of the shoulder. One hundred eleven patients undergoing MR arthrography of the shoulder were included. A coronal oblique intermediate-weighted turbo spin-echo (TSE) sequence with fat suppression and a sagittal oblique T2-weighted TSE sequence with fat suppression were obtained without (standard) and with the PROPELLER technique. Scanning time increased from 3 minutes 17 seconds to 4 minutes 17 seconds (coronal oblique plane) and from 2 minutes 52 seconds to 4 minutes 10 seconds (sagittal oblique) using PROPELLER. Two radiologists graded image artifacts, overall image quality, and delineation of several anatomic structures on a 5-point scale (5, no artifact, optimal diagnostic quality; and 1, severe artifacts, diagnostically not usable). The Wilcoxon signed rank test was used to compare the data of the standard and PROPELLER images. Motion artifacts were significantly reduced in PROPELLER images (p < 0.001). Observer 1 rated motion artifacts with diagnostic impairment in one patient on coronal oblique PROPELLER images compared with 33 patients on standard images. Ratings for the sequences with PROPELLER were significantly better for overall image quality (p < 0.001). Observer 1 noted an overall image quality with diagnostic impairment in nine patients on sagittal oblique PROPELLER images compared with 23 patients on standard MRI. The PROPELLER technique for MRI of the shoulder reduces the number of sequences with diagnostic impairment as a result of motion artifacts and increases image quality compared with standard TSE sequences. PROPELLER sequences increase the acquisition time.

  3. Spatial Angular Compounding Technique for H-Scan Ultrasound Imaging.

    PubMed

    Khairalseed, Mawia; Xiong, Fangyuan; Kim, Jung-Whan; Mattrey, Robert F; Parker, Kevin J; Hoyt, Kenneth

    2018-01-01

    H-Scan is a new ultrasound imaging technique that relies on matching a model of pulse-echo formation to the mathematics of a class of Gaussian-weighted Hermite polynomials. This technique may be beneficial in the measurement of relative scatterer sizes and in cancer therapy, particularly for early response to drug treatment. Because current H-scan techniques use focused ultrasound data acquisitions, spatial resolution degrades away from the focal region and inherently affects relative scatterer size estimation. Although the resolution of ultrasound plane wave imaging can be inferior to that of traditional focused ultrasound approaches, the former exhibits a homogeneous spatial resolution throughout the image plane. The purpose of this study was to implement H-scan using plane wave imaging and investigate the impact of spatial angular compounding on H-scan image quality. Parallel convolution filters using two different Gaussian-weighted Hermite polynomials that describe ultrasound scattering events are applied to the radiofrequency data. The H-scan processing is done on each radiofrequency image plane before averaging to get the angular compounded image. The relative strength from each convolution is color-coded to represent relative scatterer size. Given results from a series of phantom materials, H-scan imaging with spatial angular compounding more accurately reflects the true scatterer size caused by reductions in the system point spread function and improved signal-to-noise ratio. Preliminary in vivo H-scan imaging of tumor-bearing animals suggests this modality may be useful for monitoring early response to chemotherapeutic treatment. Overall, H-scan imaging using ultrasound plane waves and spatial angular compounding is a promising approach for visualizing the relative size and distribution of acoustic scattering sources. Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
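
    As a rough sketch of the H-scan colour-coding step described above, the code below convolves an RF line with two Gaussian-weighted Hermite kernels and maps the relative envelope strength of the two outputs to two colour channels. The kernel orders, normalization, and channel mapping are assumptions for illustration; angular compounding would simply average such maps over the steered plane-wave angles.

    # Hedged sketch: H-scan-style parallel convolution of one RF line with two
    # Gaussian-weighted Hermite kernels, then relative-strength colour coding.
    import numpy as np
    from numpy.polynomial.hermite import hermval

    def gh_kernel(order, n_samples=64):
        t = np.linspace(-3, 3, n_samples)
        coeffs = np.zeros(order + 1)
        coeffs[order] = 1.0
        k = hermval(t, coeffs) * np.exp(-t ** 2)     # Gaussian-weighted Hermite
        return k / np.linalg.norm(k)

    def hscan_line(rf_line, low_order=2, high_order=8):
        lo = np.abs(np.convolve(rf_line, gh_kernel(low_order), mode="same"))
        hi = np.abs(np.convolve(rf_line, gh_kernel(high_order), mode="same"))
        red = hi / (lo + hi + 1e-12)    # relative strength of higher-order output
        blue = lo / (lo + hi + 1e-12)   # relative strength of lower-order output
        return red, blue

    rf = np.random.default_rng(0).normal(size=2048)  # placeholder RF data
    r, b = hscan_line(rf)
    print(r.shape, float(r.mean()))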

  4. Combining fluorescence imaging with Hi-C to study 3D genome architecture of the same single cell.

    PubMed

    Lando, David; Basu, Srinjan; Stevens, Tim J; Riddell, Andy; Wohlfahrt, Kai J; Cao, Yang; Boucher, Wayne; Leeb, Martin; Atkinson, Liam P; Lee, Steven F; Hendrich, Brian; Klenerman, Dave; Laue, Ernest D

    2018-05-01

    Fluorescence imaging and chromosome conformation capture assays such as Hi-C are key tools for studying genome organization. However, traditionally, they have been carried out independently, making integration of the two types of data difficult to perform. By trapping individual cell nuclei inside a well of a 384-well glass-bottom plate with an agarose pad, we have established a protocol that allows both fluorescence imaging and Hi-C processing to be carried out on the same single cell. The protocol identifies 30,000-100,000 chromosome contacts per single haploid genome in parallel with fluorescence images. Contacts can be used to calculate intact genome structures to better than 100-kb resolution, which can then be directly compared with the images. Preparation of 20 single-cell Hi-C libraries using this protocol takes 5 d of bench work by researchers experienced in molecular biology techniques. Image acquisition and analysis require basic understanding of fluorescence microscopy, and some bioinformatics knowledge is required to run the sequence-processing tools described here.

  5. Evolution and Advances in Satellite Analysis of Volcanoes

    NASA Astrophysics Data System (ADS)

    Dean, K. G.; Dehn, J.; Webley, P.; Bailey, J.

    2008-12-01

    Over the past 20 years satellite data used for monitoring and analysis of volcanic eruptions has evolved in terms of timeliness, access, distribution, resolution and understanding of volcanic processes. Initially satellite data was used for retrospective analysis but has evolved to proactive monitoring systems. Timely acquisition of data and the capability to distribute large data files paralleled advances in computer technology and was a critical component for near real-time monitoring. The sharing of these data and resulting discussions has improved our understanding of eruption processes and, even more importantly, their impact on society. To illustrate this evolution, critical scientific discoveries will be highlighted, including detection of airborne ash and sulfur dioxide, cloud-height estimates, prediction of ash cloud movement, and detection of thermal anomalies as precursor-signals to eruptions. AVO has been a leader in implementing many of these advances into an operational setting such as, automated eruption detection, database analysis systems, and remotely accessible web-based analysis systems. Finally, limitations resulting from trade-offs between resolution and how they impact some weakness in detection techniques and hazard assessments will be presented.

  6. Integrated optical 3D digital imaging based on DSP scheme

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

    2008-03-01

    We present a scheme of integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently without PC support. The scheme is built on a parallel hardware structure using a DSP and a field programmable gate array (FPGA) to realize 3-D imaging. In this integrated 3-D imaging scheme, phase measurement profilometry is adopted. To realize pipelined processing of fringe projection, image acquisition, and fringe pattern analysis, we present a multi-threaded application program developed under the DSP/BIOS RTOS (real-time operating system). Since the RTOS provides a preemptive kernel and a powerful configuration tool, we are able to achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme reaches a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.
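
    The abstract does not give the fringe-analysis formula, so the sketch below shows the textbook four-step phase-shifting calculation that phase measurement profilometry is commonly built on, followed by NumPy's simple 1D unwrapping. It is a minimal illustration with synthetic fringes, not the DSP/FPGA pipeline of the paper.

    # Hedged sketch: four-step phase-shifting calculation and row-wise unwrapping.
    import numpy as np

    def four_step_phase(i1, i2, i3, i4):
        """Wrapped phase from four fringe images shifted by 0, 90, 180, 270 deg."""
        return np.arctan2(i4 - i2, i1 - i3)

    def unwrap_rows(wrapped):
        return np.unwrap(wrapped, axis=1)            # simple row-wise unwrapping

    # Synthetic fringes: intensity I_k = A + B*cos(phi + k*pi/2)
    y, x = np.mgrid[0:240, 0:320]
    phi_true = 0.05 * x + 0.02 * y
    imgs = [100 + 50 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
    phi = unwrap_rows(four_step_phase(*imgs))
    # Relative phase along each row should match the ground truth (error ~ 0).
    print(float(np.abs((phi - phi[:, :1]) - (phi_true - phi_true[:, :1])).max()))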

  7. Hyperspectral Imaging and Spectroscopy of Fluorescently Coupled Acyl-CoA: Cholesterol Acyltransferase in Insect Cells

    NASA Technical Reports Server (NTRS)

    Malak, H.; Mahtani, H.; Herman, P.; Vecer, J.; Lu, X.; Chang, T. Y.; Richmond, Robert C.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    A high-performance hyperspectral imaging module with high throughput of light suitable for low-intensity fluorescence microscopic imaging and subsequent analysis, including single-pixel-defined emission spectroscopy, was tested on Sf21 insect cells expressing green fluorescence associated with recombinant green fluorescent protein linked or not with the membrane protein acyl-CoA:cholesterol acyltransferase. The imager utilized the phenomenon of optical activity as a new technique providing information over a spectral range of 220-1400 nm, and was inserted between the microscope and an 8-bit CCD video-rate camera. The resulting fluorescence image did not introduce observable image aberrations. The images provided parallel acquisition of well resolved concurrent spatial and spectral information such that fluorescence associated with green fluorescent protein alone was demonstrated to be diffuse within the Sf21 insect cell, and that green fluorescence associated with the membrane protein was shown to be specifically concentrated within regions of the cell cytoplasm. Emission spectra analyzed from different regions of the fluorescence image showed blue shift specific for the regions of concentration associated with the membrane protein.

  8. Management of Transjugular Intrahepatic Portosystemic Shunt (TIPS)-associated Refractory Hepatic Encephalopathy by Shunt Reduction Using the Parallel Technique: Outcomes of a Retrospective Case Series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cookson, Daniel T., E-mail: danielthomascookson@yahoo.co.uk; Zaman, Zubayr; Gordon-Smith, James

    2011-02-15

    Purpose: To investigate the reproducibility and technical and clinical success of the parallel technique of transjugular intrahepatic portosystemic shunt (TIPS) reduction in the management of refractory hepatic encephalopathy (HE). Materials and Methods: A 10-mm-diameter self-expanding stent graft and a 5-6-mm-diameter balloon-expandable stent were placed in parallel inside the existing TIPS in 8 patients via a dual unilateral transjugular approach. Changes in portosystemic pressure gradient and HE grade were used as primary end points. Results: TIPS reduction was technically successful in all patients. Mean ± standard deviation portosystemic pressure gradient before and after shunt reduction was 4.9 ± 3.6 mmHg (range, 0-12 mmHg) and 10.5 ± 3.9 mmHg (range, 6-18 mmHg). Duration of follow-up was 137 ± 117.8 days (range, 18-326 days). Clinical improvement of HE occurred in 5 patients (62.5%) with resolution of HE in 4 patients (50%). Single episodes of recurrent gastrointestinal hemorrhage occurred in 3 patients (37.5%). These were self-limiting in 2 cases and successfully managed in 1 case by correction of coagulopathy and blood transfusion. Two of these patients (25%) died, one each of renal failure and hepatorenal failure. Conclusion: The parallel technique of TIPS reduction is reproducible and has a high technical success rate. A dual unilateral transjugular approach is advantageous when performing this procedure. The parallel technique allows repeat bidirectional TIPS adjustment and may be of significant clinical benefit in the management of refractory HE.

  9. Self-organizing map models of language acquisition

    PubMed Central

    Li, Ping; Zhao, Xiaowei

    2013-01-01

    Connectionist models have had a profound impact on theories of language. While most early models were inspired by the classic parallel distributed processing architecture, recent models of language have explored various other types of models, including self-organizing models for language acquisition. In this paper, we aim at providing a review of the latter type of models, and highlight a number of simulation experiments that we have conducted based on these models. We show that self-organizing connectionist models can provide significant insights into long-standing debates in both monolingual and bilingual language development. We suggest future directions in which these models can be extended, to better connect with behavioral and neural data, and to make clear predictions in testing relevant psycholinguistic theories. PMID:24312061
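
    For readers unfamiliar with the underlying mechanics, the sketch below implements the basic self-organizing map update rule these language-acquisition models build on: find the best-matching unit for an input vector and pull it and its map neighbours toward the input. It is a generic toy SOM with placeholder word vectors, not the specific lexical models reviewed in the paper.

    # Hedged sketch: generic self-organizing map training loop.
    import numpy as np

    def train_som(data, map_shape=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
        rng = np.random.default_rng(seed)
        h, w = map_shape
        weights = rng.random((h, w, data.shape[1]))
        grid_y, grid_x = np.mgrid[0:h, 0:w]
        n_steps, step = epochs * len(data), 0
        for _ in range(epochs):
            for x in rng.permutation(data):
                frac = step / n_steps
                lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
                # Best-matching unit for the current input vector.
                d2 = np.sum((weights - x) ** 2, axis=2)
                by, bx = np.unravel_index(np.argmin(d2), d2.shape)
                # Gaussian neighbourhood on the map grid pulls nearby units too.
                g = np.exp(-((grid_y - by) ** 2 + (grid_x - bx) ** 2) / (2 * sigma ** 2))
                weights += lr * g[..., None] * (x - weights)
                step += 1
        return weights

    words = np.random.default_rng(1).random((200, 16))   # placeholder word vectors
    som = train_som(words)
    print(som.shape)                                     # (10, 10, 16)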

  10. 48 CFR 1615.170 - Applicability.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Applicability. 1615.170 Section 1615.170 Federal Acquisition Regulations System OFFICE OF PERSONNEL MANAGEMENT FEDERAL EMPLOYEES... Source Selection Processes and Techniques. 1615.170 Applicability. FAR subpart 15.1 has no practical...

  11. 48 CFR 2115.170 - Applicability.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Applicability. 2115.170 Section 2115.170 Federal Acquisition Regulations System OFFICE OF PERSONNEL MANAGEMENT, FEDERAL EMPLOYEES... BY NEGOTIATION Source Selection Processes and Techniques 2115.170 Applicability. FAR subpart 15.1 has...

  12. 48 CFR 1615.170 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Applicability. 1615.170 Section 1615.170 Federal Acquisition Regulations System OFFICE OF PERSONNEL MANAGEMENT FEDERAL EMPLOYEES... Source Selection Processes and Techniques. 1615.170 Applicability. FAR subpart 15.1 has no practical...

  13. 48 CFR 1615.170 - Applicability.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Applicability. 1615.170 Section 1615.170 Federal Acquisition Regulations System OFFICE OF PERSONNEL MANAGEMENT FEDERAL EMPLOYEES... Source Selection Processes and Techniques 1615.170 Applicability. FAR subpart 15.1 has no practical...

  14. 48 CFR 1615.170 - Applicability.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Applicability. 1615.170 Section 1615.170 Federal Acquisition Regulations System OFFICE OF PERSONNEL MANAGEMENT FEDERAL EMPLOYEES... Source Selection Processes and Techniques 1615.170 Applicability. FAR subpart 15.1 has no practical...

  15. 48 CFR 2115.170 - Applicability.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Applicability. 2115.170 Section 2115.170 Federal Acquisition Regulations System OFFICE OF PERSONNEL MANAGEMENT, FEDERAL EMPLOYEES... BY NEGOTIATION Source Selection Processes and Techniques 2115.170 Applicability. FAR subpart 15.1 has...

  16. 48 CFR 2115.170 - Applicability.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Applicability. 2115.170 Section 2115.170 Federal Acquisition Regulations System OFFICE OF PERSONNEL MANAGEMENT, FEDERAL EMPLOYEES... BY NEGOTIATION Source Selection Processes and Techniques 2115.170 Applicability. FAR subpart 15.1 has...

  17. Observations of large parallel electric fields in the auroral ionosphere

    NASA Technical Reports Server (NTRS)

    Mozer, F. S.

    1976-01-01

    Rocket borne measurements employing a double probe technique were used to gather evidence for the existence of electric fields in the auroral ionosphere having components parallel to the magnetic field direction. An analysis of possible experimental errors leads to the conclusion that no known uncertainties can account for the roughly 10 mV/m parallel electric fields that are observed.

  18. Parallel gene analysis with allele-specific padlock probes and tag microarrays

    PubMed Central

    Banér, Johan; Isaksson, Anders; Waldenström, Erik; Jarvius, Jonas; Landegren, Ulf; Nilsson, Mats

    2003-01-01

    Parallel, highly specific analysis methods are required to take advantage of the extensive information about DNA sequence variation and of expressed sequences. We present a scalable laboratory technique suitable to analyze numerous target sequences in multiplexed assays. Sets of padlock probes were applied to analyze single nucleotide variation directly in total genomic DNA or cDNA for parallel genotyping or gene expression analysis. All reacted probes were then co-amplified and identified by hybridization to a standard tag oligonucleotide array. The technique was illustrated by analyzing normal and pathogenic variation within the Wilson disease-related ATP7B gene, both at the level of DNA and RNA, using allele-specific padlock probes. PMID:12930977

  19. Parallel compression of data chunks of a shared data object using a log-structured file system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-10-25

    Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ Log-Structured File techniques. The compressed data chunk can be de-compressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node and storing the compressed version of the data chunk to the shared data object on the storage node.
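
    A minimal sketch of the client-side idea (compress each chunk, append it to a log-structured shared object, and keep a small index for later reads) is shown below in Python. The zlib codec, the process pool, and all function names are illustrative assumptions, not the actual system described in the record.

      import zlib
      from concurrent.futures import ProcessPoolExecutor

      def compress_chunk(args):
          """Compress one data chunk; the offset tags where it belongs in the shared object."""
          offset, chunk = args
          return offset, zlib.compress(chunk)

      def write_shared_object(chunks, path):
          """Compress chunks in parallel, then append them to a log-structured file
          together with a tiny index of (chunk offset, file position, compressed length)."""
          index = []
          with ProcessPoolExecutor() as pool, open(path, "wb") as log:
              for offset, blob in pool.map(compress_chunk, enumerate(chunks)):
                  index.append((offset, log.tell(), len(blob)))
                  log.write(blob)  # data is only ever appended, log-structured style
          return index  # a real system would persist this index alongside the log

      def read_chunk(path, index_entry):
          """De-compress one chunk on read, mirroring the client-side read path."""
          offset, pos, length = index_entry
          with open(path, "rb") as log:
              log.seek(pos)
              return zlib.decompress(log.read(length))

      if __name__ == "__main__":
          chunks = [bytes([i]) * 4096 for i in range(8)]
          idx = write_shared_object(chunks, "shared_object.log")
          assert read_chunk("shared_object.log", idx[3]) == chunks[3]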

  20. A spherical parallel three degrees-of-freedom robot for ankle-foot neuro-rehabilitation.

    PubMed

    Malosio, Matteo; Negri, Simone Pio; Pedrocchi, Nicola; Vicentini, Federico; Caimmi, Marco; Molinari Tosatti, Lorenzo

    2012-01-01

    The ankle represents a fairly complex bone structure, resulting in kinematics that hinders a flawless robot-assisted recovery of foot motility in impaired subjects. The paper proposes a novel device for ankle-foot neuro-rehabilitation based on a mechatronic redesign of the remarkable Agile Eye spherical robot on the basis of clinical requisites. The kinematic design allows the positioning of the ankle articular center close to the machine rotation center, with valuable benefits in terms of therapy functions. The prototype, named PKAnkle, Parallel Kinematic machine for Ankle rehabilitation, provides a six-axis load cell for measuring subject interaction forces/torques, and it integrates a commercial EMG-acquisition system. Robot control provides active and passive therapeutic exercises.

  1. 77 FR 40552 - Federal Acquisition Regulation; Price Analysis Techniques

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-10

    ... Regulation; Price Analysis Techniques AGENCY: Department of Defense (DoD), General Services Administration... price analysis technique in order to establish a fair and reasonable price. DATES: Interested parties....404-1(b)(2) addresses various price analysis techniques and procedures the Government may use to...

  2. Programming Probabilistic Structural Analysis for Parallel Processing Computer

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Chamis, Christos C.; Murthy, Pappu L. N.

    1991-01-01

    The ultimate goal of this research program is to make Probabilistic Structural Analysis (PSA) computationally efficient and hence practical for the design environment by achieving large scale parallelism. The paper identifies the multiple levels of parallelism in PSA, identifies methodologies for exploiting this parallelism, describes the development of a parallel stochastic finite element code, and presents results of two example applications. It is demonstrated that speeds within five percent of those theoretically possible can be achieved. A special-purpose numerical technique, the stochastic preconditioned conjugate gradient method, is also presented and demonstrated to be extremely efficient for certain classes of PSA problems.
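
    The stochastic preconditioned conjugate gradient method itself is not reproduced here, but the deterministic kernel it builds on can be sketched. The following is a generic Jacobi-preconditioned conjugate gradient solver for symmetric positive-definite systems; the test matrix, tolerances, and iteration limit are assumptions.

      import numpy as np

      def jacobi_pcg(A, b, tol=1e-10, max_iter=500):
          """Generic Jacobi-preconditioned conjugate gradient for SPD systems A x = b."""
          M_inv = 1.0 / np.diag(A)            # Jacobi preconditioner: inverse diagonal
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv * r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = M_inv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      # small SPD test system
      rng = np.random.default_rng(0)
      B = rng.random((50, 50))
      A = B @ B.T + 50 * np.eye(50)
      b = rng.random(50)
      x = jacobi_pcg(A, b)
      print(np.allclose(A @ x, b, atol=1e-6))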

  3. A Low Cost Concept for Data Acquisition Systems Applied to Decentralized Renewable Energy Plants

    PubMed Central

    Jucá, Sandro C. S.; Carvalho, Paulo C. M.; Brito, Fábio T.

    2011-01-01

    The present paper describes experiences of the use of monitoring and data acquisition systems (DAS) and proposes a new concept of a low cost DAS applied to decentralized renewable energy (RE) plants with a USB interface. The use of such systems contributes to the dissemination of these plants by recognizing local energy resources in real time, monitoring energy conversion efficiency and sending information concerning failures. These aspects are important, mainly for developing countries, where decentralized power plants based on renewable sources are in some cases the best option for supplying electricity to rural areas. Nevertheless, the cost of commercial DAS is still a barrier for a greater dissemination of such systems in developing countries. The proposed USB-based DAS presents a new dual-clock operation philosophy, in which the acquisition system contains two clock sources for parallel information processing from different communication protocols. To ensure the low cost of the DAS and to promote the dissemination of this technology in developing countries, the proposed data acquisition firmware and the software for USB microcontroller programming are free and open-source software, executable on the Linux and Windows® operating systems. PMID:22346600

  4. Obstacle Avoidance and Target Acquisition for Robot Navigation Using a Mixed Signal Analog/Digital Neuromorphic Processing System

    PubMed Central

    Milde, Moritz B.; Blum, Hermann; Dietmüller, Alexander; Sumislawska, Dora; Conradt, Jörg; Indiveri, Giacomo; Sandamirskaya, Yulia

    2017-01-01

    Neuromorphic hardware emulates the dynamics of biological neural networks in electronic circuits, offering an alternative to the von Neumann computing architecture that is low-power, inherently parallel, and event-driven. This hardware makes it possible to implement neural-network-based robotic controllers in an energy-efficient way with low latency, but it requires solving the problem of device variability characteristic of analog electronic circuits. In this work, we interfaced the mixed-signal analog-digital neuromorphic processor ROLLS to a neuromorphic dynamic vision sensor (DVS) mounted on a robotic vehicle and developed an autonomous neuromorphic agent that is able to perform neurally inspired obstacle-avoidance and target acquisition. We developed a neural network architecture that can cope with device variability and verified its robustness in different environmental situations, e.g., moving obstacles, a moving target, clutter, and poor lighting conditions. We demonstrate how this network, combined with the properties of the DVS, allows the robot to avoid obstacles using simple biologically inspired dynamics. We also show how a Dynamic Neural Field for target acquisition can be implemented in spiking neuromorphic hardware. This work demonstrates an implementation of working obstacle avoidance and target acquisition using mixed signal analog/digital neuromorphic hardware. PMID:28747883

  5. Anisotropic field-of-view shapes for improved PROPELLER imaging

    PubMed Central

    Larson, Peder E.Z.; Lustig, Michael S.; Nishimura, Dwight G.

    2010-01-01

    The Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction (PROPELLER) method for magnetic resonance imaging data acquisition and reconstruction has the highly desirable property of being able to correct for motion during the scan, making it especially useful for imaging pediatric or uncooperative patients and diffusion imaging. This method nominally supports a circular field of view (FOV), but tailoring the FOV for noncircular shapes results in more efficient, shorter scans. This article presents new algorithms for tailoring PROPELLER acquisitions to the desired FOV shape and size that are flexible and precise. The FOV design also allows for rotational motion which provides better motion correction and reduced aliasing artifacts. Some possible FOV shapes demonstrated are ellipses, ovals and rectangles, and any convex, pi-symmetric shape can be designed. Standard PROPELLER reconstruction is used with minor modifications, and results with simulated motion presented confirm the effectiveness of the motion correction with these modified FOV shapes. These new acquisition design algorithms are simple and fast enough to be computed for each individual scan. Also presented are algorithms for further scan time reductions in PROPELLER echo-planar imaging (EPI) acquisitions by varying the sample spacing in two directions within each blade. PMID:18818039

  6. Data acquisition for the new muon g-2 experiment at Fermilab

    DOE PAGES

    Gohn, Wesley

    2015-12-23

    A new measurement of the anomalous magnetic moment of the muon, aμ ≡ (g - 2)/2, will be performed at the Fermi National Accelerator Laboratory. The most recent measurement, performed at Brookhaven National Laboratory and completed in 2001, shows a 3.3-3.6 standard deviation discrepancy with the Standard Model predictions for aμ. The new measurement will accumulate 21 times those statistics, measuring aμ to 140 ppb and reducing the uncertainty by a factor of 4. The data acquisition system for this experiment must have the ability to record deadtime-free records from 700 μs muon spills at a raw data rate of 18 GB per second. Data will be collected using 1296 channels of μTCA-based 800 MSPS, 12 bit waveform digitizers and processed in a layered array of networked commodity processors with 24 GPUs working in parallel to perform a fast recording and processing of detector signals during the spill. The system will be controlled using the MIDAS data acquisition software package. Lastly, the described data acquisition system is currently being constructed, and will be fully operational before the start of the experiment in 2017.

  7. A low cost concept for data acquisition systems applied to decentralized renewable energy plants.

    PubMed

    Jucá, Sandro C S; Carvalho, Paulo C M; Brito, Fábio T

    2011-01-01

    The present paper describes experiences of the use of monitoring and data acquisition systems (DAS) and proposes a new concept of a low cost DAS applied to decentralized renewable energy (RE) plants with a USB interface. The use of such systems contributes to the dissemination of these plants by recognizing local energy resources in real time, monitoring energy conversion efficiency and sending information concerning failures. These aspects are important, mainly for developing countries, where decentralized power plants based on renewable sources are in some cases the best option for supplying electricity to rural areas. Nevertheless, the cost of commercial DAS is still a barrier for a greater dissemination of such systems in developing countries. The proposed USB-based DAS presents a new dual-clock operation philosophy, in which the acquisition system contains two clock sources for parallel information processing from different communication protocols. To ensure the low cost of the DAS and to promote the dissemination of this technology in developing countries, the proposed data acquisition firmware and the software for USB microcontroller programming are free and open-source software, executable on the Linux and Windows® operating systems.

  8. Obstacle Avoidance and Target Acquisition for Robot Navigation Using a Mixed Signal Analog/Digital Neuromorphic Processing System.

    PubMed

    Milde, Moritz B; Blum, Hermann; Dietmüller, Alexander; Sumislawska, Dora; Conradt, Jörg; Indiveri, Giacomo; Sandamirskaya, Yulia

    2017-01-01

    Neuromorphic hardware emulates the dynamics of biological neural networks in electronic circuits, offering an alternative to the von Neumann computing architecture that is low-power, inherently parallel, and event-driven. This hardware makes it possible to implement neural-network-based robotic controllers in an energy-efficient way with low latency, but it requires solving the problem of device variability characteristic of analog electronic circuits. In this work, we interfaced the mixed-signal analog-digital neuromorphic processor ROLLS to a neuromorphic dynamic vision sensor (DVS) mounted on a robotic vehicle and developed an autonomous neuromorphic agent that is able to perform neurally inspired obstacle-avoidance and target acquisition. We developed a neural network architecture that can cope with device variability and verified its robustness in different environmental situations, e.g., moving obstacles, a moving target, clutter, and poor lighting conditions. We demonstrate how this network, combined with the properties of the DVS, allows the robot to avoid obstacles using simple biologically inspired dynamics. We also show how a Dynamic Neural Field for target acquisition can be implemented in spiking neuromorphic hardware. This work demonstrates an implementation of working obstacle avoidance and target acquisition using mixed signal analog/digital neuromorphic hardware.

  9. MRXCAT: Realistic numerical phantoms for cardiovascular magnetic resonance

    PubMed Central

    2014-01-01

    Background Computer simulations are important for validating novel image acquisition and reconstruction strategies. In cardiovascular magnetic resonance (CMR), numerical simulations need to combine anatomical information and the effects of cardiac and/or respiratory motion. To this end, a framework for realistic CMR simulations is proposed and its use for image reconstruction from undersampled data is demonstrated. Methods The extended Cardiac-Torso (XCAT) anatomical phantom framework with various motion options was used as a basis for the numerical phantoms. Different tissue, dynamic contrast and signal models, multiple receiver coils and noise are simulated. Arbitrary trajectories and undersampled acquisition can be selected. The utility of the framework is demonstrated for accelerated cine and first-pass myocardial perfusion imaging using k-t PCA and k-t SPARSE. Results MRXCAT phantoms allow for realistic simulation of CMR including optional cardiac and respiratory motion. Example reconstructions from simulated undersampled k-t parallel imaging demonstrate the feasibility of simulated acquisition and reconstruction using the presented framework. Myocardial blood flow assessment from simulated myocardial perfusion images highlights the suitability of MRXCAT for quantitative post-processing simulation. Conclusion The proposed MRXCAT phantom framework enables versatile and realistic simulations of CMR including breathhold and free-breathing acquisitions. PMID:25204441

  10. Data Acquisition for the New Muon g-2 Experiment at Fermilab

    NASA Astrophysics Data System (ADS)

    Gohn, Wesley

    2015-12-01

    A new measurement of the anomalous magnetic moment of the muon,aμ≡ (g - 2)/2, will be performed at the Fermi National Accelerator Laboratory. The most recent measurement, performed at Brookhaven National Laboratory and completed in 2001, shows a 3.3-3.6 standard deviation discrepancy with the Standard Model predictions for aμ. The new measurement will accumulate 21 times those statistics, measuring aμ to 140 ppb and reducing the uncertainty by a factor of 4. The data acquisition system for this experiment must have the ability to record deadtime-free records from 700 μs muon spills at a raw data rate of 18 GB per second. Data will be collected using 1296 channels of μTCA-based 800 MHz, 12 bit waveform digitizers and processed in a layered array of networked commodity processors with 24 GPUs working in parallel to perform a fast recording and processing of detector signals during the spill. The system will be controlled using the MIDAS data acquisition software package. The described data acquisition system is currently being constructed, and will be fully operational before the start of the experiment in 2017.

  11. Non-water-suppressed short-echo-time magnetic resonance spectroscopic imaging using a concentric ring k-space trajectory.

    PubMed

    Emir, Uzay E; Burns, Brian; Chiew, Mark; Jezzard, Peter; Thomas, M Albert

    2017-07-01

    Water-suppressed MRS acquisition techniques have been the standard MRS approach used in research and for clinical scanning to date. The acquisition of a non-water-suppressed MRS spectrum is used for artefact correction, reconstruction of phased-array coil data and metabolite quantification. Here, a two-scan metabolite-cycling magnetic resonance spectroscopic imaging (MRSI) scheme that does not use water suppression is demonstrated and evaluated. Specifically, the feasibility of acquiring and quantifying short-echo (TE = 14 ms), two-dimensional stimulated echo acquisition mode (STEAM) MRSI spectra in the motor cortex is demonstrated on a 3 T MRI system. The increase in measurement time from the metabolite-cycling is counterbalanced by a time-efficient concentric ring k-space trajectory. To validate the technique, water-suppressed MRSI acquisitions were also performed for comparison. The proposed non-water-suppressed metabolite-cycling MRSI technique was tested for detection and correction of resonance frequency drifts due to subject motion and/or hardware instability, and the feasibility of high-resolution metabolic mapping over a whole brain slice was assessed. Our results show that the metabolite spectra and estimated concentrations are in agreement between non-water-suppressed and water-suppressed techniques. The achieved spectral quality, signal-to-noise ratio (SNR) > 20 and linewidth < 7 Hz, allowed reliable metabolic mapping of five major brain metabolites in the motor cortex with an in-plane resolution of 10 × 10 mm² in 8 min and with a Cramér-Rao lower bound of less than 20% using LCModel analysis. In addition, the high SNR of the water peak of the non-water-suppressed technique enabled voxel-wise single-scan frequency, phase and eddy current correction. These findings demonstrate that our non-water-suppressed metabolite-cycling MRSI technique can perform robustly on 3 T MRI systems and within a clinically feasible acquisition time. © 2017 The Authors. NMR in Biomedicine published by John Wiley & Sons Ltd.

  12. Concurrent Probabilistic Simulation of High Temperature Composite Structural Response

    NASA Technical Reports Server (NTRS)

    Abdi, Frank

    1996-01-01

    A computational structural/material analysis and design tool that would meet industry's future demand for expedience and reduced cost is presented. This unique software, 'GENOA', is dedicated to parallel and high-speed analysis to perform probabilistic evaluation of the high-temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives achieved in performing the development were: (1) utilization of the power of parallel processing and static/dynamic load-balancing optimization to make the complex simulation of structure, material and processing of high-temperature composites affordable; (2) computational integration and synchronization of probabilistic mathematics, structural/material mechanics and parallel computing; (3) implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism and increase convergence rates through high- and low-level processor assignment; (4) creation of the framework for a portable parallel architecture for machine-independent Multiple Instruction Multiple Data (MIMD), Single Instruction Multiple Data (SIMD), hybrid, and distributed workstation types of computers; and (5) market evaluation. The results of the Phase 2 effort provide a good basis for continuation and warrant a Phase 3 government and industry partnership.

  13. Development of Kinematic 3D Laser Scanning System for Indoor Mapping and As-Built BIM Using Constrained SLAM

    PubMed Central

    Jung, Jaehoon; Yoon, Sanghyun; Ju, Sungha; Heo, Joon

    2015-01-01

    The growing interest and use of indoor mapping is driving a demand for improved data-acquisition facility, efficiency and productivity in the era of the Building Information Model (BIM). The conventional static laser scanning method suffers from some limitations on its operability in complex indoor environments, due to the presence of occlusions. Full scanning of indoor spaces without loss of information requires that surveyors change the scanner position many times, which incurs extra work for registration of each scanned point cloud. Alternatively, a kinematic 3D laser scanning system, proposed herein, uses a line-feature-based Simultaneous Localization and Mapping (SLAM) technique for continuous mapping. Moreover, to reduce the uncertainty of line-feature extraction, we incorporated constrained adjustment based on an assumption made with respect to typical indoor environments: that the main structures are formed of parallel or orthogonal line features. The advantage of the proposed constrained adjustment is its reduction of the uncertainties of the adjusted lines, leading to a successful data association process. In the present study, kinematic scanning with and without constrained adjustment was comparatively evaluated at two test sites, and the results confirmed the effectiveness of the proposed system. The accuracy of the 3D mapping result was additionally evaluated by comparison with the reference points acquired by a total station: the Euclidean average distance error was 0.034 m for the seminar room and 0.043 m for the corridor, which satisfied the error tolerance for point cloud acquisition (0.051 m) according to the guidelines of the General Services Administration for BIM accuracy. PMID:26501292

  14. 48 CFR 2015.300 - Scope of subpart.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Scope of subpart. 2015.300 Section 2015.300 Federal Acquisition Regulations System NUCLEAR REGULATORY COMMISSION CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Source Selection Processes and Techniques 2015.300 Scope...

  15. Density-based parallel skin lesion border detection with webCL

    PubMed Central

    2015-01-01

    Background: Dermoscopy is a highly effective and noninvasive imaging technique used in the diagnosis of melanoma and other pigmented skin lesions. Many aspects of the lesion under consideration are defined in relation to the lesion border. This makes border detection one of the most important steps in dermoscopic image analysis. In current practice, dermatologists often delineate borders through a hand-drawn representation based upon visual inspection. Due to the subjective nature of this technique, intra- and inter-observer variations are common. Because of this, the automated assessment of lesion borders in dermoscopic images has become an important area of study. Methods: A fast density-based skin lesion border detection method has been implemented in parallel with a new parallel technology called WebCL. WebCL utilizes client-side computing capabilities to use available hardware resources such as multiple cores and GPUs. The developed WebCL-parallel density-based skin lesion border detection method runs efficiently from internet browsers. Results: Previous research indicates that one of the highest accuracy rates can be achieved using density-based clustering techniques for skin lesion border detection. While these algorithms do have unfavorable time complexities, this effect can be mitigated when they are implemented in parallel. In this study, the density-based clustering technique for skin lesion border detection is parallelized and redesigned to run very efficiently on heterogeneous platforms (e.g. tablets, smartphones, multi-core CPUs, GPUs, and fully integrated Accelerated Processing Units) by transforming the technique into a series of independent concurrent operations. Heterogeneous computing is adopted to support accessibility, portability and multi-device use in clinical settings. For this, we used WebCL, an emerging technology that enables an HTML5 Web browser to execute code in parallel on heterogeneous platforms. We describe WebCL and our parallel algorithm design. In addition, we tested the parallel code on 100 dermoscopy images and report the execution speedups with respect to the serial version. Results indicate that the parallel (WebCL) and serial versions of the density-based lesion border detection method generate the same accuracy rates for the 100 dermoscopy images, with a mean border error of 6.94%, mean recall of 76.66%, and mean precision of 99.29%. Moreover, the WebCL version's speedup factor for lesion border detection on the 100 dermoscopy images averages approximately 491.2. Conclusions: Given the large number of high-resolution dermoscopy images handled in a typical clinical setting, together with the critical importance of early detection and diagnosis of melanoma before metastasis, the importance of fast processing of dermoscopy images becomes obvious. In this paper, we introduce WebCL and its use for biomedical image processing applications. WebCL is a JavaScript binding of OpenCL that takes advantage of GPU computing from a web browser. Therefore, the WebCL parallel version of density-based skin lesion border detection introduced in this study can supplement expert dermatologists and aid them in the early diagnosis of skin lesions. While WebCL is currently an emerging technology, its full adoption into the HTML5 standard would allow this implementation to run on a very large set of hardware and software systems. WebCL takes full advantage of parallel computational resources, including multi-core CPUs and GPUs on a local machine, and allows compiled code to run directly from the Web browser. PMID:26423836

  16. Density-based parallel skin lesion border detection with webCL.

    PubMed

    Lemon, James; Kockara, Sinan; Halic, Tansel; Mete, Mutlu

    2015-01-01

    Dermoscopy is a highly effective and noninvasive imaging technique used in the diagnosis of melanoma and other pigmented skin lesions. Many aspects of the lesion under consideration are defined in relation to the lesion border. This makes border detection one of the most important steps in dermoscopic image analysis. In current practice, dermatologists often delineate borders through a hand-drawn representation based upon visual inspection. Due to the subjective nature of this technique, intra- and inter-observer variations are common. Because of this, the automated assessment of lesion borders in dermoscopic images has become an important area of study. A fast density-based skin lesion border detection method has been implemented in parallel with a new parallel technology called WebCL. WebCL utilizes client-side computing capabilities to use available hardware resources such as multiple cores and GPUs. The developed WebCL-parallel density-based skin lesion border detection method runs efficiently from internet browsers. Previous research indicates that one of the highest accuracy rates can be achieved using density-based clustering techniques for skin lesion border detection. While these algorithms do have unfavorable time complexities, this effect can be mitigated when they are implemented in parallel. In this study, the density-based clustering technique for skin lesion border detection is parallelized and redesigned to run very efficiently on heterogeneous platforms (e.g. tablets, smartphones, multi-core CPUs, GPUs, and fully integrated Accelerated Processing Units) by transforming the technique into a series of independent concurrent operations. Heterogeneous computing is adopted to support accessibility, portability and multi-device use in clinical settings. For this, we used WebCL, an emerging technology that enables an HTML5 Web browser to execute code in parallel on heterogeneous platforms. We describe WebCL and our parallel algorithm design. In addition, we tested the parallel code on 100 dermoscopy images and report the execution speedups with respect to the serial version. Results indicate that the parallel (WebCL) and serial versions of the density-based lesion border detection method generate the same accuracy rates for the 100 dermoscopy images, with a mean border error of 6.94%, mean recall of 76.66%, and mean precision of 99.29%. Moreover, the WebCL version's speedup factor for lesion border detection on the 100 dermoscopy images averages approximately 491.2. Given the large number of high-resolution dermoscopy images handled in a typical clinical setting, together with the critical importance of early detection and diagnosis of melanoma before metastasis, the importance of fast processing of dermoscopy images becomes obvious. In this paper, we introduce WebCL and its use for biomedical image processing applications. WebCL is a JavaScript binding of OpenCL that takes advantage of GPU computing from a web browser. Therefore, the WebCL parallel version of density-based skin lesion border detection introduced in this study can supplement expert dermatologists and aid them in the early diagnosis of skin lesions. While WebCL is currently an emerging technology, its full adoption into the HTML5 standard would allow this implementation to run on a very large set of hardware and software systems. WebCL takes full advantage of parallel computational resources, including multi-core CPUs and GPUs on a local machine, and allows compiled code to run directly from the Web browser.
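
    The per-pixel density test at the core of such a method is embarrassingly parallel, which is what makes the WebCL mapping attractive. The sketch below shows a density-based core-point labelling step in plain Python/NumPy rather than WebCL; the eps and min_pts thresholds and the synthetic point set are illustrative assumptions, not the published algorithm or its parameters.

      import numpy as np

      def density_core_points(points, eps=2.0, min_pts=8):
          """Label each point as a density 'core' point if at least min_pts neighbours
          lie within radius eps; the per-point test is independent, so it maps
          naturally onto one GPU work-item per point in a WebCL/OpenCL kernel."""
          d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
          neighbour_counts = (d2 <= eps ** 2).sum(axis=1) - 1  # exclude the point itself
          return neighbour_counts >= min_pts

      # toy "lesion": a dense disc of pixel coordinates plus sparse background noise
      rng = np.random.default_rng(0)
      disc = rng.normal(0, 3, size=(400, 2))
      noise = rng.uniform(-20, 20, size=(100, 2))
      pixels = np.vstack([disc, noise])
      core = density_core_points(pixels)
      print(core[:400].mean(), core[400:].mean())  # most disc points are core, few noise points are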

  17. Parallelization and implementation of approximate root isolation for nonlinear system by Monte Carlo

    NASA Astrophysics Data System (ADS)

    Khosravi, Ebrahim

    1998-12-01

    This dissertation solves a fundamental problem of isolating the real roots of nonlinear systems of equations by Monte Carlo, as published by Bush Jones. This algorithm requires only function values and can be applied readily to complicated systems of transcendental functions. The implementation of this sequential algorithm provides scientists with the means to utilize function analysis in mathematics or other fields of science. The algorithm, however, is so computationally intensive that the system is limited to a very small set of variables, making it unfeasible for large systems of equations. Also, a computational technique was needed for investigating a methodology for preventing the algorithm structure from converging to the same root along different paths of computation. The research provides techniques for improving the efficiency and correctness of the algorithm. The sequential algorithm for this technique was corrected and a parallel algorithm is presented. This parallel method has been formally analyzed and is compared with other known methods of root isolation. The effectiveness, efficiency, and enhanced overall performance of the parallel processing of the program in comparison to sequential processing are discussed. The message passing model was used for this parallel processing, and it is presented and implemented on an Intel/860 MIMD architecture. The parallel processing proposed in this research has been implemented in an ongoing high energy physics experiment: this algorithm has been used to track neutrinos in a Super-K detector. This experiment is located in Japan, and data can be processed on-line or off-line, locally or remotely.
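
    The published algorithm is not reproduced here; the toy sketch below only illustrates the general flavour of Monte Carlo root isolation for a small transcendental system: random starting points are drawn in a search box, polished with a finite-difference Newton iteration (so only function values are needed), and the distinct converged roots are kept. The example system, tolerances, and sample count are assumptions.

      import numpy as np

      def F(x):
          # example system: intersection of a circle and a transcendental curve
          return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,
                           np.sin(x[0]) - x[1]])

      def jacobian(f, x, h=1e-6):
          """Finite-difference Jacobian so that only function values are required."""
          n = len(x)
          J = np.zeros((n, n))
          fx = f(x)
          for j in range(n):
              xp = x.copy()
              xp[j] += h
              J[:, j] = (f(xp) - fx) / h
          return J

      def monte_carlo_roots(f, lo, hi, samples=2000, newton_steps=30, tol=1e-8, seed=0):
          """Toy Monte Carlo root isolation: random starts in the box, Newton polish,
          keep the distinct converged roots that stay inside the box."""
          rng = np.random.default_rng(seed)
          roots = []
          for x in rng.uniform(lo, hi, size=(samples, len(lo))):
              for _ in range(newton_steps):
                  try:
                      step = np.linalg.solve(jacobian(f, x), f(x))
                  except np.linalg.LinAlgError:
                      break
                  x = x - step
                  if np.linalg.norm(step) < tol:
                      break
              if np.linalg.norm(f(x)) < 1e-6 and np.all(x >= lo) and np.all(x <= hi):
                  if all(np.linalg.norm(x - r) > 1e-4 for r in roots):
                      roots.append(x)
          return roots

      print(monte_carlo_roots(F, np.array([-3.0, -3.0]), np.array([3.0, 3.0])))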

  18. Joint carrier phase and frequency-offset estimation with parallel implementation for dual-polarization coherent receiver.

    PubMed

    Lu, Jianing; Li, Xiang; Fu, Songnian; Luo, Ming; Xiang, Meng; Zhou, Huibin; Tang, Ming; Liu, Deming

    2017-03-06

    We present a dual-polarization complex-weighted decision-aided maximum-likelihood algorithm with superscalar parallelization (SSP-DP-CW-DA-ML) for joint carrier phase and frequency-offset estimation (FOE) in coherent optical receivers. By pre-compensation of the phase offset between signals in the two polarizations, the performance can be substantially improved. Meanwhile, with the help of a modified SSP-based parallel implementation, the acquisition time of the FO and the required number of training symbols are reduced by transferring the complex weights of the filters between adjacent buffers, and differential coding/decoding is not required. Simulation results show that the laser linewidth tolerance of our proposed algorithm is comparable to that of traditional blind phase search (BPS), while a complete FOE range of ± symbol rate/2 can be achieved. Finally, the performance of our proposed algorithm is experimentally verified under the scenario of back-to-back (B2B) transmission using 10 Gbaud DP-16/32-QAM formats.
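
    A single-polarization, non-parallel toy version of decision-aided maximum-likelihood carrier-phase estimation for QPSK is sketched below; the block length, noise level, and static phase offset are assumptions, and the complex weighting, superscalar parallelization, and frequency-offset tracking of the proposed SSP-DP-CW-DA-ML algorithm are not reproduced.

      import numpy as np

      def qpsk_decide(sym):
          """Hard decision to the nearest QPSK constellation point."""
          return (np.sign(sym.real) + 1j * np.sign(sym.imag)) / np.sqrt(2)

      def da_ml_phase(received, block=64):
          """Decision-aided ML phase estimate per block: the angle of the sum of
          r_k * conj(decision(r_k)) estimates the common carrier phase of the block."""
          est = np.zeros(len(received))
          phase = 0.0
          for start in range(0, len(received), block):
              blk = received[start:start + block] * np.exp(-1j * phase)  # use previous estimate
              v = np.sum(blk * np.conj(qpsk_decide(blk)))
              phase += np.angle(v)
              est[start:start + block] = phase
          return est

      rng = np.random.default_rng(0)
      bits = rng.integers(0, 4, 4000)
      tx = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))          # QPSK symbols
      true_phase = 0.3                                          # static carrier phase offset
      rx = tx * np.exp(1j * true_phase) + 0.05 * (rng.normal(size=4000) + 1j * rng.normal(size=4000))
      print(np.angle(np.exp(1j * (da_ml_phase(rx)[-1] - true_phase))))  # residual error, close to 0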

  19. What is adaptive about adaptive decision making? A parallel constraint satisfaction account.

    PubMed

    Glöckner, Andreas; Hilbig, Benjamin E; Jekel, Marc

    2014-12-01

    There is broad consensus that human cognition is adaptive. However, the vital question of how exactly this adaptivity is achieved has remained largely open. Herein, we contrast two frameworks which account for adaptive decision making, namely broad and general single-mechanism accounts vs. multi-strategy accounts. We propose and fully specify a single-mechanism model for decision making based on parallel constraint satisfaction processes (PCS-DM) and contrast it theoretically and empirically against a multi-strategy account. To achieve sufficiently sensitive tests, we rely on a multiple-measure methodology including choice, reaction time, and confidence data as well as eye-tracking. Results show that manipulating the environmental structure produces clear adaptive shifts in choice patterns - as both frameworks would predict. However, results on the process level (reaction time, confidence), in information acquisition (eye-tracking), and from cross-predicting choice consistently corroborate single-mechanisms accounts in general, and the proposed parallel constraint satisfaction model for decision making in particular. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Robot-assisted ultrasound imaging: overview and development of a parallel telerobotic system.

    PubMed

    Monfaredi, Reza; Wilson, Emmanuel; Azizi Koutenaei, Bamshad; Labrecque, Brendan; Leroy, Kristen; Goldie, James; Louis, Eric; Swerdlow, Daniel; Cleary, Kevin

    2015-02-01

    Ultrasound imaging is frequently used in medicine. The quality of ultrasound images is often dependent on the skill of the sonographer. Several researchers have proposed robotic systems to aid in ultrasound image acquisition. In this paper we first provide a short overview of robot-assisted ultrasound imaging (US). We categorize robot-assisted US imaging systems into three approaches: autonomous US imaging, teleoperated US imaging, and human-robot cooperation. For each approach several systems are introduced and briefly discussed. We then describe a compact six degree of freedom parallel mechanism telerobotic system for ultrasound imaging developed by our research team. The long-term goal of this work is to enable remote ultrasound scanning through teleoperation. This parallel mechanism allows for both translation and rotation of an ultrasound probe mounted on the top plate along with force control. Our experimental results confirmed good mechanical system performance with a positioning error of < 1 mm. Phantom experiments by a radiologist showed promising results with good image quality.

  1. Partitioning and packing mathematical simulation models for calculation on parallel computers

    NASA Technical Reports Server (NTRS)

    Arpasi, D. J.; Milner, E. J.

    1986-01-01

    The development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system is described. Degrees of parallelism (i.e., coupling between the equations) and their impact on parallel processing are discussed. The problem of identifying computational parallelism within sets of closely coupled equations that require the exchange of current values of variables is described. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. An algorithm which packs the equations into a minimum number of processors is also described. The results of the packing algorithm when applied to a turbojet engine model are presented in terms of processor utilization.
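
    The packing step can be illustrated with a toy greedy heuristic: sort equations by estimated evaluation cost and assign each to the least-loaded processor that still fits, opening a new processor when none does. The per-equation costs and load limit below are assumptions, and the coupling/data-exchange constraints handled by the actual partitioning algorithm are ignored.

      import heapq

      def pack_equations(costs, max_load):
          """Greedily pack equation evaluation costs onto a small number of
          processors so that no processor exceeds max_load."""
          processors = []          # min-heap of (current_load, processor_index, equations)
          for eq, cost in sorted(enumerate(costs), key=lambda c: -c[1]):
              if processors and processors[0][0] + cost <= max_load:
                  load, idx, items = heapq.heappop(processors)   # least-loaded processor
                  heapq.heappush(processors, (load + cost, idx, items + [eq]))
              else:
                  heapq.heappush(processors, (cost, len(processors), [eq]))
          return sorted(processors, key=lambda p: p[1])

      # assumed per-equation evaluation times (e.g. derivative evaluations per frame)
      costs = [5.0, 3.0, 3.0, 2.0, 2.0, 1.0, 1.0, 0.5]
      for load, idx, eqs in pack_equations(costs, max_load=6.0):
          print(f"processor {idx}: equations {eqs}, load {load}")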

  2. 100 Gbps Wireless System and Circuit Design Using Parallel Spread-Spectrum Sequencing

    NASA Astrophysics Data System (ADS)

    Scheytt, J. Christoph; Javed, Abdul Rehman; Bammidi, Eswara Rao; KrishneGowda, Karthik; Kallfass, Ingmar; Kraemer, Rolf

    2017-09-01

    In this article, mixed analog/digital signal processing techniques based on parallel spread-spectrum sequencing (PSSS) and radio frequency (RF) carrier synchronization for ultra-broadband wireless communication are investigated at the system and circuit levels.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saha, K; Barbarits, J; Humenik, R

    Purpose: Chang’s mathematical formulation is a common method of attenuation correction applied on reconstructed Jaszczak phantom images. Though Chang’s attenuation correction method has been used for 360° angle acquisition, its applicability for 180° angle acquisition remains a question with one vendor’s camera software producing artifacts. The objective of this work is to ensure that Chang’s attenuation correction technique can be applied for reconstructed Jaszczak phantom images acquired in both 360° and 180° mode. Methods: The Jaszczak phantom filled with 20 mCi of diluted Tc-99m was placed on the patient table of Siemens e.cam™ (n = 2) and Siemens Symbia™ (n = 1) dual head gamma cameras centered both in lateral and axial directions. A total of 3 scans were done at 180° and 2 scans at 360° orbit acquisition modes. Thirty two million counts were acquired for both modes. Reconstruction of the projection data was performed using filtered back projection smoothed with pre reconstruction Butterworth filter (order: 6, cutoff: 0.55). Reconstructed transaxial slices were attenuation corrected by Chang’s attenuation correction technique as implemented in the camera software. Corrections were also done using a modified technique where photon path lengths for all possible attenuation paths through a pixel in the image space were added to estimate the corresponding attenuation factor. The inverse of the attenuation factor was utilized to correct the attenuated pixel counts. Results: Comparable uniformity and noise were observed for 360° acquired phantom images attenuation corrected by the vendor technique (28.3% and 7.9%) and the proposed technique (26.8% and 8.4%). The difference in uniformity for 180° acquisition between the proposed technique (22.6% and 6.8%) and the vendor technique (57.6% and 30.1%) was more substantial. Conclusion: Assessment of attenuation correction performance by phantom uniformity analysis illustrated improved uniformity with the proposed algorithm compared to the camera software.
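
    The multiplicative correction described above can be sketched for the simple case of a uniform circular attenuator: for every pixel, the attenuation factor is the average over projection angles of exp(-mu * path length to the boundary), and the reconstructed counts are multiplied by its reciprocal. The attenuation coefficient, phantom radius, grid size, and angular sampling below are illustrative assumptions, not the vendor or proposed implementations.

      import numpy as np

      def chang_correction_map(n=128, radius_cm=10.0, mu=0.15, pixel_cm=0.3, n_angles=180):
          """First-order Chang correction factors for a uniform circular attenuator:
          for every pixel, average exp(-mu * path length to the boundary) over the
          chosen set of projection angles, then take the reciprocal."""
          ys, xs = np.mgrid[0:n, 0:n]
          x = (xs - n / 2 + 0.5) * pixel_cm
          y = (ys - n / 2 + 0.5) * pixel_cm
          inside = x ** 2 + y ** 2 <= radius_cm ** 2
          angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
          atten = np.zeros((n, n))
          for theta in angles:
              cx, cy = np.cos(theta), np.sin(theta)
              b = x * cx + y * cy
              # distance from the pixel to the circular boundary along direction theta
              L = -b + np.sqrt(np.maximum(b ** 2 - (x ** 2 + y ** 2 - radius_cm ** 2), 0.0))
              atten += np.exp(-mu * L)
          atten /= n_angles
          correction = np.ones((n, n))
          correction[inside] = 1.0 / atten[inside]
          return correction

      corr = chang_correction_map()
      print(corr.max(), corr[64, 64])  # the largest boost is at the centre of the phantom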

  4. Designing for Peta-Scale in the LSST Database

    NASA Astrophysics Data System (ADS)

    Kantor, J.; Axelrod, T.; Becla, J.; Cook, K.; Nikolaev, S.; Gray, J.; Plante, R.; Nieto-Santisteban, M.; Szalay, A.; Thakar, A.

    2007-10-01

    The Large Synoptic Survey Telescope (LSST), a proposed ground-based 8.4 m telescope with a 10 deg^2 field of view, will generate 15 TB of raw images every observing night. When calibration and processed data are added, the image archive, catalogs, and meta-data will grow 15 PB yr^{-1} on average. The LSST Data Management System (DMS) must capture, process, store, index, replicate, and provide open access to this data. Alerts must be triggered within 30 s of data acquisition. To do this in real-time at these data volumes will require advances in data management, database, and file system techniques. This paper describes the design of the LSST DMS and emphasizes features for peta-scale data. The LSST DMS will employ a combination of distributed database and file systems, with schema, partitioning, and indexing oriented for parallel operations. Image files are stored in a distributed file system with references to, and meta-data from, each file stored in the databases. The schema design supports pipeline processing, rapid ingest, and efficient query. Vertical partitioning reduces disk input/output requirements, horizontal partitioning allows parallel data access using arrays of servers and disks. Indexing is extensive, utilizing both conventional RAM-resident indexes and column-narrow, row-deep tag tables/covering indices that are extracted from tables that contain many more attributes. The DMS Data Access Framework is encapsulated in a middleware framework to provide a uniform service interface to all framework capabilities. This framework will provide the automated work-flow, replication, and data analysis capabilities necessary to make data processing and data quality analysis feasible at this scale.

  5. 10-channel fiber array fabrication technique for parallel optical coherence tomography system

    NASA Astrophysics Data System (ADS)

    Arauz, Lina J.; Luo, Yuan; Castillo, Jose E.; Kostuk, Raymond K.; Barton, Jennifer

    2007-02-01

    Optical Coherence Tomography (OCT) shows great promise for minimally intrusive biomedical imaging applications. A parallel OCT system is a novel technique that replaces mechanical transverse scanning with electronic scanning, which reduces the time required to acquire image data. In this system an array of small-diameter fibers is required to obtain an image in the transverse direction. Each fiber in the array is configured in an interferometer and is used to image one pixel in the transverse direction. In this paper we describe a technique to package 15 μm diameter fibers on a silicon-silica substrate to be used in a 2 mm endoscopic probe tip. Single-mode fibers are etched to reduce the cladding diameter from 125 μm to 15 μm. Etched fibers are placed into a 4 mm by 150 μm trench in a silicon-silica substrate and secured with UV glue. Active alignment was used to simplify the layout of the fibers and minimize unwanted horizontal displacement of the fibers. A 10-channel fiber array was built, tested and later incorporated into a parallel optical coherence system. This paper describes the packaging, testing, and operation of the array in a parallel OCT system.

  6. Ratiometric Raman Spectroscopy for Quantification of Protein Oxidative Damage

    PubMed Central

    Jiang, Dongping; Yanney, Michael; Zou, Sige; Sygula, Andrzej

    2009-01-01

    A novel ratiometric Raman spectroscopic (RMRS) method has been developed for quantitative determination of protein carbonyl levels. Oxidized bovine serum albumin (BSA) and oxidized lysozyme were used as model proteins to demonstrate this method. The technique involves conjugation of protein carbonyls with dinitrophenyl hydrazine (DNPH), followed by drop coating deposition Raman spectral acquisition (DCDR). The RMRS method is easy to implement as it requires only one conjugation reaction, a single spectral acquisition, and does not require sample calibration. Characteristic peaks from both protein and DNPH moieties are obtained in a single spectral acquisition, allowing the protein carbonyl level to be calculated from the peak intensity ratio. Detection sensitivity for the RMRS method is ~0.33 pmol carbonyl/measurement. Fluorescence and/or immunoassay based techniques only detect a signal from the labeling molecule and thus yield no structural or quantitative information for the modified protein while the RMRS technique provides for protein identification and protein carbonyl quantification in a single experiment. PMID:19457432

  7. Continued Data Acquisition Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwellenbach, David

    This task focused on improving techniques for integrating data acquisition of secondary particles correlated in time with detected cosmic-ray muons. Scintillation detectors with Pulse Shape Discrimination (PSD) capability show the most promise as a detector technology based on work in FY13. Typically PSD parameters are determined prior to an experiment and the results are based on these parameters. By saving data in list mode, including the fully digitized waveform, any experiment can effectively be replayed to adjust PSD and other parameters for the best data capture. List mode requires time synchronization of two independent data acquisition (DAQ) systems: the muon tracker and the particle detector system. Techniques to synchronize these systems were studied. Two basic techniques were identified: real time mode and sequential mode. Real time mode is the preferred system but has proven to be a significant challenge since two FPGA systems with different clocking parameters must be synchronized. Sequential processing is expected to work with virtually any DAQ but requires more post processing to extract the data.

  8. Self-gated fetal cardiac MRI with tiny golden angle iGRASP: A feasibility study.

    PubMed

    Haris, Kostas; Hedström, Erik; Bidhult, Sebastian; Testud, Frederik; Maglaveras, Nicos; Heiberg, Einar; Hansson, Stefan R; Arheden, Håkan; Aletras, Anthony H

    2017-07-01

    To develop and assess a technique for self-gated fetal cardiac cine magnetic resonance imaging (MRI) using tiny golden angle radial sampling combined with iGRASP (iterative Golden-angle RAdial Sparse Parallel) for accelerated acquisition based on parallel imaging and compressed sensing. Fetal cardiac data were acquired from five volunteers in gestational weeks 29-37 at 1.5T using tiny golden angles for eddy current reduction. The acquired multicoil radial projections were input to a principal component analysis-based compression stage. The cardiac self-gating (CSG) signal for cardiac gating was extracted from the acquired radial projections and the iGRASP reconstruction procedure was applied. In all acquisitions, a total of 4000 radial spokes were acquired within a breath-hold of less than 15 seconds using a balanced steady-state free precession pulse sequence. The images were qualitatively compared by two independent observers (on a scale of 1-4) to a single midventricular cine image from metric optimized gating (MOG) and real-time acquisitions. For iGRASP and MOG images, good overall image quality (2.8 ± 0.4 and 2.6 ± 1.3, respectively, for observer 1; 3.6 ± 0.5 and 3.4 ± 0.9, respectively, for observer 2) and cardiac diagnostic quality (3.8 ± 0.4 and 3.4 ± 0.9, respectively, for observer 1; 3.6 ± 0.5 and 3.6 ± 0.9, respectively, for observer 2) were obtained, with visualized myocardial thickening over the cardiac cycle and well-defined myocardial borders to ventricular lumen and liver/lung tissue. For iGRASP, MOG, and real time, left ventricular lumen diameter (14.1 ± 2.2 mm, 14.2 ± 1.9 mm, 14.7 ± 1.1 mm, respectively) and wall thickness (2.7 ± 0.3 mm, 2.6 ± 0.3 mm, 3.0 ± 0.4 mm, respectively) showed agreement and no statistically significant difference was found (all P > 0.05). Images with iGRASP tended to have higher overall image quality scores compared with MOG and particularly real-time images, albeit not statistically significant in this feasibility study (P > 0.99 and P = 0.12, respectively). Fetal cardiac cine MRI can be performed with iGRASP using tiny golden angles and CSG. Comparison with other fetal cardiac cine MRI methods showed that the proposed method produces high-quality fetal cardiac reconstructions. Level of Evidence: 2. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017;46:207-217. © 2017 International Society for Magnetic Resonance in Medicine.

  9. Technical aspects of cardiac PET/MRI.

    PubMed

    Masuda, Atsuro; Nemoto, Ayaka; Takeishi, Yasuchika

    2018-06-01

    PET/MRI is a novel modality that enables PET and MR images to be combined, and it has significant potential for evaluating various cardiac diseases through the combination of PET molecular imaging and MRI functional imaging. Precise management of technical issues, however, is necessary for cardiac PET/MRI. This article describes several technical points, including patient preparation, MR attenuation correction, parallel acquisition of PET with MRI, clinical aspects, and image quality control.

  10. Prediction of pork quality parameters by applying fractals and data mining on MRI.

    PubMed

    Caballero, Daniel; Pérez-Palacios, Trinidad; Caro, Andrés; Amigo, José Manuel; Dahl, Anders B; ErsbØll, Bjarne K; Antequera, Teresa

    2017-09-01

    This work is the first to investigate the use of MRI, fractal algorithms and data mining techniques to determine pork quality parameters non-destructively. The main objective was to evaluate the capability of fractal algorithms (Classical Fractal Algorithm, CFA; Fractal Texture Algorithm, FTA; and One Point Fractal Texture Algorithm, OPFTA) to analyse MRI in order to predict quality parameters of loin. In addition, the effects of the MRI acquisition sequence (gradient echo, GE; spin echo, SE; and Turbo 3D, T3D) and the predictive data mining technique (isotonic regression, IR, and multiple linear regression, MLR) were analysed. Both fractal algorithms FTA and OPFTA are appropriate for analysing MRI of loins. The acquisition sequence, the fractal algorithm and the data mining technique seem to influence the prediction results. For most physico-chemical parameters, prediction equations with moderate to excellent correlation coefficients were achieved by using the following combinations of MRI acquisition sequences, fractal algorithms and data mining techniques: SE-FTA-MLR, SE-OPFTA-IR, GE-OPFTA-MLR and SE-OPFTA-MLR, with the last one offering the best prediction results. Thus, SE-OPFTA-MLR could be proposed as an alternative technique to determine physico-chemical traits of fresh and dry-cured loins in a non-destructive way with high accuracy. Copyright © 2017. Published by Elsevier Ltd.
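
    None of the CFA, FTA or OPFTA algorithms is reproduced here; the sketch below only shows the classical box-counting estimate of fractal dimension that such texture features build on, applied to an arbitrary thresholded image standing in for an MRI slice. The box sizes and threshold are assumptions.

      import numpy as np

      def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
          """Classical box-counting estimate of fractal dimension for a binary image:
          count occupied boxes at several box sizes and fit log(count) vs log(1/size)."""
          counts = []
          for s in sizes:
              trimmed = mask[: mask.shape[0] - mask.shape[0] % s,
                             : mask.shape[1] - mask.shape[1] % s]
              blocks = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
              counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
          slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
          return slope

      # toy binary "texture": a thresholded random field standing in for an MRI slice
      rng = np.random.default_rng(0)
      image = rng.random((256, 256))
      print(box_counting_dimension(image > 0.6))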

  11. COMPUTER-AIDED DATA ACQUISITION FOR COMBUSTION EXPERIMENTS

    EPA Science Inventory

    The article describes the use of computer-aided data acquisition techniques to aid the research program of the Combustion Research Branch (CRB) of the U.S. EPA's Air and Energy Engineering Research Laboratory (AEERL) in Research Triangle Park, NC, in particular on CRB's bench-sca...

  12. Geo-referenced digital data acquisition and processing system using LiDAR technology : executive summary report.

    DOT National Transportation Integrated Search

    2006-02-01

    Problem: State-of-the-art airborne mapping is in major : transition, which affects both the data acquisition and : data processing technologies. The IT age has brought : powerful sensors and revolutionary new techniques to : acquire spatial data in l...

  13. 48 CFR 39.102 - Management of risk.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Management of risk. 39.102... CONTRACTING ACQUISITION OF INFORMATION TECHNOLOGY General 39.102 Management of risk. (a) Prior to entering... monitored, funding availability, and program management risk. (c) Appropriate techniques should be applied...

  14. Performance improvement of a binary quantized all-digital phase-locked loop with a new aided-acquisition technique

    NASA Astrophysics Data System (ADS)

    Sandoz, J.-P.; Steenaart, W.

    1984-12-01

    The nonuniform sampling digital phase-locked loop (DPLL) with sequential loop filter, in which the correction sizes are controlled by the accumulated differences of two additional phase comparators, is graphically analyzed. In the absence of noise and frequency drift, the analysis gives some physical insight into the acquisition and tracking behavior. Taking noise into account, a mathematical model is derived and a random walk technique is applied to evaluate the rms phase error and the mean acquisition time. Experimental results confirm the appropriate simplifying hypotheses used in the numerical analysis. Two related performance measures defined in terms of the rms phase error and the acquisition time for a given SNR are used. These measures provide a common basis for comparing different digital loops and, to a limited extent, also with a first-order linear loop. Finally, the behavior of a modified DPLL under frequency deviation in the presence of Gaussian noise is tested experimentally and by computer simulation.
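
    A much simpler first-order, binary-quantized (bang-bang) DPLL can be simulated in a few lines to show the basic sign-only correction mechanism; the sequential loop filter and the aided-acquisition scheme analysed in the paper are not modelled, and the step size, frequency offset, and phase-noise level are assumptions.

      import numpy as np

      def bang_bang_dpll(signal_phase, step=0.01):
          """First-order binary-quantized DPLL: the loop only sees the sign of the
          phase error and nudges its estimate by a fixed step each sample."""
          est = np.zeros_like(signal_phase)
          phi = 0.0
          for k, target in enumerate(signal_phase):
              err = np.angle(np.exp(1j * (target - phi)))  # wrapped phase error
              phi += step * np.sign(err)                   # binary correction
              est[k] = phi
          return est

      # input: a carrier with a small frequency offset plus accumulated phase noise
      n = 5000
      rng = np.random.default_rng(0)
      true = 0.002 * np.arange(n) + 0.05 * np.cumsum(rng.normal(0, 0.01, n))
      locked = bang_bang_dpll(true)
      print(np.abs(np.angle(np.exp(1j * (locked - true))))[-500:].mean())  # small residual error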

  15. Satellite-Derived Bathymetry: Accuracy Assessment on Depths Derivation Algorithm for Shallow Water Area

    NASA Astrophysics Data System (ADS)

    Said, N. M.; Mahmud, M. R.; Hasan, R. C.

    2017-10-01

    Over the years, the acquisition of bathymetric data has evolved from shipborne platforms to airborne and, presently, space-borne acquisition. The extensive development of remote sensing technology has brought a new revolution to hydrographic surveying. Satellite-Derived Bathymetry (SDB), a space-borne acquisition technique that derives bathymetric data from high-resolution multispectral satellite imagery, has recently been considered a promising new technology in the hydrographic surveying industry. Inspired by these developments, a comprehensive study was initiated by the National Hydrographic Centre (NHC) and Universiti Teknologi Malaysia (UTM) to analyse SDB as a means of shallow water data acquisition. By adopting an additional adjustment in the calibration stage, a marginal improvement was observed in the outcomes of both the Stumpf and Lyzenga algorithms, with RMSE values for the derived (predicted) depths of 1.432 meters and 1.728 meters, respectively. This paper deliberates in detail on the findings of the study, especially the accuracy level and practicality of SDB in the tropical environmental setting of Malaysia.
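
    A minimal sketch of the Stumpf log-ratio model, calibrated by ordinary least squares against known depths, is shown below; the band reflectances are synthetic, and the scaling constant n and attenuation coefficients are illustrative assumptions, not the values used in the study.

      import numpy as np

      def stumpf_depth(blue, green, m1, m0, n=1000.0):
          """Stumpf band-ratio model: depth ~ m1 * ln(n*R_blue)/ln(n*R_green) - m0."""
          return m1 * (np.log(n * blue) / np.log(n * green)) - m0

      def calibrate(blue, green, known_depths, n=1000.0):
          """Fit m1 and m0 by ordinary least squares against sounding depths."""
          ratio = np.log(n * blue) / np.log(n * green)
          A = np.column_stack([ratio, -np.ones_like(ratio)])
          (m1, m0), *_ = np.linalg.lstsq(A, known_depths, rcond=None)
          return m1, m0

      # synthetic calibration set: band reflectances for depths of 1 to 15 m
      rng = np.random.default_rng(0)
      depths = np.linspace(1, 15, 40)
      green = 0.05 * np.exp(-0.08 * depths) + rng.normal(0, 5e-4, depths.size)
      blue = 0.05 * np.exp(-0.05 * depths) + rng.normal(0, 5e-4, depths.size)
      m1, m0 = calibrate(blue, green, depths)
      rmse = np.sqrt(np.mean((stumpf_depth(blue, green, m1, m0) - depths) ** 2))
      print(m1, m0, rmse)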

  16. Neural convergence for language comprehension and grammatical class production in highly proficient bilinguals is independent of age of acquisition.

    PubMed

    Consonni, Monica; Cafiero, Riccardo; Marin, Dario; Tettamanti, Marco; Iadanza, Antonella; Fabbro, Franco; Perani, Daniela

    2013-05-01

    In bilinguals, native (L1) and second (L2) languages are processed by the same neural resources that can be modulated by age of second language acquisition (AOA), proficiency level, and daily language exposure and usage. AOA seems to particularly affect grammar processing, where a complete neural convergence has been shown only in bilinguals with parallel language acquisition from birth. Despite the fact that proficiency-related neuroanatomical differences have been well documented in language comprehension (LC) and production, few reports have addressed the influence of language exposure. A still unanswered question pertains to the role of AOA, when proficiency is comparably high across languages, with respect to its modulator effects both on LC and production. Here, we evaluated with fMRI during sentence comprehension and verb and noun production tasks, two groups of highly proficient bilinguals only differing in AOA. One group learned Italian and Friulian in parallel from birth, whereas the second group learned Italian between 3 and 6 years. All participants were highly exposed to both languages, but more to Italian than Friulian. The results indicate a complete overlap of neural activations for the comprehension of both languages, not only in bilinguals from birth, but also in late bilinguals. A slightly extra activation in the left thalamus for the less-exposed language confirms that exposure may affect language processing. Noteworthy, we report for the first time that, when proficiency and exposure are kept high, noun and verb production recruit the same neural networks for L1 and L2, independently of AOA. These results support the neural convergence hypothesis. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Fisher information and Cramér-Rao lower bound for experimental design in parallel imaging.

    PubMed

    Bouhrara, Mustapha; Spencer, Richard G

    2018-06-01

    The Cramér-Rao lower bound (CRLB) is widely used in the design of magnetic resonance (MR) experiments for parameter estimation. Previous work has considered only Gaussian or Rician noise distributions in this calculation. However, the noise distribution for multi-coil acquisitions, such as in parallel imaging, obeys the noncentral χ-distribution under many circumstances. The purpose of this paper is to present the CRLB calculation for parameter estimation from multi-coil acquisitions. We perform explicit calculations of Fisher matrix elements and the associated CRLB for noise distributions following the noncentral χ-distribution. The special case of diffusion kurtosis is examined as an important example. For comparison with analytic results, Monte Carlo (MC) simulations were conducted to evaluate experimental minimum standard deviations (SDs) in the estimation of diffusion kurtosis model parameters. Results were obtained for a range of signal-to-noise ratios (SNRs), and for both the conventional case of Gaussian noise distribution and noncentral χ-distribution with different numbers of coils, m. At low-to-moderate SNR, the noncentral χ-distribution deviates substantially from the Gaussian distribution. Our results indicate that this departure is more pronounced for larger values of m. As expected, the minimum SDs (i.e., CRLB) in derived diffusion kurtosis model parameters assuming a noncentral χ-distribution provided a closer match to the MC simulations as compared to the Gaussian results. Estimates of minimum variance for parameter estimation and experimental design provided by the CRLB must account for the noncentral χ-distribution of noise in multi-coil acquisitions, especially in the low-to-moderate SNR regime. Magn Reson Med 79:3249-3255, 2018. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.
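
    The same kind of calculation can be cross-checked numerically. The Python sketch below is our own simplified version, not the authors' code: the coil count, noise level, b-values, and kurtosis parameters are illustrative, and the signal model is the standard single-direction diffusion kurtosis expression. It builds the Fisher matrix for a noncentral χ likelihood by integrating the outer product of the score functions over the magnitude signal and reads the CRLB off the diagonal of its inverse.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import ive

def ncchi_logpdf(M, A, sigma, m):
    """Log-pdf of an m-coil sum-of-squares magnitude M for true signal A and
    per-channel noise sigma (noncentral chi with 2m degrees of freedom)."""
    z = A * M / sigma**2
    return (m * np.log(M) - (m - 1) * np.log(A) - 2.0 * np.log(sigma)
            - (M**2 + A**2) / (2.0 * sigma**2) + np.log(ive(m - 1, z)) + z)

def fisher_ncchi(theta, bvals, sigma, m, signal, eps=1e-5):
    """Fisher matrix for the parameters of signal(b, theta) under noncentral
    chi noise, by numerical integration over the magnitude variable."""
    A_max = max(signal(b, theta) for b in bvals)
    M = np.linspace(1e-6, A_max + 12.0 * sigma * np.sqrt(m), 20_000)
    F = np.zeros((len(theta), len(theta)))
    for b in bvals:
        p = np.exp(ncchi_logpdf(M, signal(b, theta), sigma, m))
        grads = []
        for i in range(len(theta)):               # d log p / d theta_i, central differences
            tp, tm = np.array(theta, float), np.array(theta, float)
            h = eps * max(abs(theta[i]), 1.0)
            tp[i] += h
            tm[i] -= h
            grads.append((ncchi_logpdf(M, signal(b, tp), sigma, m)
                          - ncchi_logpdf(M, signal(b, tm), sigma, m)) / (2.0 * h))
        for i in range(len(theta)):
            for j in range(len(theta)):
                F[i, j] += trapezoid(p * grads[i] * grads[j], M)
    return F

def dki_signal(b, theta):
    s0, d, k = theta
    return s0 * np.exp(-b * d + (b * d) ** 2 * k / 6.0)

theta = (1.0, 1.0e-3, 1.0)                 # S0, D [mm^2/s], K (illustrative values)
bvals = [0.0, 1000.0, 2000.0]              # b-values in s/mm^2
F = fisher_ncchi(theta, bvals, sigma=0.02, m=8, signal=dki_signal)
print(np.sqrt(np.diag(np.linalg.inv(F))))  # CRLB standard deviations for (S0, D, K)
```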

  18. Golden-ratio rotated stack-of-stars acquisition for improved volumetric MRI.

    PubMed

    Zhou, Ziwu; Han, Fei; Yan, Lirong; Wang, Danny J J; Hu, Peng

    2017-12-01

    To develop and evaluate an improved stack-of-stars radial sampling strategy for reducing streaking artifacts. The conventional stack-of-stars sampling strategy collects the same radial angle for every partition (slice) encoding. In an undersampled acquisition, such an aligned acquisition generates coherent aliasing patterns and introduces strong streaking artifacts. We show that by rotating the radial spokes in a golden-angle manner along the partition-encoding direction, the aliasing pattern is modified, resulting in improved image quality for gridding and more advanced reconstruction methods. Computer simulations were performed and phantom as well as in vivo images for three different applications were acquired. Simulation, phantom, and in vivo experiments confirmed that the proposed method was able to generate images with less streaking artifact and sharper structures based on undersampled acquisitions in comparison with the conventional aligned approach at the same acceleration factors. By combining parallel imaging and compressed sensing in the reconstruction, streaking artifacts were mostly removed with improved delineation of fine structures using the proposed strategy. We present a simple method to reduce streaking artifacts and improve image quality in 3D stack-of-stars acquisitions by re-arranging the radial spoke angles in the 3D partition direction, which can be used for rapid volumetric imaging. Magn Reson Med 78:2290-2298, 2017. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.
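
    The angle bookkeeping behind the proposed sampling is compact. The Python sketch below contrasts the conventional aligned scheme with a golden-ratio rotation of each partition's spoke set; the exact per-partition rotation rule used in the paper is not reproduced here, so the offset formula should be read as an assumption that captures the idea.

```python
import numpy as np

PHI = (1.0 + np.sqrt(5.0)) / 2.0                  # golden ratio

def stack_of_stars_angles(n_spokes, n_partitions, golden_rotated=True):
    """Spoke angles for a stack-of-stars trajectory, shape (n_partitions, n_spokes).

    Conventional: every partition reuses the same uniformly spaced angles, so
    undersampling aliasing adds up coherently across partitions.
    Golden-rotated (sketch of the idea, exact rule assumed): each partition's
    spoke set is rotated by a golden-ratio fraction of the angular spacing,
    de-cohering the aliasing along the partition-encoding direction.
    """
    delta = np.pi / n_spokes                      # uniform spacing over 180 degrees
    base = np.arange(n_spokes) * delta
    if not golden_rotated:
        return np.tile(base, (n_partitions, 1))
    offsets = np.mod(np.arange(n_partitions) * delta / PHI, delta)
    return base[None, :] + offsets[:, None]

angles = stack_of_stars_angles(n_spokes=21, n_partitions=8)
print(np.degrees(angles[:3, :4]))                 # first few angles of the first partitions
```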

  19. Dynamic autofocus for continuous-scanning time-delay-and-integration image acquisition in automated microscopy.

    PubMed

    Bravo-Zanoguera, Miguel E; Laris, Casey A; Nguyen, Lam K; Oliva, Mike; Price, Jeffrey H

    2007-01-01

    Efficient image cytometry of a conventional microscope slide means rapid acquisition and analysis of 20 gigapixels of image data (at 0.3-microm sampling). The voluminous data motivate increased acquisition speed to enable many biomedical applications. Continuous-motion time-delay-and-integrate (TDI) scanning has the potential to speed image acquisition while retaining sensitivity, but the challenge of implementing high-resolution autofocus operating simultaneously with acquisition has limited its adoption. We develop a dynamic autofocus system for this need using: 1. a "volume camera," consisting of nine fiber optic imaging conduits to charge-coupled device (CCD) sensors, that acquires images in parallel from different focal planes, 2. an array of mixed analog-digital processing circuits that measure the high spatial frequencies of the multiple image streams to create focus indices, and 3. a software system that reads and analyzes the focus data streams and calculates best focus for closed feedback loop control. Our system updates autofocus at 56 Hz (or once every 21 microm of stage travel) to collect sharply focused images sampled at 0.3x0.3 microm(2)/pixel at a stage speed of 2.3 mm/s. The system, tested by focusing in phase contrast and imaging long fluorescence strips, achieves high-performance closed-loop image-content-based autofocus in continuous scanning for the first time.
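
    A purely software analogue of the focus-index measurement makes the idea concrete. The snippet below is illustrative only (the actual system computes the indices with mixed analog-digital circuits on nine simultaneous video streams); it scores each focal-plane image by its high-spatial-frequency energy and picks the sharpest plane.

```python
import numpy as np

def focus_index(img):
    """Image-content focus measure: energy of the horizontal first difference,
    a software stand-in for the high-pass filtered video power described above."""
    d = np.diff(np.asarray(img, dtype=float), axis=1)
    return float(np.mean(d ** 2))

def best_focus(planes):
    """planes: images acquired in parallel from different focal planes (the
    'volume camera'); returns the index of the sharpest plane, which would
    drive the closed-loop stage correction."""
    return int(np.argmax([focus_index(p) for p in planes]))

# Toy example: the second plane carries higher-contrast texture, i.e. is 'in focus'.
rng = np.random.default_rng(0)
blurry = rng.normal(0.0, 0.1, (64, 64))
sharp = rng.normal(0.0, 1.0, (64, 64))
print(best_focus([blurry, sharp]))    # -> 1
```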

  20. DInSAR time series generation within a cloud computing environment: from ERS to Sentinel-1 scenario

    NASA Astrophysics Data System (ADS)

    Casu, Francesco; Elefante, Stefano; Imperatore, Pasquale; Lanari, Riccardo; Manunta, Michele; Zinno, Ivana; Mathot, Emmanuel; Brito, Fabrice; Farres, Jordi; Lengert, Wolfgang

    2013-04-01

    One of the techniques that will strongly benefit from the advent of the Sentinel-1 system is Differential SAR Interferometry (DInSAR), which has been successfully demonstrated to be an effective tool for detecting and monitoring ground displacements with centimetre accuracy. The geoscience communities (volcanology, seismicity, ...), as well as those concerned with hazard monitoring and risk mitigation, make extensive use of the DInSAR technique and will take advantage of the huge amount of SAR data acquired by Sentinel-1. Indeed, such information will permit the generation of Earth's surface displacement maps and time series over both large areas and long time spans. However, the issue of managing, processing and analysing the large Sentinel data stream is envisaged by the scientific community to be a major bottleneck, particularly during crisis phases. The emerging need to create a common ecosystem in which data, results and processing tools are shared is seen as a successful way to address this problem and to contribute to the spreading of information and knowledge. The Supersites initiative, as well as the ESA SuperSites Exploitation Platform (SSEP) and the ESA Cloud Computing Operational Pilot (CIOP) projects, provide effective answers to this need and are pushing towards the development of such an ecosystem. It is clear that all the current tools for querying, processing and analysing SAR data need not only to be updated to manage the large data stream of the Sentinel-1 satellite, but also to be reorganized to respond quickly to simultaneous and highly demanding user requests, mainly during emergency situations. This translates into the automatic and unsupervised processing of large amounts of data as well as the availability of scalable, widely accessible and high-performance computing capabilities. The cloud computing environment makes it possible to achieve all of these objectives, particularly in the case of spikes and peaks in processing-resource requests linked to disaster events. This work presents a parallel computational model for the widely used DInSAR algorithm named Small BAseline Subset (SBAS), which has been implemented within the cloud computing environment provided by the ESA-CIOP platform. This activity has resulted in a scalable, unsupervised, portable, and widely accessible (through a web portal) parallel DInSAR computational tool. The SBAS application algorithm was rewritten and developed within a parallel system environment, i.e., in a form that allows it to benefit from multiple processing units. This required devising a parallel version of the SBAS algorithm and then implementing it, implying additional complexity in algorithm design and efficient multiprocessor programming, with the final aim of parallel performance optimization. Although the presented algorithm has been designed to work with Sentinel-1 data, it can also process other satellite SAR data (ERS, ENVISAT, CSK, TSX, ALOS). Indeed, the performance of the implemented parallel SBAS version has been tested on the full ASAR archive (64 acquisitions) acquired over the Napoli Bay, a volcanic and densely urbanized area in Southern Italy. The full processing - from the raw data download to the generation of DInSAR time series - was carried out on 4 nodes, each with 2 cores and 16 GB of RAM, and took about 36 hours, compared with about 135 hours for the sequential version. Extensive analyses of other test areas that are significant from DInSAR and geophysical viewpoints will also be presented. Finally, a preliminary performance evaluation of the presented approach within the Sentinel-1 scenario will be provided.

  1. Simulation/Emulation Techniques: Compressing Schedules With Parallel (HW/SW) Development

    NASA Technical Reports Server (NTRS)

    Mangieri, Mark L.; Hoang, June

    2014-01-01

    NASA has always been in the business of balancing new technologies and techniques to achieve human space travel objectives. NASA's Kedalion engineering analysis lab has been validating and using many contemporary avionics HW/SW development and integration techniques, which represent new paradigms to NASA's heritage culture. Kedalion has validated many of the Orion HW/SW engineering techniques borrowed from the adjacent commercial aircraft avionics solution space, inserting new techniques and skills into the Multi-Purpose Crew Vehicle (MPCV) Orion program. Using contemporary agile techniques, Commercial-off-the-shelf (COTS) products, early rapid prototyping, in-house expertise and tools, and extensive use of simulators and emulators, NASA has achieved cost effective paradigms that are currently serving the Orion program effectively. Elements of long lead custom hardware on the Orion program have necessitated early use of simulators and emulators in advance of deliverable hardware to achieve parallel design and development on a compressed schedule.

  2. A comparison of five standard methods for evaluating image intensity uniformity in partially parallel imaging MRI

    PubMed Central

    Goerner, Frank L.; Duong, Timothy; Stafford, R. Jason; Clarke, Geoffrey D.

    2013-01-01

    Purpose: To investigate the utility of five different standard measurement methods for determining image uniformity for partially parallel imaging (PPI) acquisitions in terms of consistency across a variety of pulse sequences and reconstruction strategies. Methods: Images were produced with a phantom using a 12-channel head matrix coil in a 3T MRI system (TIM TRIO, Siemens Medical Solutions, Erlangen, Germany). Images produced using echo-planar, fast spin echo, gradient echo, and balanced steady state free precession pulse sequences were evaluated. Two different PPI reconstruction methods were investigated: the generalized autocalibrating partially parallel acquisition algorithm (GRAPPA) and modified sensitivity-encoding (mSENSE), with acceleration factors (R) of 2, 3, and 4. Additionally, images were acquired with conventional, two-dimensional Fourier imaging methods (R = 1). Five measurement methods of uniformity, recommended by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA), were considered. The methods investigated were (1) an ACR method and (2) a NEMA method for calculating the peak deviation nonuniformity, (3) a modification of a NEMA method used to produce a gray scale uniformity map, (4) determining the normalized absolute average deviation uniformity, and (5) a NEMA method that focused on 17 areas of the image to measure uniformity. Changes in uniformity as a function of reconstruction method at the same R-value were also investigated. Two-way analysis of variance (ANOVA) was used to determine whether R-value or reconstruction method had a greater influence on signal intensity uniformity measurements for partially parallel MRI. Results: Two of the methods studied had consistently negative slopes when signal intensity uniformity was plotted against R-value. The results obtained comparing mSENSE against GRAPPA found no consistent difference between GRAPPA and mSENSE with regard to signal intensity uniformity. The results of the two-way ANOVA suggest that R-value and pulse sequence type produce the largest influences on uniformity and PPI reconstruction method had relatively little effect. Conclusions: Two of the methods of measuring signal intensity uniformity, described by the NEMA MRI standards, consistently indicated a decrease in uniformity with an increase in R-value. Other methods investigated did not demonstrate consistent results for evaluating signal uniformity in MR images obtained by partially parallel methods. However, because the spatial distribution of noise affects uniformity, it is recommended that additional uniformity quality metrics be investigated for partially parallel MR images. PMID:23927345
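
    Two of the referenced measures are easy to state concretely. The Python snippet below gives simplified, single-ROI versions of the peak-deviation nonuniformity and the normalized absolute average deviation (NAAD) uniformity; the exact ACR/NEMA procedures prescribe specific ROI sizes, filtering, and search windows that are omitted here.

```python
import numpy as np

def peak_deviation_nonuniformity(roi):
    """Peak-deviation nonuniformity (percent): larger values mean a less
    uniform image.  Simplified to per-pixel extrema inside a single ROI."""
    smax, smin = float(np.max(roi)), float(np.min(roi))
    return 100.0 * (smax - smin) / (smax + smin)

def naad_uniformity(roi):
    """Normalized absolute average deviation uniformity (percent):
    100 means a perfectly uniform signal across the ROI."""
    mean = float(np.mean(roi))
    return 100.0 * (1.0 - np.mean(np.abs(roi - mean)) / mean)

roi = np.random.default_rng(0).normal(1000.0, 20.0, (128, 128))   # synthetic phantom ROI
print(peak_deviation_nonuniformity(roi), naad_uniformity(roi))
```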

  3. Quantitative high-efficiency cadmium-zinc-telluride SPECT with dedicated parallel-hole collimation system in obese patients: results of a multi-center study.

    PubMed

    Nakazato, Ryo; Slomka, Piotr J; Fish, Mathews; Schwartz, Ronald G; Hayes, Sean W; Thomson, Louise E J; Friedman, John D; Lemley, Mark; Mackin, Maria L; Peterson, Benjamin; Schwartz, Arielle M; Doran, Jesse A; Germano, Guido; Berman, Daniel S

    2015-04-01

    Obesity is a common source of artifact on conventional SPECT myocardial perfusion imaging (MPI). We evaluated image quality and diagnostic performance of high-efficiency (HE) cadmium-zinc-telluride parallel-hole SPECT MPI for coronary artery disease (CAD) in obese patients. 118 consecutive obese patients at three centers (BMI 43.6 ± 8.9 kg·m(-2), range 35-79.7 kg·m(-2)) had upright/supine HE-SPECT and invasive coronary angiography > 6 months (n = 67) or low likelihood of CAD (n = 51). Stress quantitative total perfusion deficit (TPD) for upright (U-TPD), supine (S-TPD), and combined acquisitions (C-TPD) was assessed. Image quality (IQ; 5 = excellent; < 3 nondiagnostic) was compared among BMI 35-39.9 (n = 58), 40-44.9 (n = 24) and ≥45 (n = 36) groups. ROC curve area for CAD detection (≥50% stenosis) for U-TPD, S-TPD, and C-TPD were 0.80, 0.80, and 0.87, respectively. Sensitivity/specificity was 82%/57% for U-TPD, 74%/71% for S-TPD, and 80%/82% for C-TPD. C-TPD had highest specificity (P = .02). C-TPD normalcy rate was higher than U-TPD (88% vs 75%, P = .02). Mean IQ was similar among BMI 35-39.9, 40-44.9 and ≥45 groups [4.6 vs 4.4 vs 4.5, respectively (P = .6)]. No patient had a nondiagnostic stress scan. In obese patients, HE-SPECT MPI with dedicated parallel-hole collimation demonstrated high image quality, normalcy rate, and diagnostic accuracy for CAD by quantitative analysis of combined upright/supine acquisitions.

  4. Quantitative High-Efficiency Cadmium-Zinc-Telluride SPECT with Dedicated Parallel-Hole Collimation System in Obese Patients: Results of a Multi-Center Study

    PubMed Central

    Nakazato, Ryo; Slomka, Piotr J.; Fish, Mathews; Schwartz, Ronald G.; Hayes, Sean W.; Thomson, Louise E.J.; Friedman, John D.; Lemley, Mark; Mackin, Maria L.; Peterson, Benjamin; Schwartz, Arielle M.; Doran, Jesse A.; Germano, Guido; Berman, Daniel S.

    2014-01-01

    Background Obesity is a common source of artifact on conventional SPECT myocardial perfusion imaging (MPI). We evaluated image quality and diagnostic performance of high-efficiency (HE) cadmium-zinc-telluride (CZT) parallel-hole SPECT-MPI for coronary artery disease (CAD) in obese patients. Methods and Results 118 consecutive obese patients at 3 centers (BMI 43.6±8.9 kg/m2, range 35–79.7 kg/m2) had upright/supine HE-SPECT and ICA >6 months (n=67) or low-likelihood of CAD (n=51). Stress quantitative total perfusion deficit (TPD) for upright (U-TPD), supine (S-TPD) and combined acquisitions (C-TPD) was assessed. Image quality (IQ; 5=excellent; <3 nondiagnostic) was compared among BMI 35–39.9 (n=58), 40–44.9 (n=24) and ≥45 (n=36) groups. ROC-curve area for CAD detection (≥50% stenosis) for U-TPD, S-TPD, and C-TPD were 0.80, 0.80, and 0.87, respectively. Sensitivity/specificity was 82%/57% for U-TPD, 74%/71% for S-TPD, and 80%/82% for C-TPD. C-TPD had highest specificity (P=.02). C-TPD normalcy rate was higher than U-TPD (88% vs. 75%, P=.02). Mean IQ was similar among BMI 35–39.9, 40–44.9 and ≥45 groups [4.6 vs. 4.4 vs. 4.5, respectively (P=.6)]. No patient had a non-diagnostic stress scan. Conclusions In obese patients, HE-SPECT MPI with dedicated parallel-hole collimation demonstrated high image quality, normalcy rate, and diagnostic accuracy for CAD by quantitative analysis of combined upright/supine acquisitions. PMID:25388380

  5. Water Selective Imaging and bSSFP Banding Artifact Correction in Humans and Small Animals at 3T and 7T, Respectively

    PubMed Central

    Ribot, Emeline J.; Wecker, Didier; Trotier, Aurélien J.; Dallaudière, Benjamin; Lefrançois, William; Thiaudière, Eric; Franconi, Jean-Michel; Miraux, Sylvain

    2015-01-01

    Introduction The purpose of this paper is to develop an easy method to generate both fat-signal-free and banding-artifact-free 3D balanced Steady State Free Precession (bSSFP) images at high magnetic field. Methods In order to suppress fat signal and bSSFP banding artifacts, two or four images were acquired with the excitation frequency of the water-selective binomial radiofrequency pulse set On Resonance or shifted by a maximum of 3/4TR. Mice and human volunteers were imaged at 7T and 3T, respectively, to perform whole-body and musculoskeletal imaging. "Sum-Of-Square" reconstruction was performed, with or without parallel imaging. Results The frequency selectivity of 1-2-3-2-1 or 1-3-3-1 binomial pulses was preserved after (3/4TR) frequency shifting. Consequently, whole-body small-animal 3D imaging was performed at 7T and enabled visualization of small structures within adipose tissue, such as lymph nodes. In parallel, this method allowed 3D musculoskeletal imaging in humans with high spatial resolution at 3T. The combination with parallel imaging allowed the acquisition of knee images with ~500 μm resolution in less than 2 min. In addition, ankles, full head coverage and legs of volunteers were imaged, demonstrating the possible application of the method also for large FOV. Conclusion In conclusion, this robust method can be applied in small animals and humans at high magnetic fields. The high SNR and tissue contrast obtained in short acquisition times allow the bSSFP sequence to be prescribed for several preclinical and clinical applications. PMID:26426849
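
    The combination step itself is a one-liner once the frequency-shifted acquisitions are in hand. The sketch below shows a Sum-Of-Square combination of phase-cycled bSSFP volumes in Python; the array shapes and the synthetic data are assumptions, and the water-selective binomial excitation is not modelled.

```python
import numpy as np

def sos_combine(phase_cycled_images):
    """Sum-Of-Square combination of N bSSFP acquisitions whose excitation
    frequency was shifted between repeats: the banding minima fall at different
    off-resonance positions, so the combined image is largely band-free."""
    stack = np.stack([np.abs(np.asarray(im)) for im in phase_cycled_images])
    return np.sqrt(np.sum(stack ** 2, axis=0))

# Example with four synthetic volumes (e.g. 0, 1/4, 1/2 and 3/4 of 1/TR shifts).
vols = [np.random.default_rng(i).random((32, 64, 64)) for i in range(4)]
band_free = sos_combine(vols)
print(band_free.shape)
```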

  6. Cancer heterogeneity: converting a limitation into a source of biologic information.

    PubMed

    Rübben, Albert; Araujo, Arturo

    2017-09-08

    Analysis of spatial and temporal genetic heterogeneity in human cancers has revealed that somatic cancer evolution in most cancers is not a simple linear process composed of a few sequential steps of mutation acquisitions and clonal expansions. Parallel evolution has been observed in many early human cancers, resulting in genetic heterogeneity as well as multilineage progression. Moreover, aneuploidy as well as structural chromosomal aberrations seems to be acquired in a non-linear, punctuated mode where most aberrations occur at early stages of somatic cancer evolution. At later stages, the cancer genomes seem to get stabilized and acquire only a few additional rearrangements. While parallel evolution suggests positive selection of driver mutations at early stages of somatic cancer evolution, stabilization of structural aberrations at later stages suggests that negative selection takes effect when cancer cells progressively lose their tolerance towards additional mutation acquisition. Mixing of genetically heterogeneous subclones in cancer samples reduces sensitivity of mutation detection. Moreover, driver mutations present only in a fraction of cancer cells are more likely to be mistaken for passenger mutations. Therefore, genetic heterogeneity may be considered a limitation negatively affecting detection sensitivity of driver mutations. On the other hand, identification of subclones and subclone lineages in human cancers may lead to a more profound understanding of the selective forces which shape somatic cancer evolution in human cancers. Identification of parallel evolution by analyzing spatial heterogeneity may hint at driver mutations which might represent additional therapeutic targets besides driver mutations present in a monoclonal state. Likewise, stabilization of cancer genomes, which can be identified by analyzing temporal genetic heterogeneity, might hint at genes and pathways which have become essential for survival of cancer cell lineages at later stages of cancer evolution. These genes and pathways might also constitute patient-specific therapeutic targets.

  7. Design of Multishell Sampling Schemes with Uniform Coverage in Diffusion MRI

    PubMed Central

    Caruyer, Emmanuel; Lenglet, Christophe; Sapiro, Guillermo; Deriche, Rachid

    2017-01-01

    Purpose In diffusion MRI, a technique known as diffusion spectrum imaging reconstructs the propagator with a discrete Fourier transform, from a Cartesian sampling of the diffusion signal. Alternatively, it is possible to directly reconstruct the orientation distribution function in q-ball imaging, providing so-called high angular resolution diffusion imaging. In between these two techniques, acquisitions on several spheres in q-space offer an interesting trade-off between the angular resolution and the radial information gathered in diffusion MRI. A careful design is central in the success of multishell acquisition and reconstruction techniques. Methods The design of acquisition in multishell is still an open and active field of research, however. In this work, we provide a general method to design multishell acquisition with uniform angular coverage. This method is based on a generalization of electrostatic repulsion to multishell. Results We evaluate the impact of our method using simulations, on the angular resolution in one and two bundles of fiber configurations. Compared to more commonly used radial sampling, we show that our method improves the angular resolution, as well as fiber crossing discrimination. Discussion We propose a novel method to design sampling schemes with optimal angular coverage and show the positive impact on angular resolution in diffusion MRI. PMID:23625329
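
    As a point of reference for the generalization described above, the single-shell special case of the electrostatic criterion can be written in a few lines of Python; the multishell cost in the paper couples per-shell and whole-set terms, which this sketch does not attempt, and the step size and iteration count are ad hoc.

```python
import numpy as np

def repulsion_directions(n, iters=4000, lr=5e-4, seed=0):
    """Naive gradient ascent on an electrostatic-repulsion cost for n unit
    vectors with antipodal symmetry (q and -q probe the same diffusion
    contrast), i.e. the classical single-shell scheme design."""
    rng = np.random.default_rng(seed)
    p = rng.standard_normal((n, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    for _ in range(iters):
        force = np.zeros_like(p)
        for sign in (1.0, -1.0):                   # repulsion from points and their antipodes
            d = p[:, None, :] - sign * p[None, :, :]
            r = np.linalg.norm(d, axis=-1)
            np.fill_diagonal(r, np.inf)            # ignore self terms
            force += np.sum(d / r[..., None] ** 3, axis=1)
        p += lr * force                            # move along the net Coulomb force
        p /= np.linalg.norm(p, axis=1, keepdims=True)   # project back onto the unit sphere
    return p

dirs = repulsion_directions(30)
print(dirs.shape)                                  # (30, 3) roughly uniformly spread directions
```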

  8. [Parallel virtual reality visualization of extreme large medical datasets].

    PubMed

    Tang, Min

    2010-04-01

    On the basis of a brief description of grid computing, the essence and critical techniques of parallel visualization of extremely large medical datasets are discussed in connection with the Intranet and common-configuration computers of hospitals. Several key techniques are introduced, including the hardware structure, software framework, load balancing and virtual reality visualization. The Maximum Intensity Projection algorithm is realized in parallel using a common PC cluster. In the virtual reality world, three-dimensional models can be rotated, zoomed, translated and cut interactively and conveniently through the control panel built on the Virtual Reality Modeling Language (VRML). Experimental results demonstrate that this method provides promising, real-time results and can play the role of a good assistant in making clinical diagnoses.

  9. Enhancing instruction scheduling with a block-structured ISA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melvin, S.; Patt, Y.

    It is now generally recognized that not enough parallelism exists within the small basic blocks of most general purpose programs to satisfy high performance processors. Thus, a wide variety of techniques have been developed to exploit instruction level parallelism across basic block boundaries. In this paper we discuss some previous techniques along with their hardware and software requirements. Then we propose a new paradigm for an instruction set architecture (ISA): block-structuring. This new paradigm is presented, its hardware and software requirements are discussed and the results from a simulation study are presented. We show that a block-structured ISA utilizes both dynamic and compile-time mechanisms for exploiting instruction level parallelism and has significant performance advantages over a conventional ISA.

  10. Combining factual and heuristic knowledge in knowledge acquisition

    NASA Technical Reports Server (NTRS)

    Gomez, Fernando; Hull, Richard; Karr, Clark; Hosken, Bruce; Verhagen, William

    1992-01-01

    A knowledge acquisition technique that combines heuristic and factual knowledge represented as two hierarchies is described. These ideas were applied to the construction of a knowledge acquisition interface to the Expert System Analyst (OPERA). The goal of OPERA is to improve the operations support of the computer network in the space shuttle launch processing system. The knowledge acquisition bottleneck lies in gathering knowledge from human experts and transferring it to OPERA. OPERA's knowledge acquisition problem is approached as a classification problem-solving task, combining this approach with the use of factual knowledge about the domain. The interface was implemented in a Symbolics workstation making heavy use of windows, pull-down menus, and other user-friendly devices.

  11. Open-loop frequency acquisition for suppressed-carrier biphase signals using one-pole arm filters

    NASA Technical Reports Server (NTRS)

    Shah, B.; Holmes, J. K.

    1991-01-01

    Open loop frequency acquisition performance is discussed for suppressed carrier binary phase shift keyed signals in terms of the probability of detecting the carrier frequency offset when the arms of the Costas loop detector have one-pole filters. The approach, which does not require symbol timing, uses fast Fourier transforms (FFTs) to detect the carrier frequency offset. The detection probability, which depends on both the 3 dB arm filter bandwidth and the received symbol signal-to-noise ratio, is derived and is shown to be independent of symbol timing. It is shown that the performance of this technique is slightly better than that of other open loop acquisition techniques which use integrators in the arms and whose detection performance varies with symbol timing.
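
    A generic open-loop detector of this kind is easy to prototype. The Python sketch below squares a suppressed-carrier BPSK signal to wipe the modulation and locates the offset as half the frequency of the resulting spectral line; this is only an illustration of the FFT-based idea, since the detector above operates on the filtered Costas-loop arm products, and all signal parameters are made up.

```python
import numpy as np

def estimate_offset(rx, fs):
    """Open-loop carrier-offset detection for suppressed-carrier BPSK:
    squaring removes the +/-1 modulation, so the FFT of rx**2 shows a line
    at twice the carrier frequency offset."""
    spec = np.abs(np.fft.fftshift(np.fft.fft(rx ** 2)))
    freqs = np.fft.fftshift(np.fft.fftfreq(len(rx), 1.0 / fs))
    return freqs[np.argmax(spec)] / 2.0

# Synthetic test: one symbol per sample, 200 Hz offset, moderate noise.
fs, n, f_off = 8000.0, 4096, 200.0
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], n)
t = np.arange(n) / fs
rx = symbols * np.exp(2j * np.pi * f_off * t) \
     + 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
print(estimate_offset(rx, fs))          # close to 200 Hz
```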

  12. New Materials, Techniques and Device Concepts for Organic NLO Chromophore-based Electrooptic Devices. Part 1

    DTIC Science & Technology

    2006-08-23

    polarization the electric field vector is parallel to the substrate, for TM polarization the magnetic field vector is parallel to the substrate. Figure...section can be obtained for the case of the two electromagnetic field polarization vectors λ and µ describing the two photons being absorbed (of the same or... polarization effects on two-photon absorption as investigated by the technique of thermal lensing detected absorption of a mode- locked laser beam. This

  13. Helium-3 MR q-space imaging with radial acquisition and iterative highly constrained back-projection.

    PubMed

    O'Halloran, Rafael L; Holmes, James H; Wu, Yu-Chien; Alexander, Andrew; Fain, Sean B

    2010-01-01

    An undersampled diffusion-weighted stack-of-stars acquisition is combined with iterative highly constrained back-projection to perform hyperpolarized helium-3 MR q-space imaging with combined regional correction of radiofrequency- and T1-related signal loss in a single breath-held scan. The technique is tested in computer simulations and phantom experiments and demonstrated in a healthy human volunteer with whole-lung coverage in a 13-sec breath-hold. Measures of lung microstructure at three different lung volumes are evaluated using inhaled gas volumes of 500 mL, 1000 mL, and 1500 mL to demonstrate feasibility. Phantom results demonstrate that the proposed technique is in agreement with theoretical values, as well as with a fully sampled two-dimensional Cartesian acquisition. Results from the volunteer study demonstrate that the root mean squared diffusion distance increased significantly from the 500-mL volume to the 1000-mL volume. This technique represents the first demonstration of a spatially resolved hyperpolarized helium-3 q-space imaging technique and shows promise for microstructural evaluation of lung disease in three dimensions. Copyright (c) 2009 Wiley-Liss, Inc.

  14. A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Rao, Hariprasad Nannapaneni

    1989-01-01

    The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.

  15. Breast MRI in community practice: equipment and imaging techniques at facilities in the Breast Cancer Surveillance Consortium.

    PubMed

    DeMartini, Wendy B; Ichikawa, Laura; Yankaskas, Bonnie C; Buist, Diana; Kerlikowske, Karla; Geller, Berta; Onega, Tracy; Rosenberg, Robert D; Lehman, Constance D

    2010-11-01

    MRI is increasingly used for the detection of breast carcinoma. Little is known about breast MRI techniques among community practice facilities. The aim of this study was to evaluate equipment and acquisition techniques used by community facilities across the United States, including compliance with minimum standards by the ACRIN® 6667 Trial and the European Society of Breast Imaging. Breast Cancer Surveillance Consortium facilities performing breast MRI were identified and queried by survey regarding breast MRI equipment and technical parameters. Variables included scanner field strength, coil type, acquisition coverage, slice thickness, and the timing of the initial postcontrast sequence. Results were tallied and percentages of facilities meeting ACRIN® and European Society of Breast Imaging standards were calculated. From 23 facilities performing breast MRI, results were obtained from 14 (61%) facilities with 16 MRI scanners reporting 18 imaging parameters. Compliance with equipment recommendations of ≥1.5-T field strength was 94% and of a dedicated breast coil was 100%. Eighty-three percent of acquisitions used bilateral postcontrast techniques, and 78% used slice thickness≤3 mm. The timing of initial postcontrast sequences ranged from 58 seconds to 8 minutes 30 seconds, with 63% meeting recommendations for completion within 4 minutes. Nearly all surveyed facilities met ACRIN and European Society of Breast Imaging standards for breast MRI equipment. The majority met standards for acquisition parameters, although techniques varied, in particular for the timing of initial postcontrast imaging. Further guidelines by the ACR Breast MRI Accreditation Program will be of importance in facilitating standardized and high-quality breast MRI. Copyright © 2010 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  16. Auditory Processing Disorder and Foreign Language Acquisition

    ERIC Educational Resources Information Center

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  17. RESCUE - Reduction of MRI SNR Degradation by Using an MR-Synchronous Low-Interference PET Acquisition Technique

    NASA Astrophysics Data System (ADS)

    Gebhardt, Pierre; Wehner, Jakob; Weissler, Bjoern; Frach, Thomas; Marsden, Paul K.; Schulz, Volkmar

    2015-06-01

    Devices aiming at combined Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI) to enable simultaneous PET/MR image acquisition have to fulfill demanding requirements to avoid mutual magnetic- as well as electromagnetic-field-related interferences, which lead to image quality degradation. Particularly, Radio-Frequency (RF)-field-related interferences between PET and MRI may lead to MRI SNR reduction, thereby deteriorating MR image quality. RF shielding of PET electronics is therefore commonly applied to reduce RF emission and lower the potential coupling into MRI RF coil(s). However, shields introduce eddy-current-induced MRI field distortions and should thus be minimized or ideally omitted. Although the MRI noise floor increase caused by a PET system might be acceptable for many MRI applications, some MRI protocols, such as fast or high-resolution MRI scans, typically suffer from low SNR and might need more attention regarding RF silence to preserve the intrinsic MRI SNR. For such cases, we propose RESCUE, an MRI-synchronously-gated PET data acquisition technique: by interrupting the PET acquisition during MR signal receive phases, PET-related RF emission may be minimized, leading to MRI SNR preservation. Our PET insert Hyperion IID using Philips Digital Photon Counting (DPC) sensors serves as the platform to demonstrate RESCUE. To make the DPC sensor suitable for RESCUE with the many MRI sequences whose acquisition time windows are in the range of a few milliseconds, we present in this paper a new technique which enables rapid interruption of DPC sensor operation by dramatically lowering the overhead time needed to interrupt and restart the sensor. Procedures to enter and leave gated PET data acquisition may imply sensitivity losses which add to the ones occurring during MRI RF acquisition. For the case of our PET insert, the new DPC quick-interruption technique yields a PET sensitivity loss reduction by a factor of 78 when compared to the loss introduced with the standard start/stop procedure. For instance, PET sensitivity losses related to overhead time are 2.9% in addition to the loss related to PET gating, which equals the MRI RF acquisition duty cycle (14.7%), for an exemplary T1-weighted 3D-FFE MRI sequence. MRI SNR measurement results obtained with one Singles Detection Module (SDM) using no RF shield demonstrate a noise floor reduction by a factor of 2.1, getting close to the noise floor level of the SNR reference scan (SDM off-powered) when RESCUE was active.

  18. Validation of nonlinear interferometric vibrational imaging as a molecular OCT technique by the use of Raman microscopy

    NASA Astrophysics Data System (ADS)

    Benalcazar, Wladimir A.; Jiang, Zhi; Marks, Daniel L.; Geddes, Joseph B.; Boppart, Stephen A.

    2009-02-01

    We validate a molecular imaging technique called Nonlinear Interferometric Vibrational Imaging (NIVI) by comparing vibrational spectra with those acquired from Raman microscopy. This broadband coherent anti-Stokes Raman scattering (CARS) technique uses heterodyne detection and OCT acquisition and design principles to interfere a CARS signal generated by a sample with a local oscillator signal generated separately by a four-wave mixing process. These are mixed and demodulated by spectral interferometry. Its confocal configuration allows the acquisition of 3D images based on endogenous molecular signatures. Images from both phantom and mammary tissues have been acquired by this instrument and its spectrum is compared with its spontaneous Raman signatures.

  19. Mitigation of tropospheric InSAR phase artifacts through differential multisquint processing

    NASA Technical Reports Server (NTRS)

    Chen, Curtis W.

    2004-01-01

    We propose a technique for mitigating tropospheric phase errors in repeat-pass interferometric synthetic aperture radar (InSAR). The mitigation technique is based upon the acquisition of multisquint InSAR data. On each satellite pass over a target area, the radar instrument will acquire images from multiple squint (azimuth) angles, from which multiple interferograms can be formed. The diversity of viewing angles associated with the multisquint acquisition can be used to solve for two components of the 3-D surface displacement vector as well as for the differential tropospheric phase. We describe a model for the performance of the multisquint technique, and we present an assessment of the performance expected.

  20. High-energy physics software parallelization using database techniques

    NASA Astrophysics Data System (ADS)

    Argante, E.; van der Stok, P. D. V.; Willers, I.

    1997-02-01

    A programming model for software parallelization, called CoCa, is introduced that copes with problems caused by typical features of high-energy physics software. By basing CoCa on the database transaction paradigm, the complexity induced by the parallelization is for a large part transparent to the programmer, resulting in a higher level of abstraction than the native message passing software. CoCa is implemented on a Meiko CS-2 and on a SUN SPARCcenter 2000 parallel computer. On the CS-2, the performance is comparable with the performance of native PVM and MPI.

  1. Computer architecture evaluation for structural dynamics computations: Project summary

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1989-01-01

    The intent of the proposed effort is the examination of the impact of the elements of parallel architectures on the performance realized in a parallel computation. To this end, three major projects are developed: a language for the expression of high level parallelism, a statistical technique for the synthesis of multicomputer interconnection networks based upon performance prediction, and a queueing model for the analysis of shared memory hierarchies.

  2. War-gaming application for future space systems acquisition: MATLAB implementation of war-gaming acquisition models and simulation results

    NASA Astrophysics Data System (ADS)

    Vienhage, Paul; Barcomb, Heather; Marshall, Karel; Black, William A.; Coons, Amanda; Tran, Hien T.; Nguyen, Tien M.; Guillen, Andy T.; Yoh, James; Kizer, Justin; Rogers, Blake A.

    2017-05-01

    The paper describes the MATLAB (MathWorks) programs that were developed during the REU workshop to implement The Aerospace Corporation's Unified Game-based Acquisition Framework and Advanced Game-based Mathematical Framework (UGAF-AGMF) and its associated War-Gaming Engine (WGE) models. Each game can be played from the perspectives of the Department of Defense Acquisition Authority (DAA) or of an individual contractor (KTR). The programs also implement Aerospace's optimum "Program and Technical Baseline (PTB) and associated acquisition" strategy that combines low Total Ownership Cost (TOC) with innovative designs while still meeting warfighter needs. The paper also describes the Bayesian Acquisition War-Gaming approach using Monte Carlo simulations, a numerical analysis technique to account for uncertainty in decision making, which simulate the PTB development and acquisition processes, and details the procedure of the implementation and the interactions between the games.

  3. Cooperative storage of shared files in a parallel computing system with dynamic block size

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-11-10

    Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
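
    A minimal flavour of the cooperative-write idea can be given with MPI-IO through mpi4py. This is our own sketch under simplifying assumptions (every rank produces the same amount of data, and the data-exchange step is elided), not the patented file-system implementation; the file name is arbitrary.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

local = np.random.default_rng(rank).bytes(1_000_000)     # data produced by this rank
total = comm.allreduce(len(local), op=MPI.SUM)            # total amount to be stored
block = total // nprocs                                    # dynamically determined block size

# In the full scheme, ranks would exchange slices here (e.g. with Alltoallv)
# so that each one holds exactly `block` bytes; we assume that already holds.
buf = np.frombuffer(local[:block], dtype=np.uint8)

fh = MPI.File.Open(comm, "shared.out", MPI.MODE_WRONLY | MPI.MODE_CREATE)
fh.Write_at_all(rank * block, buf)                         # aligned collective write
fh.Close()
```

    Run with, for example, `mpiexec -n 4 python cooperative_write.py` (script name hypothetical); each rank then writes one block-aligned stripe of the shared file.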

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chrisochoides, N.; Sukup, F.

    In this paper we present a parallel implementation of the Bowyer-Watson (BW) algorithm using the task-parallel programming model. The BW algorithm constitutes an ideal mesh refinement strategy for implementing a large class of unstructured mesh generation techniques on both sequential and parallel computers, by avoiding the need for global mesh refinement. Its implementation on distributed memory multicomputers using the traditional data-parallel model has been proven very inefficient due to excessive synchronization needed among processors. In this paper we demonstrate that with the task-parallel model we can tolerate synchronization costs inherent to data-parallel methods by exploiting concurrency at the processor level. Our preliminary performance data indicate that the task-parallel approach: (i) is almost four times faster than the existing data-parallel methods, (ii) scales linearly, and (iii) introduces minimum overheads compared to the "best" sequential implementation of the BW algorithm.

  5. Application of interactive computer graphics in wind-tunnel dynamic model testing

    NASA Technical Reports Server (NTRS)

    Doggett, R. V., Jr.; Hammond, C. E.

    1975-01-01

    The computer-controlled data-acquisition system recently installed for use with a transonic dynamics tunnel was described. This includes a discussion of the hardware/software features of the system. A subcritical response damping technique, called the combined randomdec/moving-block method, for use in windtunnel-model flutter testing, that has been implemented on the data-acquisition system, is described in some detail. Some results using the method are presented and the importance of using interactive graphics in applying the technique in near real time during wind-tunnel test operations is discussed.

  6. Bioinformatics algorithm based on a parallel implementation of a machine learning approach using transducers

    NASA Astrophysics Data System (ADS)

    Roche-Lima, Abiel; Thulasiram, Ruppa K.

    2012-02-01

    Finite automata, in which each transition is augmented with an output label in addition to the familiar input label, are considered finite-state transducers. Transducers have been used to analyze some fundamental issues in bioinformatics. Weighted finite-state transducers have been proposed for pairwise alignments of DNA and protein sequences, as well as for developing kernels for computational biology. Machine learning algorithms for conditional transducers have been implemented and used for DNA sequence analysis. Transducer learning algorithms are based on conditional probability computation, calculated using techniques such as pair-database creation, normalization (with Maximum-Likelihood normalization) and parameter optimization (with Expectation-Maximization - EM). These techniques are intrinsically costly to compute, and even more so when applied to bioinformatics, because the database sizes are large. In this work, we describe a parallel implementation of an algorithm to learn conditional transducers using these techniques. The algorithm is oriented to bioinformatics applications, such as alignments, phylogenetic trees, and other genome evolution studies. Several experiments were carried out with the parallel and sequential algorithms on Westgrid (specifically, on the Breeze cluster). The results show that our parallel algorithm is scalable, because execution times are reduced considerably when the data size parameter is increased. Another experiment varied the precision parameter; in this case, we obtain smaller execution times using the parallel algorithm. Finally, the number of threads used to execute the parallel algorithm on the cluster was varied. In this last experiment, we find that the speedup increases considerably when more threads are used; however, there is convergence for numbers of threads equal to or greater than 16.

  7. SU-C-206-07: A Practical Sparse View Ultra-Low Dose CT Acquisition Scheme for PET Attenuation Correction in the Extended Scan Field-Of-View

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miao, J; Fan, J; Gopinatha Pillai, A

    Purpose: To further reduce CT dose, a practical sparse-view acquisition scheme is proposed to provide the same attenuation estimation as higher dose for PET imaging in the extended scan field-of-view. Methods: CT scans are often used for PET attenuation correction and can be acquired at very low CT radiation dose. Low dose techniques often employ low tube voltage/current accompanied with a smooth filter before backprojection to reduce CT image noise. These techniques can introduce bias in the conversion from HU to attenuation values, especially in the extended CT scan field-of-view (FOV). In this work, we propose an ultra-low dose CT technique for PET attenuation correction based on sparse-view acquisition. That is, instead of acquiring the full set of views, only a fraction of the views are acquired. We tested this technique on a 64-slice GE CT scanner using multiple phantoms. CT scan FOV truncation completion was performed based on the published water-cylinder extrapolation algorithm. Numbers of continuous views per rotation of 984 (full), 246, 123, 82 and 62 were tested, corresponding to CT dose reductions of none, 4x, 8x, 12x and 16x. We also simulated sparse-view acquisition by skipping views from the fully-acquired view data. Results: FBP reconstruction with the Q.AC filter on reduced views in the full extended scan field-of-view possesses similar image quality to the reconstruction on acquired full view data. The results showed a further potential for dose reduction compared to the full acquisition, without sacrificing any significant attenuation support to the PET. Conclusion: With the proposed sparse-view method, one can potentially achieve at least 2x more CT dose reduction compared to the current Ultra-Low Dose (ULD) PET/CT protocol. A pre-scan based dose modulation scheme can be combined with the above sparse-view approaches, which can even further reduce the CT scan dose during a PET/CT exam.
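
    The view-skipping simulation mentioned above is easy to mimic with a standard FBP toolchain. The sketch below uses scikit-image and a Shepp-Logan phantom purely as an illustration of sub-sampling a full set of 984 views; the scanner geometry, the Q.AC reconstruction filter, and the FOV-extension step are not modelled.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# Keep every k-th view of a fully sampled scan and reconstruct with FBP.
phantom = shepp_logan_phantom()[::2, ::2]            # 200 x 200, for speed
full_angles = np.linspace(0.0, 180.0, 984, endpoint=False)
sino = radon(phantom, theta=full_angles)

for skip in (1, 4, 8, 16):                           # 984, 246, 123 and 62 views
    theta = full_angles[::skip]
    recon = iradon(sino[:, ::skip], theta=theta, filter_name="ramp")
    err = np.sqrt(np.mean((recon - phantom) ** 2))
    print(f"{theta.size:4d} views   RMSE = {err:.4f}")
```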

  8. The influence of hand positions on biomechanical injury risk factors at the wrist joint during the round-off skills in female gymnastics.

    PubMed

    Farana, Roman; Jandacka, Daniel; Uchytil, Jaroslav; Zahradnik, David; Irwin, Gareth

    2017-01-01

    The aim of this study was to examine the biomechanical injury risk factors at the wrist, including joint kinetics, kinematics and stiffness in the first and second contact limb, for parallel and T-shape round-off (RO) techniques. Seven international-level female gymnasts performed 10 trials of the RO to back handspring with parallel and T-shape hand positions. Synchronised kinematic (3D motion analysis system; 247 Hz) and kinetic (two force plates; 1235 Hz) data were collected for each trial. A two-way repeated-measures analysis of variance (ANOVA) assessed differences in the kinematic and kinetic parameters between the techniques for each contact limb. The main findings highlighted that in both RO techniques the second contact limb wrist joint is exposed to higher mechanical loads than the first contact limb, as demonstrated by increased axial compression force and loading rate. In the parallel technique, the second contact limb wrist joint is exposed to a higher axial compression load. Differences in wrist joint kinetics highlight that the T-shape technique may reduce these biophysical loads and consequently protect the second contact limb wrist joint from overload and biological failure. Highlighting these biomechanical risk factors helps make the process of technique selection more objective and safer.

  9. Implementation of ADI schemes on MIMD parallel computers

    NASA Technical Reports Server (NTRS)

    Vanderwijngaart, Rob F.

    1993-01-01

    In order to simulate the effects of the impingement of hot exhaust jets of High Performance Aircraft on landing surfaces a multi-disciplinary computation coupling flow dynamics to heat conduction in the runway needs to be carried out. Such simulations, which are essentially unsteady, require very large computational power in order to be completed within a reasonable time frame of the order of an hour. Such power can be furnished by the latest generation of massively parallel computers. These remove the bottleneck of ever more congested data paths to one or a few highly specialized central processing units (CPU's) by having many off-the-shelf CPU's work independently on their own data, and exchange information only when needed. During the past year the first phase of this project was completed, in which the optimal strategy for mapping an ADI-algorithm for the three dimensional unsteady heat equation to a MIMD parallel computer was identified. This was done by implementing and comparing three different domain decomposition techniques that define the tasks for the CPU's in the parallel machine. These implementations were done for a Cartesian grid and Dirichlet boundary conditions. The most promising technique was then used to implement the heat equation solver on a general curvilinear grid with a suite of nontrivial boundary conditions. Finally, this technique was also used to implement the Scalar Penta-diagonal (SP) benchmark, which was taken from the NAS Parallel Benchmarks report. All implementations were done in the programming language C on the Intel iPSC/860 computer.

  10. Comparison of two tension-band fixation materials and techniques in transverse patella fractures: a biomechanical study.

    PubMed

    Rabalais, R David; Burger, Evalina; Lu, Yun; Mansour, Alfred; Baratta, Richard V

    2008-02-01

    This study compared the biomechanical properties of 2 tension-band techniques with stainless steel wire and ultra high molecular weight polyethylene (UHMWPE) cable in a patella fracture model. Transverse patella fractures were simulated in 8 cadaver knees and fixated with figure-of-8 and parallel wire configurations in combination with Kirschner wires. Identical configurations were tested with UHMWPE cable. Specimens were mounted to a testing apparatus and the quadriceps was used to extend the knees from 90 degrees to 0 degrees; 4 knees were tested under monotonic loading, and 4 knees were tested under cyclic loading. Under monotonic loading, average fracture gap was 0.50 and 0.57 mm for steel wire and UHMWPE cable, respectively, in the figure-of-8 construct compared with 0.16 and 0.04 mm, respectively, in the parallel wire construct. Under cyclic loading, average fracture gap was 1.45 and 1.66 mm for steel wire and UHMWPE cable, respectively, in the figure-of-8 construct compared with 0.45 and 0.60 mm, respectively, in the parallel wire construct. A statistically significant effect of technique was found, with the parallel wire construct performing better than the figure-of-8 construct in both loading models. There was no effect of material or interaction. In this biomechanical model, parallel wires performed better than the figure-of-8 configuration in both loading regimens, and UHMWPE cable performed similarly to 18-gauge steel wire.

  11. Data Acquisition and Linguistic Resources

    NASA Astrophysics Data System (ADS)

    Strassel, Stephanie; Christianson, Caitlin; McCary, John; Staderman, William; Olive, Joseph

    All human language technology demands substantial quantities of data for system training and development, plus stable benchmark data to measure ongoing progress. While creation of high quality linguistic resources is both costly and time consuming, such data has the potential to profoundly impact not just a single evaluation program but language technology research in general. GALE's challenging performance targets demand linguistic data on a scale and complexity never before encountered. Resources cover multiple languages (Arabic, Chinese, and English) and multiple genres -- both structured (newswire and broadcast news) and unstructured (web text, including blogs and newsgroups, and broadcast conversation). These resources include significant volumes of monolingual text and speech, parallel text, and transcribed audio combined with multiple layers of linguistic annotation, ranging from word aligned parallel text and Treebanks to rich semantic annotation.

  12. 48 CFR 15.101-2 - Lowest price technically acceptable source selection process.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Lowest price technically acceptable source selection process. 15.101-2 Section 15.101-2 Federal Acquisition Regulations System FEDERAL... Processes and Techniques 15.101-2 Lowest price technically acceptable source selection process. (a) The...

  13. Literacy for Migrants: An Ethnography of Literacy Acquisition among Nomads of Kutch.

    ERIC Educational Resources Information Center

    Dyer, Caroline; Choksi, Archana

    1997-01-01

    Recounts the creation of an ethnographic study of literacy acquisition among nomadic pastoralists in Gujarat (India). Describes the project's goals and methods, research problems, and experiments with Regenerated Freirean Literacy Through Empowering Community Techniques (REFLECT) methods for literacy learning. Concludes with suggestions for future…

  14. IRTs of the ABCs: Children's Letter Name Acquisition

    ERIC Educational Resources Information Center

    Phillips, Beth M.; Piasta, Shayne B.; Anthony, Jason L.; Lonigan, Christopher J.; Francis, David J.

    2012-01-01

    We examined the developmental sequence of letter name knowledge acquisition by children from two to five years of age. Data from 2 samples representing diverse regions, ethnicity, and socioeconomic backgrounds (ns=1074 and 500) were analyzed using item response theory (IRT) and differential item functioning techniques. Results from factor analyses…

  15. Interlanguage Phonology: Acquisition of Timing Control in Japanese.

    ERIC Educational Resources Information Center

    Toda, Takako

    1994-01-01

    Studies the acquisition of timing control by Australians enrolled in first-year Japanese. Instrumental techniques are used to observe segment duration and pitch patterns in the speech production of learners and native speakers. Results indicate the learners can control timing, but their phonetic realization differs from that of native speakers.…

  16. A scalable parallel black oil simulator on distributed memory parallel computers

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Liu, Hui; Chen, Zhangxin

    2015-11-01

    This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.

  17. Automatic Generation of OpenMP Directives and Its Application to Computational Fluid Dynamics Codes

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Jin, Haoqiang; Frumkin, Michael; Yan, Jerry (Technical Monitor)

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs written with compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline techniques used in the implementation of the tool and discuss the application of this tool to the NAS Parallel Benchmarks and several computational fluid dynamics codes. This work demonstrates the great potential of using the tool to quickly port parallel programs and also achieve good performance exceeding that of some commercial tools.

  18. 48 CFR 852.273-71 - Alternative negotiation techniques.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Alternative negotiation....273-71 Alternative negotiation techniques. As prescribed in 873.110(b), insert the following provision: Alternative Negotiation Techniques (JAN 2003) The contracting officer may elect to use the alternative...

  19. 48 CFR 852.273-71 - Alternative negotiation techniques.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Alternative negotiation....273-71 Alternative negotiation techniques. As prescribed in 873.110(b), insert the following provision: Alternative Negotiation Techniques (JAN 2003) The contracting officer may elect to use the alternative...

  20. 48 CFR 852.273-71 - Alternative negotiation techniques.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Alternative negotiation....273-71 Alternative negotiation techniques. As prescribed in 873.110(b), insert the following provision: Alternative Negotiation Techniques (JAN 2003) The contracting officer may elect to use the alternative...
