Sample records for parallel high resolution

  1. OpenMP parallelization of a gridded SWAT (SWATG)

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

    Large-scale, long-term, high spatial resolution simulation is a common challenge in environmental modeling. A Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG), which integrates a grid modeling scheme with different spatial representations, faces the same challenge: long run times limit its application to very high resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (the result is called SWATGP) to accelerate grid modeling at the HRU level. This parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was up to nine times faster than SWATG in modeling a roughly 2000 km2 watershed on a single CPU with a 15-thread configuration. The results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and high resolutions. The proposed SWATGP model is thus a promising tool for large-scale, high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.
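The HRU-level loop the authors parallelize can be pictured with a shared-memory sketch. This is an illustrative stand-in, not SWATG's code: the `route_hru` water balance below is a hypothetical SCS curve-number calculation, and Python threads play the role of OpenMP's fork-join workers.

```python
from concurrent.futures import ThreadPoolExecutor

def route_hru(hru):
    # Hypothetical per-HRU runoff (SCS curve-number style), standing in for
    # SWATG's real water-balance routines.
    precip, cn = hru
    s = 1000.0 / cn - 10.0          # potential retention
    if precip <= 0.2 * s:
        return 0.0
    return (precip - 0.2 * s) ** 2 / (precip + 0.8 * s)

def simulate(hrus, threads=1):
    # HRU-level parallelism as in the paper's OpenMP loop: within a time step
    # each HRU is independent, so the loop is split across worker threads.
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return list(pool.map(route_hru, hrus))
```

Because the per-HRU results are independent, the threaded run returns exactly the sequential answer, which is what makes the loop safe to parallelize.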

  2. High-Resolution Functional Mapping of the Venezuelan Equine Encephalitis Virus Genome by Insertional Mutagenesis and Massively Parallel Sequencing

    DTIC Science & Technology

    2010-10-14

    High-Resolution Functional Mapping of the Venezuelan Equine Encephalitis Virus Genome by Insertional Mutagenesis and Massively Parallel Sequencing ... Venezuelan equine encephalitis virus (VEEV) genome. We initially used a capillary electrophoresis method to gain insight into the role of the VEEV ... Smith JM, Schmaljohn CS (2010) High-Resolution Functional Mapping of the Venezuelan Equine Encephalitis Virus Genome by Insertional Mutagenesis and ...

  3. Parallel Spectral Acquisition with an Ion Cyclotron Resonance Cell Array.

    PubMed

    Park, Sung-Gun; Anderson, Gordon A; Navare, Arti T; Bruce, James E

    2016-01-19

    Mass measurement accuracy is a critical analytical figure-of-merit in most areas of mass spectrometry application. However, the time required for acquisition of high-resolution, high mass accuracy data limits many applications and is an aspect under continual pressure for development. Current efforts target implementation of higher electrostatic and magnetic fields because ion oscillatory frequencies increase linearly with field strength. As such, the time required for spectral acquisition of a given resolving power and mass accuracy decreases linearly with increasing fields. Mass spectrometer developments to include multiple high-resolution detectors that can be operated in parallel could further decrease the acquisition time by a factor of n, the number of detectors. Efforts described here resulted in development of an instrument with a set of Fourier transform ion cyclotron resonance (ICR) cells as detectors that constitute the first MS array capable of parallel high-resolution spectral acquisition. ICR cell array systems consisting of three or five cells were constructed with printed circuit boards and installed within a single superconducting magnet and vacuum system. Independent ion populations were injected and trapped within each cell in the array. Upon filling the array, all ions in all cells were simultaneously excited and ICR signals from each cell were independently amplified and recorded in parallel. Presented here are the initial results of successful parallel spectral acquisition, parallel mass spectrometry (MS) and MS/MS measurements, and parallel high-resolution acquisition with the MS array system.

  4. A Parallel, Multi-Scale Watershed-Hydrologic-Inundation Model with Adaptively Switching Mesh for Capturing Flooding and Lake Dynamics

    NASA Astrophysics Data System (ADS)

    Ji, X.; Shen, C.

    2017-12-01

    Flood inundation presents substantial societal hazards and also changes biogeochemistry in systems like the Amazon. Simulating high-resolution flood inundation and propagation in a long-term, watershed-scale model is often expensive: owing to the Courant-Friedrichs-Lewy (CFL) restriction, high resolution and large local flow velocities together demand prohibitively small time steps, even for parallel codes. Here we develop a parallel, surface-subsurface, process-based model enhanced by multi-resolution meshes that are adaptively switched on or off. The high-resolution overland flow meshes are enabled only when the flood wave invades the floodplains. The model applies a semi-implicit, semi-Lagrangian (SISL) scheme to solve the dynamic wave equations and, with the assistance of the multi-mesh method, adaptively invokes the dynamic wave equation only in areas of deep inundation. The model thereby achieves a balance between accuracy and computational cost.
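The CFL restriction mentioned above can be made concrete with a toy calculation (the numbers are illustrative, not from the paper): shrinking the cell size at a given flow speed shrinks the maximum stable explicit time step in proportion, which is why fine floodplain meshes are only affordable when enabled locally, and why the SISL scheme, which relaxes this limit, pays off.

```python
def cfl_dt(dx, velocity, courant=0.9):
    # CFL limit for an explicit scheme: in one step the flood wave may cross
    # at most `courant` of a cell, so dt <= courant * dx / v.
    return courant * dx / velocity

# Illustrative numbers: 500 m cells vs 10 m floodplain cells, 2 m/s wave speed.
coarse_dt = cfl_dt(500.0, 2.0)   # 225 s
fine_dt = cfl_dt(10.0, 2.0)      # 4.5 s
```

A fiftyfold refinement forces fifty times as many time steps over the same simulated period, on top of the larger cell count.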

  5. Resolutions of the Coulomb operator: VIII. Parallel implementation using the modern programming language X10.

    PubMed

    Limpanuparb, Taweetham; Milthorpe, Josh; Rendell, Alistair P

    2014-10-30

    Use of the modern parallel programming language X10 for computing long-range Coulomb and exchange interactions is presented. By using X10, a partitioned global address space language with support for task parallelism and the explicit representation of data locality, the resolution of the Ewald operator can be parallelized in a straightforward manner, including use of both intranode and internode parallelism. We evaluate four different schemes for dynamic load balancing of the integral calculation using X10's work-stealing runtime, and report performance results for long-range HF energy calculations of a large molecule with a high-quality basis set running on up to 1024 cores of a high-performance cluster machine. Copyright © 2014 Wiley Periodicals, Inc.
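The dynamic load balancing the authors evaluate can be sketched with a shared task queue. X10's work-stealing runtime is far richer (per-place deques, steal policies), so treat this as a minimal stand-in in which idle workers simply pull the next batch of integrals; `cost` is a hypothetical per-batch workload function.

```python
import queue
import threading

def dynamic_balance(batches, cost, workers=4):
    # Idle workers pull the next batch from a shared queue, so uneven batch
    # costs (as in screened integral calculations) do not leave threads idle.
    q = queue.Queue()
    for b in batches:
        q.put(b)
    totals = [0.0] * workers          # each worker writes only its own slot
    def worker(i):
        while True:
            try:
                b = q.get_nowait()
            except queue.Empty:
                return
            totals[i] += cost(b)
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(totals)
```

The total work is identical to a static partitioning; only the assignment of batches to workers adapts to their actual cost.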

  6. Parallel Reaction Monitoring: A Targeted Experiment Performed Using High Resolution and High Mass Accuracy Mass Spectrometry

    PubMed Central

    Rauniyar, Navin

    2015-01-01

    The parallel reaction monitoring (PRM) assay has emerged as an alternative method of targeted quantification. The PRM assay is performed in a high resolution and high mass accuracy mode on a mass spectrometer. This review presents the features that make PRM a highly specific and selective method for targeted quantification using quadrupole-Orbitrap hybrid instruments. In addition, this review discusses the label-based and label-free methods of quantification that can be performed with the targeted approach. PMID:26633379

  7. Fast I/O for Massively Parallel Applications

    NASA Technical Reports Server (NTRS)

    O'Keefe, Matthew T.

    1996-01-01

    The two primary goals of this report were the design, construction, and modeling of parallel disk arrays for scientific visualization and animation, and a study of the I/O requirements of highly parallel applications. In addition, we report further work on the parallel display systems required to project and animate the very high-resolution frames resulting from our supercomputing simulations of ocean circulation and compressible gas dynamics.

  8. Robust High-Resolution Cloth Using Parallelism, History-Based Collisions and Accurate Friction

    PubMed Central

    Selle, Andrew; Su, Jonathan; Irving, Geoffrey; Fedkiw, Ronald

    2015-01-01

    In this paper we simulate high-resolution cloth consisting of up to 2 million triangles, which allows us to achieve highly detailed folds and wrinkles. Since the level of detail is also influenced by object collision and self-collision, we propose a more accurate model for cloth-object friction. We also propose a robust history-based repulsion/collision framework in which repulsions are treated accurately and efficiently on a per-time-step basis. Distributed-memory parallelism is used for both time evolution and collisions, and we specifically address Gauss-Seidel ordering of the repulsion/collision response. The algorithm is demonstrated by several high-resolution, high-fidelity simulations. PMID:19147895

  9. Determination of accurate 1H positions of an alanine tripeptide with anti-parallel and parallel β-sheet structures by high resolution 1H solid state NMR and GIPAW chemical shift calculation.

    PubMed

    Yazawa, Koji; Suzuki, Furitsu; Nishiyama, Yusuke; Ohata, Takuya; Aoki, Akihiro; Nishimura, Katsuyuki; Kaji, Hironori; Shimizu, Tadashi; Asakura, Tetsuo

    2012-11-25

    The accurate 1H positions of the alanine tripeptide, A3, with anti-parallel and parallel β-sheet structures could be determined by highly resolved 1H DQMAS solid-state NMR spectra and 1H chemical shift calculations with the gauge-including projector augmented wave (GIPAW) method.

  10. Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.

    PubMed

    Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang

    2017-01-01

    Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of the simulations are multi-resolution spatiotemporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs at different resolutions constitute a multi-resolution, high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs, are crucial to domain scientists. The multi-resolution, high-dimensional parameter space, however, presents a unique challenge to existing correlation visualization techniques. We present the Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plot that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatiotemporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, on real-world use cases from our collaborators in computational and predictive science.

  11. Parallel Computation and Visualization of Three-dimensional, Time-dependent, Thermal Convective Flows

    NASA Technical Reports Server (NTRS)

    Wang, P.; Li, P.

    1998-01-01

    A high-resolution numerical study of three-dimensional, time-dependent, thermal convective flows on parallel systems is reported. A parallel implementation of the finite volume method with a multigrid scheme is discussed, and a parallel visualization system is developed on distributed systems for visualizing the flow.

  12. Superresolution parallel magnetic resonance imaging: Application to functional and spectroscopic imaging

    PubMed Central

    Otazo, Ricardo; Lin, Fa-Hsuan; Wiggins, Graham; Jordan, Ramiro; Sodickson, Daniel; Posse, Stefan

    2009-01-01

    Standard parallel magnetic resonance imaging (MRI) techniques suffer from residual aliasing artifacts when the coil sensitivities vary within the image voxel. In this work, a parallel MRI approach known as Superresolution SENSE (SURE-SENSE) is presented in which acceleration is performed by acquiring only the central region of k-space instead of increasing the sampling distance over the complete k-space matrix and reconstruction is explicitly based on intra-voxel coil sensitivity variation. In SURE-SENSE, parallel MRI reconstruction is formulated as a superresolution imaging problem where a collection of low resolution images acquired with multiple receiver coils are combined into a single image with higher spatial resolution using coil sensitivities acquired with high spatial resolution. The effective acceleration of conventional gradient encoding is given by the gain in spatial resolution, which is dictated by the degree of variation of the different coil sensitivity profiles within the low resolution image voxel. Since SURE-SENSE is an ill-posed inverse problem, Tikhonov regularization is employed to control noise amplification. Unlike standard SENSE, for which acceleration is constrained to the phase-encoding dimension/s, SURE-SENSE allows acceleration along all encoding directions — for example, two-dimensional acceleration of a 2D echo-planar acquisition. SURE-SENSE is particularly suitable for low spatial resolution imaging modalities such as spectroscopic imaging and functional imaging with high temporal resolution. Application to echo-planar functional and spectroscopic imaging in human brain is presented using two-dimensional acceleration with a 32-channel receiver coil. PMID:19341804
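The Tikhonov step can be illustrated on a toy two-unknown system. This is a generic regularized least-squares sketch, not the SURE-SENSE reconstruction itself: `A` stands in for the coil-sensitivity-weighted encoding matrix and `lam` for the regularization weight that controls noise amplification.

```python
def tikhonov_2x2(A, b, lam):
    # Solve min ||Ax - b||^2 + lam * ||x||^2 for a 2-unknown toy system via
    # the normal equations (A^T A + lam * I) x = A^T b. In SURE-SENSE the
    # unknowns are sub-voxel image values; lam trades bias for noise control.
    ata = [[sum(A[k][i] * A[k][j] for k in range(len(A))) + (lam if i == j else 0.0)
            for j in range(2)] for i in range(2)]
    atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x0 = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    x1 = (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det
    return [x0, x1]
```

At lam = 0 a consistent system is recovered exactly; increasing lam shrinks the solution toward zero, suppressing the noise amplification inherent in the ill-posed inversion.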

  13. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, computational efficiency remains the main problem in producing 3D, high-resolution images from real large-scale seismic data. In this paper, we propose a division method for large-scale 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). We then design an imaging-point parallel strategy to achieve optimal parallel computing performance, and adopt an asynchronous double-buffering scheme with multiple streams to carry out the GPU/CPU parallel computing. Moreover, several key optimization strategies for computation and storage based on the compute unified device architecture (CUDA) are adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the key optimization steps, including thread optimization, shared-memory optimization, register optimization, and use of the special function units (SFUs), greatly improved efficiency. A numerical example employing real large-scale 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging of large-scale seismic data.
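The asynchronous double-buffering idea can be sketched in miniature: while one chunk is being processed, the next is transferred in the background. Here Python threads stand in for CUDA streams, and `transfer`/`compute` are hypothetical callables representing host-to-device copies and kernel launches; none of this is the paper's actual code.

```python
import threading

def process_chunks(chunks, transfer, compute):
    # Double buffering: compute on buffer A while prefetching chunk i+1 into
    # buffer B in the background, hiding transfer time behind computation.
    if not chunks:
        return []
    results = []
    buf = transfer(chunks[0])
    for i in range(len(chunks)):
        nxt = {}
        t = None
        if i + 1 < len(chunks):
            t = threading.Thread(
                target=lambda: nxt.setdefault("b", transfer(chunks[i + 1])))
            t.start()
        results.append(compute(buf))
        if t is not None:
            t.join()
            buf = nxt["b"]
    return results
```

If transfer and compute take comparable time, the overlap roughly halves the wall-clock time relative to a strictly sequential copy-then-compute loop.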

  14. Development of high-resolution x-ray CT system using parallel beam geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoneyama, Akio, E-mail: akio.yoneyama.bu@hitachi.com; Baba, Rika; Hyodo, Kazuyuki

    2016-01-28

    For fine three-dimensional observations of large biomedical and organic material samples, we developed a high-resolution X-ray CT system. The system consists of a sample positioner, a 5-μm scintillator, microscopy lenses, and a water-cooled sCMOS detector. Parallel beam geometry was adopted to attain a field of view of a few mm square. A fine three-dimensional image of a birch branch was obtained using 9-keV X-rays at BL16XU of SPring-8 in Japan. The spatial resolution estimated from the line profile of a sectional image was about 3 μm.

  15. Parallelization and Algorithmic Enhancements of High Resolution IRAS Image Construction

    NASA Technical Reports Server (NTRS)

    Cao, Yu; Prince, Thomas A.; Tereby, Susan; Beichman, Charles A.

    1996-01-01

    The Infrared Astronomical Satellite (IRAS) carried out a nearly complete survey of the infrared sky, and the survey data are important for the study of many astrophysical phenomena. However, many data sets at other wavelengths have higher resolutions than that of the co-added IRAS maps, and high-resolution IRAS images are strongly desired both for their own information content and for their usefulness in correlation studies. The HIRES program was developed by the Infrared Processing and Analysis Center (IPAC) to produce high-resolution (approx. 1') images from IRAS data using the Maximum Correlation Method (MCM). We describe the port of HIRES to the Intel Paragon, a massively parallel supercomputer; other software developments for mass production of HIRES images; and the IRAS Galaxy Atlas, a project to map the Galactic plane at 60 and 100 μm.

  16. Fiber optic cable-based high-resolution, long-distance VGA extenders

    NASA Astrophysics Data System (ADS)

    Rhee, Jin-Geun; Lee, Iksoo; Kim, Heejoon; Kim, Sungjoon; Koh, Yeon-Wan; Kim, Hoik; Lim, Jiseok; Kim, Chur; Kim, Jungwon

    2013-02-01

    Remote transfer of high-resolution video information is finding more use in detached-display applications for large facilities such as theaters, sports complexes, airports, and security facilities. Active optical cables (AOCs) provide a promising approach for extending both the transmittable resolution and the distance beyond what standard copper-based cables can reach. In addition to standard digital formats such as HDMI, high-resolution, long-distance transfer of VGA-format signals is important for applications where high-resolution analog video ports must also be supported, such as military/defense applications and high-resolution video camera links. In this presentation we describe the development of a compressionless, high-resolution (up to WUXGA, 1920x1200), long-distance (up to 2 km) VGA extender based on a serialization technique. We employed asynchronous serial transmission and clock regeneration techniques, which enable lower-cost implementation of VGA extenders by removing the need for clock transmission and large memory at the receiver. Two 3.125-Gbps transceivers are used in parallel to meet the required maximum video data rate of 6.25 Gbps. As the data are transmitted asynchronously, a 24-bit pixel clock time stamp is employed to regenerate the video pixel clock accurately at the receiver side. In parallel with the video information, stereo audio and RS-232 control signals are transmitted as well.

  17. Implementation of parallel transmit beamforming using orthogonal frequency division multiplexing--achievable resolution and interbeam interference.

    PubMed

    Demi, Libertario; Viti, Jacopo; Kusters, Lieneke; Guidi, Francesco; Tortoli, Piero; Mischi, Massimo

    2013-11-01

    The speed of sound in the human body limits the achievable data acquisition rate of pulsed ultrasound scanners. To overcome this limitation, parallel beamforming techniques are used in ultrasound 2-D and 3-D imaging systems. The proposed approaches fall into two major categories: parallel beamforming in reception and parallel beamforming in transmission. The first category is not optimal for harmonic imaging; the second is more easily applied to harmonic imaging, but interbeam interference is an issue. To overcome these shortcomings and exploit the benefit of combining harmonic imaging with a high data acquisition rate, an approach was recently presented that relies on orthogonal frequency division multiplexing (OFDM) to perform parallel beamforming in transmission. In this paper, parallel transmit beamforming using OFDM is implemented for the first time on an ultrasound scanner. An advanced open platform for ultrasound research is used to investigate the axial resolution and interbeam interference achievable with parallel transmit beamforming using OFDM. Both fundamental and second-harmonic imaging modalities are considered. Results show that, for fundamental imaging, axial resolution on the order of 2 mm can be achieved in combination with interbeam interference on the order of -30 dB. For second-harmonic imaging, axial resolution on the order of 1 mm can be achieved in combination with interbeam interference on the order of -35 dB.
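The OFDM principle behind the method is that tones spaced by an integer multiple of 1/T are orthogonal over one symbol period T, so simultaneously transmitted beams on different subcarriers can be separated at reception. A small numeric check (illustrative frequencies, not the scanner's):

```python
import math

def inner(f1, f2, T=1.0, n=10000):
    # Discrete inner product of two tones over one symbol period T.
    dt = T / n
    return sum(math.sin(2 * math.pi * f1 * k * dt) *
               math.sin(2 * math.pi * f2 * k * dt) for k in range(n)) * dt

# Subcarriers spaced by 1/T are mutually orthogonal, which is what keeps the
# interbeam interference low when each beam carries its own band:
same = inner(3.0, 3.0)   # ~T/2: a tone correlates with itself
ortho = inner(3.0, 4.0)  # ~0: adjacent subcarriers cancel over a full period
```

Pulse shaping and tissue nonlinearity degrade this ideal cancellation in practice, which is why the paper measures residual interbeam interference rather than assuming zero.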

  18. Parallelized multi–graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy

    PubMed Central

    Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.

    2014-01-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868
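The per-GPU batching described above can be sketched as a simple planner. The round-robin policy and fixed batch size here are illustrative assumptions; the real framework tunes the A-scan batch per GPU to fill its memory and cores.

```python
def assign_ascans(total, gpus, batch):
    # Round-robin assignment of A-scan index ranges [start, end) to GPUs;
    # `batch` is the per-launch A-scan count tuned per device.
    plans = [[] for _ in range(gpus)]
    start, g = 0, 0
    while start < total:
        end = min(start + batch, total)
        plans[g].append((start, end))
        g = (g + 1) % gpus
        start = end
    return plans
```

Every A-scan lands in exactly one range, so per-GPU results can be concatenated back into the original order after processing.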

  19. Parallelized multi-graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy.

    PubMed

    Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P

    2014-07-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6  mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.

  20. Compact holographic optical neural network system for real-time pattern recognition

    NASA Astrophysics Data System (ADS)

    Lu, Taiwei; Mintzer, David T.; Kostrzewski, Andrew A.; Lin, Freddie S.

    1996-08-01

    One of the important characteristics of artificial neural networks is their capability for massive interconnection and parallel processing. Recently, specialized electronic neural network processors and VLSI neural chips have been introduced to the commercial market. The number of parallel channels they can handle is limited by the restricted parallel interconnections that can be implemented with 1D electronic wires. High-resolution pattern recognition problems can require a large number of neurons for parallel processing of an image. This paper describes a holographic optical neural network (HONN) that is based on high-resolution volume holographic materials and is capable of performing massive 3D parallel interconnection of tens of thousands of neurons. A HONN with more than 16,000 neurons packaged in an attaché case has been developed. Rotation-, shift-, and scale-invariant pattern recognition operations have been demonstrated with this system. System parameters such as the signal-to-noise ratio, dynamic range, and processing speed are discussed.

  1. Design and Performance of a 1 ms High-Speed Vision Chip with 3D-Stacked 140 GOPS Column-Parallel PEs.

    PubMed

    Nose, Atsushi; Yamazaki, Tomohiro; Katayama, Hironobu; Uehara, Shuji; Kobayashi, Masatsugu; Shida, Sayaka; Odahara, Masaki; Takamiya, Kenichi; Matsumoto, Shizunori; Miyashita, Leo; Watanabe, Yoshihiro; Izawa, Takashi; Muramatsu, Yoshinori; Nitta, Yoshikazu; Ishikawa, Masatoshi

    2018-04-24

    We have developed a high-speed vision chip using 3D stacking technology to address the increasing demand for high-speed vision chips in diverse applications. The chip comprises a 1/3.2-inch, 1.27 Mpixel, 500 fps (0.31 Mpixel, 1000 fps, 2 × 2 binning) vision chip with 3D-stacked column-parallel Analog-to-Digital Converters (ADCs) and 140 Giga Operations per Second (GOPS) programmable Single Instruction Multiple Data (SIMD) column-parallel PEs for new sensing applications. The 3D-stacked structure and column-parallel processing architecture achieve high sensitivity, high resolution, and high-accuracy object positioning.

  2. Improving the spatial accuracy in functional magnetic resonance imaging (fMRI) based on the blood oxygenation level dependent (BOLD) effect: benefits from parallel imaging and a 32-channel head array coil at 1.5 Tesla.

    PubMed

    Fellner, C; Doenitz, C; Finkenzeller, T; Jung, E M; Rennert, J; Schlaier, J

    2009-01-01

    Geometric distortions and low spatial resolution are current limitations in functional magnetic resonance imaging (fMRI). The aim of this study was to evaluate if application of parallel imaging or significant reduction of voxel size in combination with a new 32-channel head array coil can reduce those drawbacks at 1.5 T for a simple hand motor task. Therefore, maximum t-values (tmax) in different regions of activation, time-dependent signal-to-noise ratios (SNR(t)) as well as distortions within the precentral gyrus were evaluated. Comparing fMRI with and without parallel imaging in 17 healthy subjects revealed significantly reduced geometric distortions in anterior-posterior direction. Using parallel imaging, tmax only showed a mild reduction (7-11%) although SNR(t) was significantly diminished (25%). In 7 healthy subjects high-resolution (2 x 2 x 2 mm3) fMRI was compared with standard fMRI (3 x 3 x 3 mm3) in a 32-channel coil and with high-resolution fMRI in a 12-channel coil. The new coil yielded a clear improvement for tmax (21-32%) and SNR(t) (51%) in comparison with the 12-channel coil. Geometric distortions were smaller due to the smaller voxel size. Therefore, the reduction in tmax (8-16%) and SNR(t) (52%) in the high-resolution experiment seems to be tolerable with this coil. In conclusion, parallel imaging is an alternative to reduce geometric distortions in fMRI at 1.5 T. Using a 32-channel coil, reduction of the voxel size might be the preferable way to improve spatial accuracy.
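The trade-offs in this study follow a standard first-order SNR model for parallel MRI, sketched below. The model is a rough guide only: it ignores relaxation and readout effects, and the numbers in the example are illustrative rather than the study's measurements.

```python
import math

def relative_snr(voxel_mm, R=1, g=1.0, ref_voxel_mm=3.0):
    # First-order parallel-MRI model: SNR scales with voxel volume and is
    # divided by g * sqrt(R) for acceleration factor R (g >= 1 is the
    # coil-geometry factor).
    return (voxel_mm / ref_voxel_mm) ** 3 / (g * math.sqrt(R))
```

Isotropic 2 mm voxels retain only (2/3)^3, about 30%, of the 3 mm SNR, which is why the higher baseline SNR of a 32-channel coil is what makes the high-resolution protocol tolerable.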

  3. Capabilities of Fully Parallelized MHD Stability Code MARS

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2016-10-01

    Results of full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. Parallel version of MARS, named PMARS, has been recently developed at FAR-TECH. Parallelized MARS is an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both fluid and kinetic plasma models, implemented in MARS. Parallelization of the code included parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse vector iterations algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the MARS algorithm using parallel libraries and procedures. Parallelized MARS is capable of calculating eigenmodes with significantly increased spatial resolution: up to 5,000 adapted radial grid points with up to 500 poloidal harmonics. Such resolution is sufficient for simulation of kink, tearing and peeling-ballooning instabilities with physically relevant parameters. Work is supported by the U.S. DOE SBIR program.
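The inverse vector iteration that PMARS parallelizes can be shown on a 2×2 toy matrix. The sketch below uses a direct 2×2 solve and a Rayleigh-quotient eigenvalue estimate; MARS instead solves a large system distributed over magnetic surfaces, so this is only the shape of the algorithm.

```python
def inverse_iteration(A, shift, iters=50):
    # Inverse vector iteration: repeatedly solve (A - shift*I) w = v and
    # normalize; v converges to the eigenvector whose eigenvalue lies
    # closest to `shift`.
    a, b = A[0][0] - shift, A[0][1]
    c, d = A[1][0], A[1][1] - shift
    det = a * d - b * c
    v = [1.0, 1.0]
    for _ in range(iters):
        w = [(d * v[0] - b * v[1]) / det, (-c * v[0] + a * v[1]) / det]
        norm = max(abs(w[0]), abs(w[1]))
        v = [w[0] / norm, w[1] / norm]
    # Rayleigh quotient estimate of the corresponding eigenvalue:
    av = [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]
    return (av[0] * v[0] + av[1] * v[1]) / (v[0] ** 2 + v[1] ** 2), v
```

Choosing the shift close to a target mode is what lets the method pick out individual kink, tearing, or peeling-ballooning eigenmodes from a dense spectrum.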

  4. Heterodyne frequency-domain multispectral diffuse optical tomography of breast cancer in the parallel-plane transmission geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ban, H. Y.; Kavuri, V. C., E-mail: venk@physics.up

    Purpose: The authors introduce a state-of-the-art all-optical clinical diffuse optical tomography (DOT) imaging instrument which collects spatially dense, multispectral, frequency-domain breast data in the parallel-plate geometry. Methods: The instrument utilizes a CCD-based heterodyne detection scheme that permits massively parallel detection of diffuse photon density wave amplitude and phase for a large number of source-detector pairs (10^6). The stand-alone clinical DOT instrument thus offers high spatial resolution with reduced crosstalk between absorption and scattering. Other novel features include a fringe profilometry system for breast boundary segmentation, real-time data normalization, and a patient bed design which permits both axial and sagittal breast measurements. Results: The authors validated the instrument using tissue-simulating phantoms with two different chromophore-containing targets and one scattering target. The authors also demonstrated the instrument in a case study of a breast cancer patient; the reconstructed 3D image of endogenous chromophores and scattering gave a tumor localization in agreement with MRI. Conclusions: Imaging with a novel parallel-plate DOT breast imager that employs highly parallel, high-resolution CCD detection in the frequency domain was demonstrated.

  5. Anti-parallel EUV Flows Observed along Active Region Filament Threads with Hi-C

    NASA Astrophysics Data System (ADS)

    Alexander, Caroline E.; Walsh, Robert W.; Régnier, Stéphane; Cirtain, Jonathan; Winebarger, Amy R.; Golub, Leon; Kobayashi, Ken; Platt, Simon; Mitchell, Nick; Korreck, Kelly; DePontieu, Bart; DeForest, Craig; Weber, Mark; Title, Alan; Kuzin, Sergey

    2013-09-01

    Plasma flows within prominences/filaments have been observed for many years and hold valuable clues concerning the mass and energy balance within these structures. Previous observations of these flows come primarily from Hα and cool extreme-ultraviolet (EUV) lines (e.g., 304 Å), where estimates of the size of the prominence threads have been limited by the resolution of the available instrumentation. Evidence of "counter-streaming" flows has previously been inferred from these cool plasma observations, but now, for the first time, these flows have been directly imaged along fundamental filament threads within the million-degree corona (at 193 Å). In this work, we present observations of an active region filament observed with the High-resolution Coronal Imager (Hi-C) that exhibits anti-parallel flows along adjacent filament threads. Complementary data from the Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA) and Helioseismic and Magnetic Imager are presented. The ultra-high spatial and temporal resolution of Hi-C allows the anti-parallel flow velocities to be measured (70-80 km s-1) and gives an indication of the resolvable thickness of the individual strands (0.8″ ± 0.1″). The temperature of the plasma flows was estimated to be log T (K) = 5.45 ± 0.10 using Emission Measure loci analysis. We find that SDO/AIA cannot clearly observe these anti-parallel flows or measure their velocity or thread width because of its larger pixel size. We suggest that anti-parallel/counter-streaming flows are likely commonplace within all filaments and are currently not observed in EUV because of the limited spatial resolution of current instruments.

  6. Imaging resolution and properties analysis of super resolution microscopy with parallel detection under different noise, detector and image restoration conditions

    NASA Astrophysics Data System (ADS)

    Yu, Zhongzhi; Liu, Shaocong; Sun, Shiyi; Kuang, Cuifang; Liu, Xu

    2018-06-01

Parallel detection, which can use the additional information of a pinhole plane image taken at every excitation scan position, could be an efficient method to enhance the resolution of a confocal laser scanning microscope. In this paper, we discuss images obtained under different conditions and using different image restoration methods with parallel detection to quantitatively compare the imaging quality. The conditions include different noise levels and different detector array settings. The image restoration methods include linear deconvolution and pixel reassignment with Richardson-Lucy deconvolution and with maximum-likelihood estimation deconvolution. The results show that linear deconvolution offers high efficiency and the best performance under all tested conditions, and is therefore expected to be of use for future biomedical routine research.
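The pixel reassignment idea discussed in this record can be illustrated with a minimal sketch (not the authors' implementation): each detector element's image is shifted back by a fraction of its offset and the results summed. The function name, integer-pixel offsets, and reassignment factor below are illustrative assumptions.

```python
import numpy as np

def pixel_reassignment(images, s=0.5):
    """Sum detector-element images after shifting each by -s * its offset.

    images : dict mapping (dy, dx) detector offsets (in pixels) to 2-D arrays
    s      : reassignment factor (0.5 is the usual choice for image scanning
             microscopy with small detector elements)
    """
    out = np.zeros_like(next(iter(images.values())), dtype=float)
    for (dy, dx), img in images.items():
        # np.roll approximates the shift for integer pixel offsets
        out += np.roll(np.roll(img, int(round(-s * dy)), axis=0),
                       int(round(-s * dx)), axis=1)
    return out
```

For a point source, the image seen by an off-axis detector element appears displaced by roughly half the element's offset; shifting by -s*offset with s = 0.5 realigns the copies so they add constructively.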

  7. Time-Resolved 3D Quantitative Flow MRI of the Major Intracranial Vessels: Initial Experience and Comparative Evaluation at 1.5T and 3.0T in Combination With Parallel Imaging

    PubMed Central

    Bammer, Roland; Hope, Thomas A.; Aksoy, Murat; Alley, Marcus T.

    2012-01-01

    Exact knowledge of blood flow characteristics in the major cerebral vessels is of great relevance for diagnosing cerebrovascular abnormalities. This involves the assessment of hemodynamically critical areas as well as the derivation of biomechanical parameters such as wall shear stress and pressure gradients. A time-resolved, 3D phase-contrast (PC) MRI method using parallel imaging was implemented to measure blood flow in three dimensions at multiple instances over the cardiac cycle. The 4D velocity data obtained from 14 healthy volunteers were used to investigate dynamic blood flow with the use of multiplanar reformatting, 3D streamlines, and 4D particle tracing. In addition, the effects of magnetic field strength, parallel imaging, and temporal resolution on the data were investigated in a comparative evaluation at 1.5T and 3T using three different parallel imaging reduction factors and three different temporal resolutions in eight of the 14 subjects. Studies were consistently performed faster at 3T than at 1.5T because of better parallel imaging performance. A high temporal resolution (65 ms) was required to follow dynamic processes in the intracranial vessels. The 4D flow measurements provided a high degree of vascular conspicuity. Time-resolved streamline analysis provided features that have not been reported previously for the intracranial vasculature. PMID:17195166

  8. Flow chemistry and polymer-supported pseudoenantiomeric acylating agents enable parallel kinetic resolution of chiral saturated N-heterocycles

    NASA Astrophysics Data System (ADS)

    Kreituss, Imants; Bode, Jeffrey W.

    2017-05-01

    Kinetic resolution is a common method to obtain enantioenriched material from a racemic mixture. This process will deliver enantiopure unreacted material when the selectivity factor of the process, s, is greater than 1; however, the scalemic reaction product is often discarded. Parallel kinetic resolution, on the other hand, provides access to two enantioenriched products from a single racemic starting material, but suffers from a variety of practical challenges regarding experimental design that limit its applications. Here, we describe the development of a flow-based system that enables practical parallel kinetic resolution of saturated N-heterocycles. This process provides access to both enantiomers of the starting material in good yield and high enantiopurity; similar results with classical kinetic resolution would require selectivity factors in the range of s = 100. To achieve this, two immobilized quasienantiomeric acylating agents were designed for the asymmetric acylation of racemic saturated N-heterocycles. Using the flow-based system we could efficiently separate, recover and reuse the polymer-supported reagents. The amide products could be readily separated and hydrolysed to the corresponding amines without detectable epimerization.

  9. Enhanced Axial Resolution of Wide-Field Two-Photon Excitation Microscopy by Line Scanning Using a Digital Micromirror Device.

    PubMed

    Park, Jong Kang; Rowlands, Christopher J; So, Peter T C

    2017-01-01

    Temporal focusing multiphoton microscopy is a technique for performing highly parallelized multiphoton microscopy while still maintaining depth discrimination. While the conventional wide-field configuration for temporal focusing suffers from sub-optimal axial resolution, line scanning temporal focusing, implemented here using a digital micromirror device (DMD), can provide substantial improvement. The DMD-based line scanning temporal focusing technique dynamically trades off the degree of parallelization, and hence imaging speed, for axial resolution, allowing performance parameters to be adapted to the experimental requirements. We demonstrate this new instrument in calibration specimens and in biological specimens, including a mouse kidney slice.

  10. Enhanced Axial Resolution of Wide-Field Two-Photon Excitation Microscopy by Line Scanning Using a Digital Micromirror Device

    PubMed Central

    Park, Jong Kang; Rowlands, Christopher J.; So, Peter T. C.

    2017-01-01

    Temporal focusing multiphoton microscopy is a technique for performing highly parallelized multiphoton microscopy while still maintaining depth discrimination. While the conventional wide-field configuration for temporal focusing suffers from sub-optimal axial resolution, line scanning temporal focusing, implemented here using a digital micromirror device (DMD), can provide substantial improvement. The DMD-based line scanning temporal focusing technique dynamically trades off the degree of parallelization, and hence imaging speed, for axial resolution, allowing performance parameters to be adapted to the experimental requirements. We demonstrate this new instrument in calibration specimens and in biological specimens, including a mouse kidney slice. PMID:29387484

  11. ANTI-PARALLEL EUV FLOWS OBSERVED ALONG ACTIVE REGION FILAMENT THREADS WITH HI-C

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexander, Caroline E.; Walsh, Robert W.; Régnier, Stéphane

Plasma flows within prominences/filaments have been observed for many years and hold valuable clues concerning the mass and energy balance within these structures. Previous observations of these flows primarily come from Hα and cool extreme-ultraviolet (EUV) lines (e.g., 304 Å) where estimates of the size of the prominence threads have been limited by the resolution of the available instrumentation. Evidence of 'counter-streaming' flows has previously been inferred from these cool plasma observations, but now, for the first time, these flows have been directly imaged along fundamental filament threads within the million degree corona (at 193 Å). In this work, we present observations of an AR filament observed with the High-resolution Coronal Imager (Hi-C) that exhibits anti-parallel flows along adjacent filament threads. Complementary data from the Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA) and Helioseismic and Magnetic Imager are presented. The ultra-high spatial and temporal resolution of Hi-C allows the anti-parallel flow velocities to be measured (70-80 km s-1) and gives an indication of the resolvable thickness of the individual strands (0.''8 ± 0.''1). The temperature of the plasma flows was estimated to be log T (K) = 5.45 ± 0.10 using Emission Measure loci analysis. We find that SDO/AIA cannot clearly observe these anti-parallel flows or measure their velocity or thread width due to its larger pixel size. We suggest that anti-parallel/counter-streaming flows are likely commonplace within all filaments and are currently not observed in EUV due to current instrument spatial resolution.

  12. Development of Parallel Code for the Alaska Tsunami Forecast Model

    NASA Astrophysics Data System (ADS)

    Bahng, B.; Knight, W. R.; Whitmore, P.

    2014-12-01

The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communication between the domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation, the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest resolution Digital Elevation Models (DEM) used by ATFM are 1/3 arc-seconds. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase the model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results with the long term aim of tsunami forecasts from source to high resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs; and, will make possible future inclusion of new physics such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.
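The calibration step of a pre-computed approach can be sketched minimally: assuming forecasts are formed as a least-squares combination of pre-computed unit-source gauge waveforms (an assumption about the calibration scheme; the function and variable names are hypothetical, not ATFM's code):

```python
import numpy as np

def calibrate_forecast(unit_waveforms, observed):
    """Fit least-squares weights for pre-computed unit-source waveforms.

    unit_waveforms : (n_sources, n_times) array, one pre-computed gauge
                     time series per hypothetical unit source
    observed       : (n_times,) observed gauge record during the event
    Returns the source weights and the calibrated forecast time series.
    """
    A = unit_waveforms.T                        # (n_times, n_sources)
    w, *_ = np.linalg.lstsq(A, observed, rcond=None)
    return w, A @ w
```

Because the expensive propagation runs are done ahead of time, only this small linear fit has to happen during an actual alert.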

  13. A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator

    DOE PAGES

    Engelmann, Christian; Naughton, III, Thomas J.

    2016-03-22

Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different HPC architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim offers a highly accurate simulation mode for better tracking of injected MPI process failures. Furthermore, with highly accurate simulation, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.

  14. Creating, Storing, and Dumping Low and High Resolution Graphics on the Apple IIe Microcomputer System.

    ERIC Educational Resources Information Center

    Fletcher, Richard K., Jr.

    This description of procedures for dumping high and low resolution graphics using the Apple IIe microcomputer system focuses on two special hardware configurations that are commonly used in schools--the Apple Dot Matrix Printer with the Apple Parallel Interface Card, and the Imagewriter Printer with the Apple Super Serial Interface Card. Special…

  15. Efficient multi-objective calibration of a computationally intensive hydrologic model with parallel computing software in Python

    USDA-ARS?s Scientific Manuscript database

    With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...

  16. In-situ Isotopic Analysis at Nanoscale using Parallel Ion Electron Spectrometry: A Powerful New Paradigm for Correlative Microscopy

    NASA Astrophysics Data System (ADS)

    Yedra, Lluís; Eswara, Santhana; Dowsett, David; Wirtz, Tom

    2016-06-01

Isotopic analysis is of paramount importance across the entire gamut of scientific research. To advance the frontiers of knowledge, a technique for nanoscale isotopic analysis is indispensable. Secondary Ion Mass Spectrometry (SIMS) is a well-established technique for analyzing isotopes, but its spatial-resolution is fundamentally limited. Transmission Electron Microscopy (TEM) is a well-known method for high-resolution imaging down to the atomic scale. However, isotopic analysis in TEM is not possible. Here, we introduce a powerful new paradigm for in-situ correlative microscopy called Parallel Ion Electron Spectrometry, synergizing SIMS with TEM. We demonstrate this technique by distinguishing lithium carbonate nanoparticles according to the isotopic label of lithium, viz. 6Li and 7Li, and imaging them at high resolution by TEM, adding a new dimension to correlative microscopy.

  17. A mirror for lab-based quasi-monochromatic parallel x-rays

    NASA Astrophysics Data System (ADS)

    Nguyen, Thanhhai; Lu, Xun; Lee, Chang Jun; Jung, Jin-Ho; Jin, Gye-Hwan; Kim, Sung Youb; Jeon, Insu

    2014-09-01

A multilayered parabolic mirror with six W/Al bilayers was designed and fabricated to generate monochromatic parallel x-rays using a lab-based x-ray source. With this mirror, curved bright bands of reflected x-rays were observed in the x-ray images. The parallelism of the reflected x-rays was investigated using the shape of the bands. The intensity and monochromatic characteristics of the reflected x-rays were evaluated through measurements of the x-ray spectra in the band. High-intensity, nearly monochromatic, parallel x-rays, which can be used for high resolution x-ray microscopes and local radiation therapy systems, were obtained.

  18. High-resolution multi-code implementation of unsteady Navier-Stokes flow solver based on paralleled overset adaptive mesh refinement and high-order low-dissipation hybrid schemes

    NASA Astrophysics Data System (ADS)

    Li, Gaohua; Fu, Xiang; Wang, Fuxin

    2017-10-01

The low-dissipation high-order accurate hybrid upwinding/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and flow-feature-based adaptive mesh refinement (AMR), are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows at high spatial resolution. The overset grid assembly (OGA) process, based on collision detection and an implicit hole-cutting algorithm, automatically couples the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the composed multiple codes. The results for flows over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of a high-order hybrid scheme, an advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution of simulated turbulent wake eddies.

  19. Novel 16-channel receive coil array for accelerated upper airway MRI at 3 Tesla.

    PubMed

    Kim, Yoon-Chul; Hayes, Cecil E; Narayanan, Shrikanth S; Nayak, Krishna S

    2011-06-01

    Upper airway MRI can provide a noninvasive assessment of speech and swallowing disorders and sleep apnea. Recent work has demonstrated the value of high-resolution three-dimensional imaging and dynamic two-dimensional imaging and the importance of further improvements in spatio-temporal resolution. The purpose of the study was to describe a novel 16-channel 3 Tesla receive coil that is highly sensitive to the human upper airway and investigate the performance of accelerated upper airway MRI with the coil. In three-dimensional imaging of the upper airway during static posture, 6-fold acceleration is demonstrated using parallel imaging, potentially leading to capturing a whole three-dimensional vocal tract with 1.25 mm isotropic resolution within 9 sec of sustained sound production. Midsagittal spiral parallel imaging of vocal tract dynamics during natural speech production is demonstrated with 2 × 2 mm(2) in-plane spatial and 84 ms temporal resolution. Copyright © 2010 Wiley-Liss, Inc.

  20. Localized high-resolution DTI of the human midbrain using single-shot EPI, parallel imaging, and outer-volume suppression at 7 T

    PubMed Central

    Wargo, Christopher J.; Gore, John C.

    2013-01-01

    Localized high-resolution diffusion tensor images (DTI) from the midbrain were obtained using reduced field-of-view (rFOV) methods combined with SENSE parallel imaging and single-shot echo planar (EPI) acquisitions at 7 T. This combination aimed to diminish sensitivities of DTI to motion, susceptibility variations, and EPI artifacts at ultra-high field. Outer-volume suppression (OVS) was applied in DTI acquisitions at 2- and 1-mm2 resolutions, b=1000 s/mm2, and six diffusion directions, resulting in scans of 7- and 14-min durations. Mean apparent diffusion coefficient (ADC) and fractional anisotropy (FA) values were measured in various fiber tract locations at the two resolutions and compared. Geometric distortion and signal-to-noise ratio (SNR) were additionally measured and compared for reduced-FOV and full-FOV DTI scans. Up to an eight-fold data reduction was achieved using DTI-OVS with SENSE at 1 mm2, and geometric distortion was halved. The localization of fiber tracts was improved, enabling targeted FA and ADC measurements. Significant differences in diffusion properties were observed between resolutions for a number of regions suggesting that FA values are impacted by partial volume effects even at a 2-mm2 resolution. The combined SENSE DTI-OVS approach allows large reductions in DTI data acquisition and provides improved quality for high-resolution diffusion studies of the human brain. PMID:23541390

  1. Parallelization of a blind deconvolution algorithm

    NASA Astrophysics Data System (ADS)

    Matson, Charles L.; Borelli, Kathy J.

    2006-09-01

Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.
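The paper's sub-frame parallelization is not detailed in this record; as a hedged stand-in, the sketch below parallelizes standard (non-blind) Richardson-Lucy deblurring across frames with a thread pool, illustrating only the coarser frame-level parallelism. The FFT-based convolution assumes periodic boundaries, and all names are illustrative.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def _conv(a, b):
    # circular convolution via FFT (frames treated as periodic at the edges)
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def richardson_lucy(frame, psf, n_iter=20):
    """Classic Richardson-Lucy iteration with a known (not blind) PSF."""
    est = np.full_like(frame, frame.mean())
    # adjoint kernel for circular convolution: time-reversed psf
    psf_mirror = np.roll(psf[::-1, ::-1], shift=(1, 1), axis=(0, 1))
    for _ in range(n_iter):
        ratio = frame / np.maximum(_conv(est, psf), 1e-12)
        est *= _conv(ratio, psf_mirror)
    return est

def deblur_frames(frames, psf, workers=4):
    # frame-level parallelism: each frame is deblurred independently
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(lambda f: richardson_lucy(f, psf), frames))
```

A blind algorithm would alternate updates of `est` and `psf`; the per-frame independence exploited here is what makes the coarse-grained parallelization trivial.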

  2. Real-world hydrologic assessment of a fully-distributed hydrological model in a parallel computing environment

    NASA Astrophysics Data System (ADS)

    Vivoni, Enrique R.; Mascaro, Giuseppe; Mniszewski, Susan; Fasel, Patricia; Springer, Everett P.; Ivanov, Valeriy Y.; Bras, Rafael L.

    2011-10-01

A major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models are now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.
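The balanced sub-basin partitioning mentioned above can be sketched with a simple greedy least-loaded assignment by computational weight (illustrative only; tRIBS's actual method works on the channel-network graph and must also respect boundary exchanges):

```python
import heapq

def partition_subbasins(weights, n_procs):
    """Greedily assign sub-basins to the currently least-loaded processor.

    weights : dict mapping sub-basin id to a computational weight
              (e.g., its TIN node count)
    Returns a dict mapping sub-basin id to processor rank.
    """
    heap = [(0, p) for p in range(n_procs)]       # (current load, processor)
    heapq.heapify(heap)
    assignment = {}
    # placing the heaviest sub-basins first improves the greedy balance
    for basin in sorted(weights, key=weights.get, reverse=True):
        load, p = heapq.heappop(heap)
        assignment[basin] = p
        heapq.heappush(heap, (load + weights[basin], p))
    return assignment
```

This is the classic longest-processing-time heuristic; a graph-aware partitioner would additionally minimize the upstream-downstream exchanges between processors.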

  3. Efficient parallel reconstruction for high resolution multishot spiral diffusion data with low rank constraint.

    PubMed

    Liao, Congyu; Chen, Ying; Cao, Xiaozhi; Chen, Song; He, Hongjian; Mani, Merry; Jacob, Mathews; Magnotta, Vincent; Zhong, Jianhui

    2017-03-01

To propose a novel reconstruction method using parallel imaging with low rank constraint to accelerate high resolution multishot spiral diffusion imaging. The undersampled high resolution diffusion data were reconstructed based on a low rank (LR) constraint using similarities between the data of different interleaves from a multishot spiral acquisition. The self-navigated phase compensation using the low resolution phase data in the center of k-space was applied to correct shot-to-shot phase variations induced by motion artifacts. The low rank reconstruction was combined with sensitivity encoding (SENSE) for further acceleration. The efficiency of the proposed joint reconstruction framework, dubbed LR-SENSE, was evaluated through error quantifications and compared with ℓ1 regularized compressed sensing method and conventional iterative SENSE method using the same datasets. It was shown that with a same acceleration factor, the proposed LR-SENSE method had the smallest normalized sum-of-squares errors among all the compared methods in all diffusion weighted images and DTI-derived index maps, when evaluated with different acceleration factors (R = 2, 3, 4) and for all the acquired diffusion directions. Robust high resolution diffusion weighted images can be efficiently reconstructed from highly undersampled multishot spiral data with the proposed LR-SENSE method. Magn Reson Med 77:1359-1366, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
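The low rank constraint can be illustrated with the standard truncated-SVD projection (a generic sketch of the projection step such constraints rely on, not the authors' full LR-SENSE iteration), with each interleaf's data stacked as a column of a matrix:

```python
import numpy as np

def low_rank_project(M, rank):
    """Project a matrix onto the closest matrix of the given rank
    (Eckart-Young: truncate the singular value decomposition)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[rank:] = 0                     # zero out the trailing singular values
    return (U * s) @ Vt
```

In an iterative reconstruction, this projection would alternate with data-consistency and coil-sensitivity (SENSE) steps, exploiting the similarity between interleaves that makes the stacked matrix approximately low rank.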

  4. Biocellion: accelerating computer simulation of multicellular biological system models

    PubMed Central

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-01-01

    Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572
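The fill-in-the-routine interface described above can be sketched as a toy callback skeleton (all class and method names are hypothetical illustrations, not Biocellion's real API):

```python
class Model:
    """User-supplied model: fill in the bodies of pre-defined routines."""
    def init_cell(self, cell): ...
    def update_cell(self, cell, env): ...

class GridSimulator:
    """Framework side: owns the loop the user never writes."""
    def __init__(self, model, cells, env):
        self.model, self.cells, self.env = model, cells, env
        for cell in self.cells:
            self.model.init_cell(cell)

    def step(self):
        # a real framework would distribute this loop across processors;
        # the user's callbacks stay unchanged either way
        for cell in self.cells:
            self.model.update_cell(cell, self.env)
```

The design point is that parallelization lives entirely in the framework's loop, so a modeler without parallel computing expertise only writes the per-cell callbacks.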

  5. Far Infrared Imaging Spectrometer for Large Aperture Infrared Telescope System

    DTIC Science & Technology

    1985-12-01

resolution Fabry-Perot spectrometer (10^3 < Resolution < 10^4) for wavelengths from about 50 to 200 micrometers, employing extended-field diffraction-limited...photometry. The Naval Research Laboratory will provide a high resolution Far Infrared Imaging Spectrometer (FIRIS) using Fabry-Perot techniques in...detectors to provide spatial information. The Fabry-Perot uses electromagnetic coil displacement drivers with a lead-screw drive to obtain parallel

  6. Multiplexed EFPI sensors with ultra-high resolution

    NASA Astrophysics Data System (ADS)

    Ushakov, Nikolai; Liokumovich, Leonid

    2014-05-01

An investigation of the performance of multiplexed displacement sensors based on extrinsic Fabry-Perot interferometers has been carried out. We have considered serial and parallel configurations and analyzed the issues and advantages of both. We have also extended the previously developed baseline demodulation algorithm to the case of a system of multiplexed sensors. Serial and parallel multiplexing schemes have been experimentally implemented with 3 and 4 sensing elements, respectively. For both configurations the achieved baseline standard deviations were between 30 and 200 pm, which is, to the best of our knowledge, more than an order of magnitude better than any other multiplexed EFPI resolution reported to date.

  7. Parallel versus Serial Processing Dependencies in the Perisylvian Speech Network: A Granger Analysis of Intracranial EEG Data

    ERIC Educational Resources Information Center

    Gow, David W., Jr.; Keller, Corey J.; Eskandar, Emad; Meng, Nate; Cash, Sydney S.

    2009-01-01

    In this work, we apply Granger causality analysis to high spatiotemporal resolution intracranial EEG (iEEG) data to examine how different components of the left perisylvian language network interact during spoken language perception. The specific focus is on the characterization of serial versus parallel processing dependencies in the dominant…

  8. Competitive Parallel Processing For Compression Of Data

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Antony R. H.

    1990-01-01

    Momentarily-best compression algorithm selected. Proposed competitive-parallel-processing system compresses data for transmission in channel of limited band-width. Likely application for compression lies in high-resolution, stereoscopic color-television broadcasting. Data from information-rich source like color-television camera compressed by several processors, each operating with different algorithm. Referee processor selects momentarily-best compressed output.

  9. National Laboratory for Advanced Scientific Visualization at UNAM - Mexico

    NASA Astrophysics Data System (ADS)

    Manea, Marina; Constantin Manea, Vlad; Varela, Alfredo

    2016-04-01

In 2015, the National Autonomous University of Mexico (UNAM) joined the family of Universities and Research Centers where advanced visualization and computing plays a key role to promote and advance missions in research, education, community outreach, as well as business-oriented consulting. This initiative provides access to a great variety of advanced hardware and software resources and offers a range of consulting services that spans a variety of areas related to scientific visualization, among which are: neuroanatomy, embryonic development, genome related studies, geosciences, geography, physics and mathematics related disciplines. The National Laboratory for Advanced Scientific Visualization delivers services through three main infrastructure environments: the 3D fully immersive display system Cave, the high resolution parallel visualization system Powerwall, and the high resolution spherical display Earth Simulator. The entire visualization infrastructure is interconnected to a high-performance-computing-cluster (HPCC) called ADA in honor of Ada Lovelace, considered to be the first computer programmer. The Cave is an extra large 3.6m wide room with images projected on the front, left, and right walls as well as the floor. Specialized crystal eyes LCD-shutter glasses provide a strong stereo depth perception, and a variety of tracking devices allow software to track the position of a user's hand, head and wand. The Powerwall is designed to bring large amounts of complex data together through parallel computing for team interaction and collaboration. This system is composed of 24 (6x4) high-resolution ultra-thin (2 mm) bezel monitors connected to a high-performance GPU cluster. The Earth Simulator is a large (60") high-resolution spherical display used for global-scale data visualization of geophysical, meteorological, climate and ecology data.
The HPCC-ADA is a 1000+ computing-core system that offers parallel computing resources to applications requiring large amounts of memory as well as large, fast parallel storage. The temperature of the entire system is controlled by an energy- and space-efficient cooling solution based on large rear-door liquid-cooled heat exchangers. This state-of-the-art infrastructure will boost research activities in the region, offer a powerful scientific tool for teaching at the undergraduate and graduate levels, and enhance association and cooperation with business-oriented organizations.

  10. Parallel simulation of tsunami inundation on a large-scale supercomputer

    NASA Astrophysics Data System (ADS)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation, and the computational power of recent massively parallel supercomputers is helpful to enable faster than real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), and so it is expected that very fast parallel computers will become increasingly prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on the TUNAMI-N2 model of Tohoku University, which uses a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using the CPUs allocated to each layer, 1-D domain decomposition is performed on each layer.
In the parallel computation, three types of communication are necessary: (1) communication with adjacent neighbours for the finite difference calculation, (2) communication between adjacent layers for the calculations that connect the layers, and (3) global communication to obtain the time step that satisfies the CFL condition over the whole domain. A preliminary test on the K computer showed that the parallel efficiency on 1024 cores was 57% relative to 64 cores. We estimate that the parallel efficiency will be considerably improved by applying a 2-D domain decomposition instead of the present 1-D decomposition in future work. The present parallel tsunami model was applied to the 2011 Great Tohoku tsunami. The coarsest layer covers a 758 km × 1155 km region with a 405 m grid spacing. A nesting of five layers was used, with a resolution ratio of 1/3 between nested layers. The finest layer has 5 m resolution and covers most of the coastal region of Sendai city. To complete 2 hours of simulation time, the serial (non-parallel) computation took approximately 4 days on a workstation; the same simulation on 1024 cores of the K computer took 45 minutes, which is more than two times faster than real time. This presentation discusses the updated parallel computational performance and the efficient use of the K computer, considering the characteristics of the tsunami inundation simulation model in relation to the characteristics and capabilities of the K computer.
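The layer-wise CPU allocation and the global CFL time step (communication type 3 above) can be sketched in a few lines. This is an illustrative reading of the rules described in the abstract, not the authors' code; `allocate_cpus` and `global_cfl_dt` are hypothetical helpers, and all numeric values below are invented.

```python
import math

def allocate_cpus(grid_points, total_cpus):
    # Allocate CPUs to nested layers in proportion to their grid counts,
    # giving every layer at least one CPU.
    total = sum(grid_points)
    alloc = [max(1, round(total_cpus * n / total)) for n in grid_points]
    # Trim or pad so the allocation matches the exact CPU budget.
    while sum(alloc) > total_cpus:
        alloc[alloc.index(max(alloc))] -= 1
    while sum(alloc) < total_cpus:
        alloc[alloc.index(min(alloc))] += 1
    return alloc

def global_cfl_dt(depths, dx, g=9.81, safety=0.9):
    # Largest stable time step for the whole domain: the global minimum of
    # the per-cell shallow-water CFL limit dx / sqrt(g * h).
    return safety * min(dx / math.sqrt(g * h) for h in depths if h > 0)

# Five nested layers with a 1/3 resolution ratio: finer layers hold more points.
layers = [10_000, 30_000, 90_000, 270_000, 810_000]
cpus = allocate_cpus(layers, 1024)
dt = global_cfl_dt([4000.0, 100.0, 10.0], dx=405.0)
```

In a distributed run, the `min` in `global_cfl_dt` would be an all-reduce over ranks; here it is computed directly for clarity.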

  11. Characterization of Harmonic Signal Acquisition with Parallel Dipole and Multipole Detectors

    NASA Astrophysics Data System (ADS)

    Park, Sung-Gun; Anderson, Gordon A.; Bruce, James E.

    2018-04-01

    Fourier transform ion cyclotron resonance mass spectrometry (FTICR-MS) is a powerful instrument for the study of complex biological samples due to its high resolution and mass measurement accuracy. However, the relatively long signal acquisition periods needed to achieve high resolution can limit applications of FTICR-MS. The use of multiple pairs of detector electrodes enables detection of harmonic frequencies present at integer multiples of the fundamental cyclotron frequency, and the obtained resolving power for a given acquisition period increases linearly with the order of the harmonic signal. However, harmonic signal detection also increases spectral complexity and presents challenges for interpretation. In the present work, ICR cells with independent dipole and harmonic detection electrodes and preamplifiers are demonstrated. A benefit of this approach is the ability to independently acquire fundamental and multiple harmonic signals in parallel using the same ions under identical conditions, enabling direct comparison of achieved performance as parameters are varied. Spectra from harmonic signals showed generally higher resolving power than spectra acquired with fundamental signals of equal signal duration. In addition, the maximum observed signal-to-noise (S/N) ratio from harmonic signals exceeded that of fundamental signals by 50 to 100%. Finally, parallel detection of fundamental and harmonic signals enables deconvolution of overlapping harmonic signals, since observed fundamental frequencies can be used to unambiguously calculate all possible harmonic frequencies. Thus, the present application of parallel fundamental and harmonic signal acquisition offers a general approach to improve utilization of harmonic signals to yield high-resolution spectra with decreased acquisition time.
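The deconvolution step can be illustrated with a toy matcher: every allowed harmonic is an integer multiple of a fundamental frequency observed on the dipole detector, so a peak in the harmonic spectrum can be assigned by direct lookup. The function name, frequencies, orders, and tolerance below are invented for illustration only.

```python
def assign_harmonics(fundamentals, observed, orders=(2, 3), tol=0.5):
    # For each peak on the harmonic detector, list every (fundamental, order)
    # pair whose integer multiple lies within `tol` of the peak (kHz).
    assignments = {}
    for peak in observed:
        for f in fundamentals:
            for n in orders:
                if abs(peak - n * f) <= tol:
                    assignments.setdefault(peak, []).append((f, n))
    return assignments

fund = [100.0, 150.0]          # fundamentals seen on the dipole detector, kHz
peaks = [200.0, 300.0, 450.0]  # peaks seen on the harmonic detector, kHz
hits = assign_harmonics(fund, peaks)
# The 300 kHz peak is ambiguous on its own (3 x 100 or 2 x 150); the parallel
# fundamental spectrum shows both fundamentals are genuinely present.
```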

  12. 3.0 Tesla high spatial resolution contrast-enhanced magnetic resonance angiography (CE-MRA) of the pulmonary circulation: initial experience with a 32-channel phased array coil using a high relaxivity contrast agent.

    PubMed

    Nael, Kambiz; Fenchel, Michael; Krishnam, Mayil; Finn, J Paul; Laub, Gerhard; Ruehm, Stefan G

    2007-06-01

    To evaluate the technical feasibility of high spatial resolution contrast-enhanced magnetic resonance angiography (CE-MRA) with highly accelerated parallel acquisition at 3.0 T using a 32-channel phased array coil and a high relaxivity contrast agent. Ten adult healthy volunteers (5 men, 5 women, aged 21-66 years) underwent high spatial resolution CE-MRA of the pulmonary circulation. Imaging was performed at 3 T using a 32-channel phased array coil. After intravenous injection of 1 mL of gadobenate dimeglumine (Gd-BOPTA) at 1.5 mL/s, a timing bolus was used to measure the transit time from the arm vein to the main pulmonary artery. Subsequently, following intravenous injection of 0.1 mmol/kg of Gd-BOPTA at the same rate, isotropic high spatial resolution (1 x 1 x 1 mm3) CE-MRA data sets of the entire pulmonary circulation were acquired using a fast gradient-recalled echo sequence (TR/TE 3/1.2 milliseconds, FA 18 degrees) and highly accelerated parallel acquisition (GRAPPA x 6) during a 20-second breath hold. The presence of artifact, noise, and image quality of the pulmonary arterial segments were evaluated independently by 2 radiologists. Phantom measurements were performed to assess the signal-to-noise ratio (SNR). Statistical analysis of the data was performed using the Wilcoxon rank sum test and the 2-sample Student t test. Interobserver variability was tested by kappa coefficient. All studies were of diagnostic quality as determined by both observers. The pulmonary arteries were routinely identified up to fifth-order branches, with definition in the diagnostic range and excellent interobserver agreement (kappa = 0.84, 95% confidence interval 0.77-0.90). Phantom measurements showed significantly lower SNR (P < 0.01) using GRAPPA (17.3 +/- 18.8) compared with measurements without parallel acquisition (58 +/- 49.4). 
    The described 3 T CE-MRA protocol, in combination with the high T1 relaxivity of Gd-BOPTA, provides sufficient SNR to support highly accelerated parallel acquisition (GRAPPA x 6), resulting in acquisition of isotropic (1 x 1 x 1 mm3) voxels over the entire pulmonary circulation in 20 seconds.
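The SNR penalty reported in the phantom measurements can be checked against the standard parallel-imaging relation SNR_accel = SNR_full / (g √R). The relation is textbook; back-computing the g-factor from the abstract's phantom SNRs is our own illustrative exercise.

```python
import math

def g_factor(snr_full, snr_accel, accel):
    # Rearranged parallel-imaging SNR relation:
    #   SNR_accel = SNR_full / (g * sqrt(R))  =>  g = SNR_full / (SNR_accel * sqrt(R))
    return snr_full / (snr_accel * math.sqrt(accel))

# Phantom SNRs from the abstract: 58 without acceleration, 17.3 with GRAPPA x 6.
g = g_factor(58.0, 17.3, 6)  # roughly 1.4: a modest penalty beyond sqrt(6)
```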

  13. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics of phase coarsening in the region of ultrahigh volume fraction is found. The parallel implementation is capable of harnessing the greater computing power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512 x 512 x 512 grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability which improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual run times from numerical tests.
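The paper's runtime model is not reproduced in the abstract; the sketch below is a generic Amdahl-style model for a 3-D stencil code showing why speed-up improves with problem size. The coefficients `a`, `b`, `c` are illustrative assumptions, not fitted to the paper's data.

```python
def predicted_runtime(n_cells, p, a=2e-7, b=5e-6, c=0.05):
    # Parallel work a*N/p, plus surface-proportional overhead b*N**(2/3)
    # (halo exchange on a cube), plus a fixed cost c. Coefficients invented.
    return a * n_cells / p + b * n_cells ** (2 / 3) + c

def speedup(n_cells, p):
    return predicted_runtime(n_cells, 1) / predicted_runtime(n_cells, p)

# Scalability improves with problem size, matching the trend in the abstract:
s_small = speedup(128 ** 3, 64)
s_large = speedup(512 ** 3, 64)
```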

  14. Cascaded VLSI neural network architecture for on-line learning

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P. (Inventor); Duong, Tuan A. (Inventor); Daud, Taher (Inventor)

    1992-01-01

    High-speed, analog, fully parallel, and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware-compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A computation-intensive feature classification application was demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as an application-specific coprocessor for solving real-world problems at extremely high data rates.

  15. Cascaded VLSI neural network architecture for on-line learning

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor); Daud, Taher (Inventor); Thakoor, Anilkumar P. (Inventor)

    1995-01-01

    High-speed, analog, fully parallel and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware-compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A comparison-intensive feature classification application has been demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as application-specific coprocessors for solving real-world problems at extremely high data rates.

  16. High-Resolution Adaptive Optics Test-Bed for Vision Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilks, S C; Thompson, C A; Olivier, S S

    2001-09-27

    We discuss the design and implementation of a low-cost, high-resolution adaptive optics test-bed for vision research. It is well known that high-order aberrations in the human eye reduce optical resolution and limit visual acuity. However, the effects of aberration-free eyesight on vision are only now beginning to be studied using adaptive optics to sense and correct the aberrations in the eye. We are developing a high-resolution adaptive optics system for this purpose using a Hamamatsu Parallel Aligned Nematic Liquid Crystal Spatial Light Modulator. Phase-wrapping is used to extend the effective stroke of the device, and the wavefront sensing and wavefront correction are done at different wavelengths. Issues associated with these techniques will be discussed.

  17. A maximum likelihood method for high resolution proton radiography/proton CT

    NASA Astrophysics Data System (ADS)

    Collins-Fekete, Charles-Antoine; Brousmiche, Sébastien; Portillo, Stephen K. N.; Beaulieu, Luc; Seco, Joao

    2016-12-01

    Multiple Coulomb scattering (MCS) is the largest contributor to blurring in proton imaging. In this work, we developed a maximum likelihood least squares estimator that improves proton radiography's spatial resolution. The water equivalent thickness (WET) of projections defined from the source to the detector pixels was estimated such that it maximizes the likelihood of the energy loss of every proton crossing the volume. The length spent in each projection was calculated through the optimized cubic spline path estimate. The proton radiographs were produced using Geant4 simulations. Three phantoms were studied here: a slanted cube in a tank of water to measure 2D spatial resolution, a voxelized head phantom for clinical performance evaluation, as well as a parametric Catphan phantom (CTP528) for 3D spatial resolution. Two proton beam configurations were used: a parallel and a conical beam. Proton beams of 200 and 330 MeV were simulated to acquire the radiographs. Spatial resolution is increased from 2.44 lp cm-1 to 4.53 lp cm-1 in the 200 MeV beam and from 3.49 lp cm-1 to 5.76 lp cm-1 in the 330 MeV beam. Beam configuration does not affect the reconstructed spatial resolution, as investigated between a radiograph acquired with the parallel (3.49 lp cm-1 to 5.76 lp cm-1) or conical beam (from 3.49 lp cm-1 to 5.56 lp cm-1). The improved images were then used as input in a photon tomography algorithm. The proton CT reconstruction of the Catphan phantom shows high spatial resolution (from 2.79 to 5.55 lp cm-1 for the parallel beam and from 3.03 to 5.15 lp cm-1 for the conical beam) and the reconstruction of the head phantom, although qualitative, shows high contrast in the gradient region. The proposed formulation of the optimization demonstrates serious potential to increase the spatial resolution (up by 65%) in proton radiography and greatly accelerate proton computed tomography reconstruction.

  18. A maximum likelihood method for high resolution proton radiography/proton CT.

    PubMed

    Collins-Fekete, Charles-Antoine; Brousmiche, Sébastien; Portillo, Stephen K N; Beaulieu, Luc; Seco, Joao

    2016-12-07

    Multiple Coulomb scattering (MCS) is the largest contributor to blurring in proton imaging. In this work, we developed a maximum likelihood least squares estimator that improves proton radiography's spatial resolution. The water equivalent thickness (WET) of projections defined from the source to the detector pixels was estimated such that it maximizes the likelihood of the energy loss of every proton crossing the volume. The length spent in each projection was calculated through the optimized cubic spline path estimate. The proton radiographs were produced using Geant4 simulations. Three phantoms were studied here: a slanted cube in a tank of water to measure 2D spatial resolution, a voxelized head phantom for clinical performance evaluation, as well as a parametric Catphan phantom (CTP528) for 3D spatial resolution. Two proton beam configurations were used: a parallel and a conical beam. Proton beams of 200 and 330 MeV were simulated to acquire the radiographs. Spatial resolution is increased from 2.44 lp cm-1 to 4.53 lp cm-1 in the 200 MeV beam and from 3.49 lp cm-1 to 5.76 lp cm-1 in the 330 MeV beam. Beam configuration does not affect the reconstructed spatial resolution, as investigated between a radiograph acquired with the parallel (3.49 lp cm-1 to 5.76 lp cm-1) or conical beam (from 3.49 lp cm-1 to 5.56 lp cm-1). The improved images were then used as input in a photon tomography algorithm. The proton CT reconstruction of the Catphan phantom shows high spatial resolution (from 2.79 to 5.55 lp cm-1 for the parallel beam and from 3.03 to 5.15 lp cm-1 for the conical beam) and the reconstruction of the head phantom, although qualitative, shows high contrast in the gradient region. The proposed formulation of the optimization demonstrates serious potential to increase the spatial resolution (up by 65%) in proton radiography and greatly accelerate proton computed tomography reconstruction.
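The estimator described in this record (and its duplicate above) links each proton's measured energy loss to the path lengths it spends in the projection pixels, which in the unweighted case reduces to an ordinary least-squares system. The sketch below uses synthetic data; the dimensions, noise level, and variable names are invented, and the real method additionally weights by the energy-loss likelihood and uses spline-estimated paths.

```python
import numpy as np

# Each proton i contributes one equation: sum_j L[i, j] * x[j] = wepl[i],
# where L[i, j] is the length of proton i's path inside projection pixel j
# and x[j] is the relative stopping power map to recover.
rng = np.random.default_rng(0)
n_protons, n_pixels = 500, 20
L = rng.uniform(0.0, 1.0, size=(n_protons, n_pixels))      # path-length matrix
x_true = rng.uniform(0.9, 1.1, size=n_pixels)              # ground-truth RSP
wepl = L @ x_true + rng.normal(0.0, 0.01, size=n_protons)  # noisy energy-loss data
x_hat, *_ = np.linalg.lstsq(L, wepl, rcond=None)           # least-squares recovery
```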

  19. Transparent Nanopore Cavity Arrays Enable Highly Parallelized Optical Studies of Single Membrane Proteins on Chip.

    PubMed

    Diederichs, Tim; Nguyen, Quoc Hung; Urban, Michael; Tampé, Robert; Tornow, Marc

    2018-06-13

    Membrane proteins involved in transport processes are key targets for pharmaceutical research and industry. Despite continuous improvements and new developments in the field of electrical readouts for the analysis of transport kinetics, a well-suited methodology for high-throughput characterization of single transporters with nonionic substrates and slow turnover rates is still lacking. Here, we report on a novel architecture of silicon chips with embedded nanopore microcavities, based on a silicon-on-insulator technology for high-throughput optical readouts. Arrays containing more than 14 000 inverted-pyramidal cavities of 50 femtoliter volume and 80 nm circular pore openings were constructed via high-resolution electron-beam lithography in combination with reactive ion etching and anisotropic wet etching. These cavities feature both an optically transparent bottom and top cap. Atomic force microscopy analysis reveals an overall extremely smooth chip surface, particularly in the vicinity of the nanopores, which exhibit well-defined edges. Our unprecedented transparent chip design provides parallel and independent fluorescent readout of both cavities and buffer reservoir for unbiased single-transporter recordings. Spreading of large unilamellar vesicles with efficiencies up to 96% created nanopore-supported lipid bilayers, which are stable for more than 1 day. High lipid mobility in the supported membrane was determined by fluorescence recovery after photobleaching. Flux kinetics of α-hemolysin were characterized at single-pore resolution with a rate constant of 0.96 ± 0.06 × 10-3 s-1. Here, we deliver an ideal chip platform for pharmaceutical research, which features high parallelism and throughput, synergistically combined with single-transporter resolution.

  20. Biocellion: accelerating computer simulation of multicellular biological system models.

    PubMed

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. Parallel computing in experimental mechanics and optical measurement: A review (II)

    NASA Astrophysics Data System (ADS)

    Wang, Tianyi; Kemao, Qian

    2018-05-01

    With advantages such as non-destructiveness, high sensitivity and high accuracy, optical techniques have been successfully applied to measure various important physical quantities in experimental mechanics (EM) and optical measurement (OM). However, in pursuit of higher image resolutions for higher accuracy, the computation burden of optical techniques has become much heavier. Therefore, in recent years, heterogeneous platforms composed of hardware such as CPUs and GPUs have been widely employed to accelerate these techniques due to their cost-effectiveness, short development cycle, easy portability, and high scalability. In this paper, we analyze various works by first illustrating their different architectures, followed by introducing their various parallel patterns for high speed computation. Next, we review the effects of CPU and GPU parallel computing specifically in EM & OM applications in a broad scope, which includes digital image/volume correlation, fringe pattern analysis, tomography, hyperspectral imaging, computer-generated holograms, and integral imaging. In our survey, we have found that high parallelism can always be exploited in such applications for the development of high-performance systems.
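The data-parallel pattern common to the surveyed applications — split an image into independent tiles and map a per-pixel kernel over them — can be sketched with a thread pool. In the reviewed systems the kernel would run on GPU threads or OpenMP cores; here `process_tile` is a trivial placeholder so the example stays self-contained.

```python
from concurrent.futures import ThreadPoolExecutor

def process_tile(tile):
    # Stand-in for a per-pixel kernel (e.g., a windowed fringe filter);
    # kept trivial for illustration.
    return [2 * v + 1 for v in tile]

def parallel_map_tiles(tiles, workers=4):
    # Tile-level data parallelism: tiles are independent, so the kernel
    # can be mapped over them concurrently.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(process_tile, tiles))

tiles = [[0, 1], [2, 3], [4, 5]]
result = parallel_map_tiles(tiles, workers=2)
```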

  2. Enhancing GIS Capabilities for High Resolution Earth Science Grids

    NASA Astrophysics Data System (ADS)

    Koziol, B. W.; Oehmke, R.; Li, P.; O'Kuinghttons, R.; Theurich, G.; DeLuca, C.

    2017-12-01

    Applications for high performance GIS will continue to increase as Earth system models pursue more realistic representations of Earth system processes. Finer spatial resolution model input and output, unstructured or irregular modeling grids, data assimilation, and regional coordinate systems present novel challenges for GIS frameworks operating in the Earth system modeling domain. This presentation provides an overview of two GIS-driven applications that combine high performance software with big geospatial datasets to produce value-added tools for the modeling and geoscientific community. First, a large-scale interpolation experiment using National Hydrography Dataset (NHD) catchments, a high resolution rectilinear CONUS grid, and the Earth System Modeling Framework's (ESMF) conservative interpolation capability will be described. ESMF is a parallel, high-performance software toolkit that provides capabilities (e.g. interpolation) for building and coupling Earth science applications. ESMF is developed primarily by the NOAA Environmental Software Infrastructure and Interoperability (NESII) group. The purpose of this experiment was to test and demonstrate the utility of high performance scientific software in traditional GIS domains. Special attention will be paid to the nuanced requirements for dealing with high resolution, unstructured grids in scientific data formats. Second, a chunked interpolation application using ESMF and OpenClimateGIS (OCGIS) will demonstrate how spatial subsetting can virtually remove computing resource ceilings for very high spatial resolution interpolation operations. OCGIS is a NESII-developed Python software package designed for the geospatial manipulation of high-dimensional scientific datasets. An overview of the data processing workflow, why a chunked approach is required, and how the application could be adapted to meet operational requirements will be discussed here. 
In addition, we'll provide a general overview of OCGIS's parallel subsetting capabilities including challenges in the design and implementation of a scientific data subsetter.

  3. CFD Analysis and Design Optimization Using Parallel Computers

    NASA Technical Reports Server (NTRS)

    Martinelli, Luigi; Alonso, Juan Jose; Jameson, Antony; Reuther, James

    1997-01-01

    A versatile and efficient multi-block method is presented for the simulation of both steady and unsteady flow, as well as aerodynamic design optimization of complete aircraft configurations. The compressible Euler and Reynolds Averaged Navier-Stokes (RANS) equations are discretized using a high resolution scheme on body-fitted structured meshes. An efficient multigrid implicit scheme is implemented for time-accurate flow calculations. Optimum aerodynamic shape design is achieved at very low cost using an adjoint formulation. The method is implemented on parallel computing systems using the MPI message passing interface standard to ensure portability. The results demonstrate that, by combining highly efficient algorithms with parallel computing, it is possible to perform detailed steady and unsteady analysis as well as automatic design for complex configurations using the present generation of parallel computers.

  4. A parallel graded-mesh FDTD algorithm for human-antenna interaction problems.

    PubMed

    Catarinucci, Luca; Tarricone, Luciano

    2009-01-01

    The finite difference time domain method (FDTD) is frequently used for the numerical solution of a wide variety of electromagnetic (EM) problems and, among them, those concerning human exposure to EM fields. In many practical cases related to the assessment of occupational EM exposure, large simulation domains are modeled and high spatial resolution is adopted, so that strong memory and central processing unit power requirements have to be satisfied. To handle the computational effort, the use of parallel computing is a winning approach; alternatively, subgridding techniques are often implemented. However, the simultaneous use of subgridding schemes and parallel algorithms is very new. In this paper, an easy-to-implement and highly efficient parallel graded-mesh (GM) FDTD scheme is proposed and applied to human-antenna interaction problems, demonstrating its appropriateness in dealing with complex occupational tasks and showing its capability to guarantee the advantages of a traditional subgridding technique without affecting the parallel FDTD performance.
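A minimal 1-D FDTD update loop illustrates the method's core. This sketch uses a uniform mesh in normalized units (c = 1, Courant number 0.5); a graded mesh like the paper's would make the cell size, and hence the update coefficients, vary per cell. All parameters are our own toy choices.

```python
import math

def fdtd_1d(steps=200, n=200, src=50):
    # Leap-frog Yee updates for Ez and Hy on a 1-D grid with a soft
    # Gaussian source injected at cell `src`. Illustrative only.
    ez = [0.0] * n
    hy = [0.0] * n
    S = 0.5  # Courant number (stable for S <= 1 in 1-D)
    for t in range(steps):
        for i in range(n - 1):
            hy[i] += S * (ez[i + 1] - ez[i])
        for i in range(1, n):
            ez[i] += S * (hy[i] - hy[i - 1])
        ez[src] += math.exp(-((t - 30) ** 2) / 100.0)
    return ez

field = fdtd_1d()
```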

  5. High Spatiotemporal Resolution Dynamic Contrast-Enhanced MR Enterography in Crohn Disease Terminal Ileitis Using Continuous Golden-Angle Radial Sampling, Compressed Sensing, and Parallel Imaging.

    PubMed

    Ream, Justin M; Doshi, Ankur; Lala, Shailee V; Kim, Sooah; Rusinek, Henry; Chandarana, Hersh

    2015-06-01

    The purpose of this article was to assess the feasibility of golden-angle radial acquisition with compressed sensing reconstruction (Golden-angle RAdial Sparse Parallel [GRASP]) for acquiring high temporal resolution data for pharmacokinetic modeling while maintaining high image quality in patients with Crohn disease terminal ileitis. Fourteen patients with biopsy-proven Crohn terminal ileitis were scanned using both contrast-enhanced GRASP and Cartesian breath-hold (volume-interpolated breath-hold examination [VIBE]) acquisitions. GRASP data were reconstructed with 2.4-second temporal resolution and fitted to the generalized kinetic model using an individualized arterial input function to derive the volume transfer coefficient (K(trans)) and interstitial volume (v(e)). Reconstructions, including data from the entire GRASP acquisition and Cartesian VIBE acquisitions, were rated for image quality, artifact, and detection of typical Crohn ileitis features. Inflamed loops of ileum had significantly higher K(trans) (3.36 ± 2.49 vs 0.86 ± 0.49 min(-1), p < 0.005) and v(e) (0.53 ± 0.15 vs 0.20 ± 0.11, p < 0.005) compared with normal bowel loops. There were no significant differences between GRASP and Cartesian VIBE for overall image quality (p = 0.180) or detection of Crohn ileitis features, although streak artifact was worse with the GRASP acquisition (p = 0.001). High temporal resolution data for pharmacokinetic modeling and high spatial resolution data for morphologic image analysis can be achieved in the same acquisition using GRASP.
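The "generalized kinetic model" fitted here is commonly the standard Tofts model, Ct(t) = Ktrans ∫ Cp(τ) exp(-Ktrans (t - τ) / ve) dτ. Below is a forward-model sketch by discrete convolution, using the abstract's mean parameter values and 2.4-second frames; the arterial input function and the helper `tofts_concentration` are our own illustrative assumptions.

```python
import math

def tofts_concentration(ktrans, ve, cp, dt):
    # Discrete convolution of the arterial input cp with the Tofts kernel
    # Ktrans * exp(-Ktrans * t / ve). Ktrans in 1/min, dt in minutes.
    n = len(cp)
    ct = [0.0] * n
    for i in range(n):
        s = 0.0
        for j in range(i + 1):
            s += cp[j] * math.exp(-ktrans * (i - j) * dt / ve)
        ct[i] = ktrans * s * dt
    return ct

dt = 2.4 / 60.0                                    # 2.4 s frames, in minutes
t = [i * dt for i in range(100)]
cp = [5.0 * x * math.exp(-x / 0.25) for x in t]    # toy arterial input function
ct_inflamed = tofts_concentration(3.36, 0.53, cp, dt)  # abstract's inflamed means
ct_normal = tofts_concentration(0.86, 0.20, cp, dt)    # abstract's normal means
```

Fitting would invert this forward model per voxel (e.g., by nonlinear least squares) to recover K(trans) and v(e) from measured enhancement curves.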

  6. Highly accelerated cardiac cine parallel MRI using low-rank matrix completion and partial separability model

    NASA Astrophysics Data System (ADS)

    Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie

    2016-05-01

    This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and the partial separability (PS) model. In data acquisition, k-space data are moderately randomly undersampled at the central k-space navigator locations, but highly undersampled at the outer k-space for each temporal frame. In reconstruction, the navigator data are reconstructed from the undersampled data using structured low-rank matrix completion. After all the unacquired navigator data are estimated, the PS model is used to obtain partial k-t data. Then a parallel imaging method is used to reconstruct the entire dynamic image series from the highly undersampled data. The proposed method has been shown to achieve high quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, where the conventional PS method fails.
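The low-rank completion step can be illustrated with a generic alternating-projection scheme on a toy Casorati-style matrix (rows as "voxels", columns as temporal frames), which is low rank when the dynamics are governed by a few temporal basis functions — the PS assumption. This is a simplified stand-in for the paper's structured method, not its implementation; all sizes and the sampling pattern are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 60, 40, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))  # rank-3 truth
mask = rng.random((n, m)) < 0.5                                # 50% sampling

def complete(M_obs, mask, rank, iters=200):
    # Alternate between (1) projecting onto rank-`rank` matrices via a
    # truncated SVD and (2) re-imposing the observed entries.
    X = np.where(mask, M_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, M_obs, X)  # keep measured data consistent
    return X

X = complete(M, mask, r)
err = np.linalg.norm(X - M) / np.linalg.norm(M)  # relative recovery error
```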

  7. Toward 10-km mesh global climate simulations

    NASA Astrophysics Data System (ADS)

    Ohfuchi, W.; Enomoto, T.; Takaya, K.; Yoshioka, M. K.

    2002-12-01

    An atmospheric general circulation model (AGCM) that runs very efficiently on the Earth Simulator (ES) was developed. The ES is a gigantic vector-parallel computer with a peak performance of 40 Tflops. The AGCM, named AFES (AGCM for ES), was based on version 5.4.02 of an AGCM developed jointly by the Center for Climate System Research of the University of Tokyo and the Japanese National Institute for Environmental Sciences. AFES was, however, totally rewritten in FORTRAN90 and MPI, while the original AGCM was written in FORTRAN77 and not capable of parallel computing. AFES achieved 26 Tflops (about 65% of the peak performance of the ES) at a resolution of T1279L96 (10-km horizontal resolution and 500-m vertical resolution in the middle troposphere to lower stratosphere). Some results of 10- to 20-day global simulations will be presented. At this moment, only short-term simulations are possible due to data storage limitations. Now that tens-of-teraflops computing has been achieved, petabyte-scale data storage is necessary to conduct climate-type simulations at this super-high global resolution. Some possibilities for future research topics in global super-high resolution climate simulations will be discussed. Target topics include mesoscale structures and self-organization of the Baiu-Meiyu front over Japan, cyclogenesis over the North Pacific, and typhoons around the Japan area. Improvement in local precipitation with increasing horizontal resolution will also be demonstrated.

  8. Analysis techniques for diagnosing runaway ion distributions in the reversed field pinch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J., E-mail: jkim536@wisc.edu; Anderson, J. K.; Capecchi, W.

    2016-11-15

    An advanced neutral particle analyzer (ANPA) on the Madison Symmetric Torus measures deuterium ions in the energy range 8-45 keV with an energy resolution of 2-4 keV and a time resolution of 10 μs. Three different experimental configurations measure distinct portions of the naturally occurring fast ion distributions: fast ions moving parallel, anti-parallel, or perpendicular to the plasma current. On a radial-facing port, fast ions moving perpendicular to the current have the necessary pitch to be measured by the ANPA. With the diagnostic positioned on a tangent line through the plasma core, a chord integration over fast ion density, background neutral density, and local appropriate pitch defines the measured sample. The plasma current can be reversed to measure anti-parallel fast ions in the same configuration. Comparisons of energy distributions for the three configurations show an anisotropic fast ion distribution favoring high pitch ions.

  9. THz holography in reflection using a high resolution microbolometer array.

    PubMed

    Zolliker, Peter; Hack, Erwin

    2015-05-04

    We demonstrate a digital holographic setup for Terahertz imaging of surfaces in reflection. The set-up is based on a high-power continuous wave (CW) THz laser and a high-resolution (640 × 480 pixel) bolometer detector array. Wave propagation to non-parallel planes is used to reconstruct the object surface that is rotated relative to the detector plane. In addition we implement synthetic aperture methods for resolution enhancement and compare Fourier transform phase retrieval to phase stepping methods. A lateral resolution of 200 μm and a relative phase sensitivity of about 0.4 rad corresponding to a depth resolution of 6 μm are estimated from reconstructed images of two specially prepared test targets, respectively. We highlight the use of digital THz holography for surface profilometry as well as its potential for video-rate imaging.

  10. A fully parallel in time and space algorithm for simulating the electrical activity of a neural tissue.

    PubMed

    Bedez, Mathieu; Belhachmi, Zakaria; Haeberlé, Olivier; Greget, Renaud; Moussaoui, Saliha; Bouteiller, Jean-Marie; Bischoff, Serge

    2016-01-15

    The resolution of a model describing the electrical activity of neural tissue and its propagation within this tissue is highly demanding in terms of computing time and requires strong computing power to achieve good results. In this study, we present a method to solve a model describing the electrical propagation in neuronal tissue using the parareal algorithm, coupled with spatial parallelization using CUDA on a graphics processing unit (GPU). We applied the method of resolution to different dimensions of the geometry of our model (1-D, 2-D and 3-D). The GPU results are compared with simulations from a multi-core processor cluster, using the message-passing interface (MPI), where the spatial scale was parallelized in order to reach a calculation time comparable to that of the presented GPU method. A gain of a factor of 100 in computational time between sequential results and those obtained using the GPU has been achieved in the case of 3-D geometry. Given the structure of the GPU, this factor increases with the fineness of the geometry used in the computation. To the best of our knowledge, it is the first time such a method has been used in neuroscience. Parallelization in time coupled with GPU parallelization in space allows for drastically reducing computational time with a fine resolution of the model describing the propagation of the electrical signal in a neuronal tissue. Copyright © 2015 Elsevier B.V. All rights reserved.
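The parareal idea — a cheap serial coarse propagator corrected by accurate fine propagators that can all run in parallel across time slices — can be sketched on a toy ODE. The backward-Euler propagators, the test equation y' = -λy, and all parameters below are our own choices for illustration, not the paper's neural-tissue solver.

```python
import math

def coarse(y, t0, t1, lam):
    # One backward-Euler step over [t0, t1]: the cheap, stable propagator G.
    return y / (1.0 + lam * (t1 - t0))

def fine(y, t0, t1, lam, substeps=100):
    # Many small backward-Euler steps: the expensive propagator F
    # (in the paper, the full spatially parallelized tissue solver).
    h = (t1 - t0) / substeps
    for _ in range(substeps):
        y = y / (1.0 + lam * h)
    return y

def parareal(y0, T, lam, slices=10, iters=5):
    ts = [T * k / slices for k in range(slices + 1)]
    Y = [y0]
    for k in range(slices):                      # initial serial coarse sweep
        Y.append(coarse(Y[-1], ts[k], ts[k + 1], lam))
    for _ in range(iters):
        # The fine solves below are independent: one GPU/process per slice.
        F = [fine(Y[k], ts[k], ts[k + 1], lam) for k in range(slices)]
        newY = [y0]
        for k in range(slices):                  # serial correction sweep
            g_new = coarse(newY[-1], ts[k], ts[k + 1], lam)
            g_old = coarse(Y[k], ts[k], ts[k + 1], lam)
            newY.append(g_new + F[k] - g_old)
        Y = newY
    return Y

lam, T = 2.0, 1.0
Y = parareal(1.0, T, lam, slices=10, iters=5)
exact = math.exp(-lam * T)
```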

  11. Simultaneous fluoroscopic and nuclear imaging: impact of collimator choice on nuclear image quality.

    PubMed

    van der Velden, Sandra; Beijst, Casper; Viergever, Max A; de Jong, Hugo W A M

    2017-01-01

    X-ray-guided oncological interventions could benefit from the availability of simultaneously acquired nuclear images during the procedure. To this end, a real-time, hybrid fluoroscopic and nuclear imaging device, consisting of an X-ray C-arm combined with gamma imaging capability, is currently being developed (Beijst C, Elschot M, Viergever MA, de Jong HW. Radiol. 2015;278:232-238). The setup comprises four gamma cameras placed adjacent to the X-ray tube. The four camera views are used to reconstruct an intermediate three-dimensional image, which is subsequently converted to a virtual nuclear projection image that overlaps with the X-ray image. The purpose of the present simulation study is to evaluate the impact of gamma camera collimator choice (parallel hole versus pinhole) on the quality of the virtual nuclear image. Simulation studies were performed with a digital image quality phantom including realistic noise and resolution effects, with a dynamic frame acquisition time of 1 s and a total activity of 150 MBq. Projections were simulated for 3, 5, and 7 mm pinholes and for three parallel hole collimators (low-energy all-purpose (LEAP), low-energy high-resolution (LEHR) and low-energy ultra-high-resolution (LEUHR)). Intermediate reconstruction was performed with maximum likelihood expectation-maximization (MLEM) with point spread function (PSF) modeling. In the virtual projection derived therefrom, contrast, noise level, and detectability were determined and compared with the ideal projection, that is, as if a gamma camera were located at the position of the X-ray detector. Furthermore, image deformations and spatial resolution were quantified. Additionally, simultaneous fluoroscopic and nuclear images of a sphere phantom were acquired with a physical prototype system and compared with the simulations. For small hot spots, contrast is comparable for all simulated collimators. 
Noise levels are, however, 3 to 8 times higher in pinhole geometries than in parallel hole geometries. This results in higher contrast-to-noise ratios for parallel hole geometries. Smaller spheres can thus be detected with parallel hole collimators than with pinhole collimators (17 mm vs 28 mm). Pinhole geometries show larger image deformations than parallel hole geometries. Spatial resolution varied between 1.25 cm for the 3 mm pinhole and 4 cm for the LEAP collimator. The simulation method was successfully validated by the experiments with the physical prototype. A real-time hybrid fluoroscopic and nuclear imaging device is currently being developed. Image quality of nuclear images obtained with different collimators was compared in terms of contrast, noise, and detectability. Parallel hole collimators showed lower noise and better detectability than pinhole collimators. © 2016 American Association of Physicists in Medicine.
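    Contrast-to-noise comparisons like the one above reduce to a few array operations. The sketch below is a minimal, hypothetical illustration with synthetic Poisson-noise images standing in for the simulated collimator projections; the counts, regions, and scale factors are invented, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(bg_counts, hot_counts, count_scale):
    """Synthetic projection: a hot square on a flat background.
    Higher count_scale -> more detected counts -> lower relative Poisson noise."""
    img = np.full((64, 64), float(bg_counts))
    img[24:40, 24:40] = hot_counts                     # hot-spot region
    return rng.poisson(img * count_scale) / count_scale

def cnr(img):
    hot = img[24:40, 24:40].mean()
    bg_region = img[:16, :16]                          # background patch away from the spot
    bg = bg_region.mean()
    contrast = (hot - bg) / bg
    noise = bg_region.std() / bg                       # relative background noise
    return contrast / noise

# "Parallel hole" stand-in: ~8x the counts, so roughly 3x lower relative noise.
parallel_hole = make_image(10, 30, count_scale=8.0)
pinhole = make_image(10, 30, count_scale=1.0)
print(cnr(parallel_hole) > cnr(pinhole))
```

    With comparable contrast, the collimator with lower noise wins on CNR, mirroring the detectability ordering reported above.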

  12. Multifrequency Ultra-High Resolution Miniature Scanning Microscope Using Microchannel And Solid-State Sensor Technologies And Method For Scanning Samples

    NASA Technical Reports Server (NTRS)

    Wang, Yu (Inventor)

    2006-01-01

    A miniature, ultra-high resolution, color scanning microscope using microchannel and solid-state technology that does not require focus adjustment. One embodiment includes a source of collimated radiant energy for illuminating a sample; a plurality of narrow-angle filters comprising a microchannel structure that passes only unscattered radiant energy, some portion of the radiant energy entering the microchannels from the sample; a solid-state sensor array attached to the microchannel structure, each microchannel being aligned with an element of the sensor array, so that the portion of the radiant energy entering the microchannels parallel to the microchannel walls travels to the sensor element, generating an electrical signal from which an image is reconstructed by an external device; and a moving element for moving the microchannel structure relative to the sample. Also disclosed is a method for scanning samples whereby the sensor array elements trace parallel paths that are arbitrarily close to the parallel paths traced by other elements of the array.

  13. Fluorinated colloidal gold immunolabels for imaging select proteins in parallel with lipids using high-resolution secondary ion mass spectrometry

    PubMed Central

    Wilson, Robert L.; Frisz, Jessica F.; Hanafin, William P.; Carpenter, Kevin J.; Hutcheon, Ian D.; Weber, Peter K.; Kraft, Mary L.

    2014-01-01

    The local abundance of specific lipid species near a membrane protein is hypothesized to influence the protein’s activity. The ability to simultaneously image the distributions of specific protein and lipid species in the cell membrane would facilitate testing these hypotheses. Recent advances in imaging the distribution of cell membrane lipids with mass spectrometry have created the desire for membrane protein probes that can be simultaneously imaged with isotope labeled lipids. Such probes would enable conclusive tests of whether specific proteins co-localize with particular lipid species. Here, we describe the development of fluorine-functionalized colloidal gold immunolabels that facilitate the detection and imaging of specific proteins in parallel with lipids in the plasma membrane using high-resolution SIMS performed with a NanoSIMS. First, we developed a method to functionalize colloidal gold nanoparticles with a partially fluorinated mixed monolayer that permitted NanoSIMS detection and rendered the functionalized nanoparticles dispersible in aqueous buffer. Then, to allow for selective protein labeling, we attached the fluorinated colloidal gold nanoparticles to the nonbinding portion of antibodies. By combining these functionalized immunolabels with metabolic incorporation of stable isotopes, we demonstrate that influenza hemagglutinin and cellular lipids can be imaged in parallel using NanoSIMS. These labels enable a general approach to simultaneously imaging specific proteins and lipids with high sensitivity and lateral resolution, which may be used to evaluate predictions of protein co-localization with specific lipid species. PMID:22284327

  14. Review of SPECT collimator selection, optimization, and fabrication for clinical and preclinical imaging

    PubMed Central

    Van Audenhaege, Karen; Van Holen, Roel; Vandenberghe, Stefaan; Vanhove, Christian; Metzler, Scott D.; Moore, Stephen C.

    2015-01-01

    In single photon emission computed tomography, the choice of the collimator has a major impact on the sensitivity and resolution of the system. Traditional parallel-hole and fan-beam collimators used in clinical practice, for example, have a relatively poor sensitivity and subcentimeter spatial resolution, while in small-animal imaging, pinhole collimators are used to obtain submillimeter resolution and multiple pinholes are often combined to increase sensitivity. This paper reviews methods for production, sensitivity maximization, and task-based optimization of collimation for both clinical and preclinical imaging applications. New opportunities for improved collimation are now arising primarily because of (i) new collimator-production techniques and (ii) detectors with improved intrinsic spatial resolution that have recently become available. These new technologies are expected to impact the design of collimators in the future. The authors also discuss concepts like septal penetration, high-resolution applications, multiplexing, sampling completeness, and adaptive systems, and the authors conclude with an example of an optimization study for a parallel-hole, fan-beam, cone-beam, and multiple-pinhole collimator for different applications. PMID:26233207

  15. SU-C-207A-01: A Novel Maximum Likelihood Method for High-Resolution Proton Radiography/proton CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins-Fekete, C; Centre Hospitalier University de Quebec, Quebec, QC; Mass General Hospital

    2016-06-15

    Purpose: Multiple Coulomb scattering is the largest contributor to blurring in proton imaging. Here we tested a maximum likelihood least squares estimator (MLLSE) to improve the spatial resolution of proton radiography (pRad) and proton computed tomography (pCT). Methods: The object is discretized into voxels and the average relative stopping power through voxel columns defined from the source to the detector pixels is optimized such that it maximizes the likelihood of the proton energy loss. The length spent by individual protons in each column is calculated through an optimized cubic spline estimate. pRad images were first produced using Geant4 simulations. An anthropomorphic head phantom and the Catphan line-pair module for 3-D spatial resolution were studied and the resulting images were analyzed. Both parallel and conical beams have been investigated for simulated pRad acquisition. Then, experimental data of a pediatric head phantom (CIRS) were acquired using a recently completed experimental pCT scanner. Specific filters were applied on proton angle and energy loss data to remove proton histories that underwent nuclear interactions. The MTF10% (lp/mm) was used to evaluate and compare spatial resolution. Results: Numerical simulations showed improvement in the pRad spatial resolution for the parallel (2.75 to 6.71 lp/cm) and conical beam (3.08 to 5.83 lp/cm) reconstructed with the MLLSE compared to averaging detector pixel signals. For full tomographic reconstruction, the improved pRad were used as input into a simultaneous algebraic reconstruction algorithm. The Catphan pCT reconstruction based on the MLLSE-enhanced projections showed spatial resolution improvement for the parallel (2.83 to 5.86 lp/cm) and conical beam (3.03 to 5.15 lp/cm). The anthropomorphic head pCT displayed important contrast gains in high-gradient regions. Experimental results also demonstrated significant improvement in spatial resolution of the pediatric head radiography. 
Conclusion: The proposed MLLSE shows promising potential to increase the spatial resolution (up to 244%) in proton imaging.
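    The column-wise stopping-power optimization described in the Methods can be illustrated with an ordinary least-squares analogue. Everything below is hypothetical: random path lengths stand in for the cubic-spline estimates, and a plain `lstsq` solve stands in for the full maximum-likelihood estimator.

```python
import numpy as np

rng = np.random.default_rng(42)

# Each row of A holds the lengths one proton spends in each voxel column
# (in reality from cubic-spline path estimates); b holds its measured
# water-equivalent energy loss. Dimensions and values are illustrative only.
n_protons, n_columns = 500, 20
true_rsp = rng.uniform(0.9, 1.8, size=n_columns)         # ground-truth column RSP
A = rng.uniform(0.0, 5.0, size=(n_protons, n_columns))   # path lengths (mm)
b = A @ true_rsp + rng.normal(0.0, 0.5, size=n_protons)  # noisy WEPL measurements

# Least-squares estimate of the average relative stopping power per column.
rsp_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.max(np.abs(rsp_hat - true_rsp)))  # small residual error
```

    With many protons per column the overdetermined system averages out measurement noise, which is the same mechanism that sharpens the reconstructed projections above.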

  16. Advances and challenges in cryo ptychography at the Advanced Photon Source.

    PubMed

    Deng, J; Vine, D J; Chen, S; Nashed, Y S G; Jin, Q; Peterka, T; Vogt, S; Jacobsen, C

    Ptychography has emerged as a nondestructive tool to quantitatively study extended samples at a high spatial resolution. In this manuscript, we report on recent developments from our team. We have combined cryo ptychography and fluorescence microscopy to provide simultaneous views of ultrastructure and elemental composition, we have developed multi-GPU parallel computation to speed up ptychographic reconstructions, and we have implemented fly-scan ptychography to allow for faster data acquisition. We conclude with a discussion of future challenges in high-resolution 3D ptychography.

  17. A Computer Simulation of the System-Wide Effects of Parallel-Offset Route Maneuvers

    NASA Technical Reports Server (NTRS)

    Lauderdale, Todd A.; Santiago, Confesor; Pankok, Carl

    2010-01-01

    Most aircraft managed by air-traffic controllers in the National Airspace System are capable of flying parallel-offset routes. This paper presents the results of two related studies on the effects of increased use of offset routes as a conflict resolution maneuver. The first study analyzes offset routes in the context of all standard resolution types that air-traffic controllers currently use. This study shows that by utilizing parallel-offset route maneuvers, system-wide savings of up to 30% in delay due to conflict resolution are possible. It also shows that most offset resolutions replace horizontal-vectoring resolutions. The second study builds on the results of the first and directly compares offset resolutions with standard horizontal-vectoring maneuvers, finding that in-trail conflicts are often more efficiently resolved by offset maneuvers.

  18. Embedded Implementation of VHR Satellite Image Segmentation

    PubMed Central

    Li, Chao; Balla-Arabé, Souleymane; Ginhac, Dominique; Yang, Fan

    2016-01-01

    Processing and analysis of Very High Resolution (VHR) satellite images provide a mass of crucial information, which can be used for urban planning, security issues, or environmental monitoring. However, they are computationally expensive and, thus, time consuming, while some applications, such as natural disaster monitoring and prevention, require high performance. Fortunately, parallel computing techniques and embedded systems have made great progress in recent years, and a series of massively parallel image processing devices, such as digital signal processors and Field Programmable Gate Arrays (FPGAs), have become available to engineers at a very convenient price, demonstrating significant advantages in terms of running cost, embeddability, power consumption, flexibility, etc. In this work, we designed a texture region segmentation method for very high resolution satellite images by using the level set algorithm and multi-kernel theory in a high-abstraction C environment and realized its register-transfer-level implementation with the help of a newly proposed high-level-synthesis-based design flow. The evaluation experiments demonstrate that the proposed design can produce high-quality image segmentation with a significant running-cost advantage. PMID:27240370

  19. Real-time Full-spectral Imaging and Affinity Measurements from 50 Microfluidic Channels using Nanohole Surface Plasmon Resonance†

    PubMed Central

    Lee, Si Hoon; Lindquist, Nathan C.; Wittenberg, Nathan J.; Jordan, Luke R.; Oh, Sang-Hyun

    2012-01-01

    With recent advances in high-throughput proteomics and systems biology, there is a growing demand for new instruments that can precisely quantify a wide range of receptor-ligand binding kinetics in a high-throughput fashion. Here we demonstrate a surface plasmon resonance (SPR) imaging spectroscopy instrument capable of extracting binding kinetics and affinities from 50 parallel microfluidic channels simultaneously. The instrument utilizes large-area (~cm2) metallic nanohole arrays as SPR sensing substrates and combines a broadband light source, a high-resolution imaging spectrometer and a low-noise CCD camera to extract spectral information from every channel in real time with a refractive index resolution of 7.7 × 10−6. To demonstrate the utility of our instrument for quantifying a wide range of biomolecular interactions, each parallel microfluidic channel is coated with a biomimetic supported lipid membrane containing ganglioside (GM1) receptors. The binding kinetics of cholera toxin b (CTX-b) to GM1 are then measured in a single experiment from 50 channels. By combining the highly parallel microfluidic device with large-area periodic nanohole array chips, our SPR imaging spectrometer system enables high-throughput, label-free, real-time SPR biosensing, and its full-spectral imaging capability combined with nanohole arrays could enable integration of SPR imaging with concurrent surface-enhanced Raman spectroscopy. PMID:22895607

  20. High resolution time-to-space conversion of sub-picosecond pulses at 1.55µm by non-degenerate SFG in PPLN crystal.

    PubMed

    Shayovitz, Dror; Herrmann, Harald; Sohler, Wolfgang; Ricken, Raimund; Silberhorn, Christine; Marom, Dan M

    2012-11-19

    We demonstrate high-resolution, increased-efficiency, background-free time-to-space conversion using spectrally resolved non-degenerate and collinear SFG in a bulk PPLN crystal. A serial-to-parallel resolution factor of 95 and a time window of 42 ps were achieved. A 60-fold increase in conversion efficiency slope compared with our previous work using a BBO crystal [D. Shayovitz and D. M. Marom, Opt. Lett. 36, 1957 (2011)] was recorded. Finally, the measured 40 GHz narrow linewidth of the output SFG signal implies the possibility of extracting phase information by employing coherent detection techniques.

  1. Development of an optical inspection platform for surface defect detection in touch panel glass

    NASA Astrophysics Data System (ADS)

    Chang, Ming; Chen, Bo-Cheng; Gabayno, Jacque Lynn; Chen, Ming-Fu

    2016-04-01

    An optical inspection platform combining parallel image processing with a high-resolution opto-mechanical module was developed for defect inspection of touch panel glass. Dark-field images were acquired using a 12288-pixel line CCD camera with 3.5 µm per pixel resolution and a 12 kHz line rate. Key features of the glass surface were analyzed by parallel image processing on combined CPU and GPU platforms. Defect inspection of the touch panel glass, which provided 386 megapixels of image data per sample, was completed in roughly 5 seconds. A high detection rate for surface scratches on the touch panel glass was realized, with a minimum detectable defect size of about 10 µm. The implementation of a custom illumination source significantly improved the scattering efficiency on the surface, thereby enhancing the contrast of the acquired images and the overall performance of the inspection system.
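    The dark-field detection principle used above (a clean surface stays dark; scratches scatter light into bright pixels) can be sketched with a simple global threshold. The frame, the streak, and the threshold heuristic below are synthetic stand-ins, not the platform's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic dark-field frame: dim background plus sensor noise...
frame = rng.normal(10.0, 2.0, size=(480, 640))
# ...and a bright, scratch-like streak scattering light into 40 pixels.
frame[200, 100:140] += 60.0

# Global threshold heuristic: flag pixels far above the background statistics.
threshold = frame.mean() + 6.0 * frame.std()
mask = frame > threshold
ys, xs = np.nonzero(mask)
print(mask.sum(), xs.min(), xs.max())  # streak pixel count and horizontal extent
```

    A production system would add connected-component grouping and size filtering on top of the mask, but the threshold step is where the dark-field contrast does the work.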

  2. The structure of the electron diffusion region during asymmetric anti-parallel magnetic reconnection

    NASA Astrophysics Data System (ADS)

    Swisdak, M.; Drake, J. F.; Price, L.; Burch, J. L.; Cassak, P.

    2017-12-01

    The structure of the electron diffusion region during asymmetric magnetic reconnection is explored with high-resolution particle-in-cell simulations that focus on a magnetopause event observed by the Magnetospheric Multiscale Mission (MMS). A major surprise is the development of a standing, oblique whistler-like structure with regions of intense positive and negative dissipation. This structure arises from high-speed electrons that flow along the magnetosheath magnetic separatrices, converge in the dissipation region and jet across the x-line into the magnetosphere. The jet produces a region of negative charge and generates intense parallel electric fields that eject the electrons downstream along the magnetospheric separatrices. The ejected electrons produce the parallel velocity-space crescents documented by MMS.

  3. Methods and apparatus for multi-resolution replication of files in a parallel computing system using semantic information

    DOEpatents

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Torres, Aaron

    2015-10-20

    Techniques are provided for storing files in a parallel computing system using different resolutions. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a sub-file. The method comprises the steps of obtaining semantic information related to the file; generating a plurality of replicas of the file with different resolutions based on the semantic information; and storing the file and the plurality of replicas of the file in one or more storage nodes of the parallel computing system. The different resolutions comprise, for example, a variable number of bits and/or a different sub-set of data elements from the file. A plurality of the sub-files can be merged to reproduce the file.
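    The replica idea can be illustrated with block averaging: keep the full-resolution data and store coarser replicas so a reader can pick the cheapest resolution that answers a query. The factors and the 1-D data below are illustrative, not the patented method.

```python
import numpy as np

def make_replicas(data, factors=(2, 4)):
    """Store the full-resolution array plus block-averaged replicas.
    Keys are downsampling factors; factor 1 is the original data."""
    replicas = {1: data}
    for f in factors:
        trimmed = data[: data.shape[0] // f * f]       # drop any ragged tail
        replicas[f] = trimmed.reshape(-1, f).mean(axis=1)  # block average
    return replicas

full = np.arange(16, dtype=float)
reps = make_replicas(full)
print({f: arr.shape[0] for f, arr in reps.items()})  # → {1: 16, 2: 8, 4: 4}
```

    In the patent's setting the choice of factors would be driven by the semantic information attached to the file; here they are fixed constants purely for illustration.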

  4. A possibility of parallel and anti-parallel diffraction measurements on a neutron diffractometer employing a bent perfect crystal monochromator at the monochromatic focusing condition

    NASA Astrophysics Data System (ADS)

    Choi, Yong Nam; Kim, Shin Ae; Kim, Sung Kyu; Kim, Sung Baek; Lee, Chang-Hee; Mikula, Pavel

    2004-07-01

    In a conventional diffractometer having a single monochromator, only one position, the parallel position, is used for the diffraction experiment (i.e. detection), because the resolution of the other, the anti-parallel position, is very poor. However, a bent perfect crystal (BPC) monochromator at the monochromatic focusing condition can provide quite flat and equal resolution at both the parallel and anti-parallel positions, and thus both sides can be used for the diffraction experiment. From the FWHM and Δd/d data measured in three diffraction geometries (symmetric, asymmetric compression and asymmetric expansion), we conclude that simultaneous diffraction measurement in both the parallel and anti-parallel positions can be achieved.

  5. Development of a Distributed Parallel Computing Framework to Facilitate Regional/Global Gridded Crop Modeling with Various Scenarios

    NASA Astrophysics Data System (ADS)

    Jang, W.; Engda, T. A.; Neff, J. C.; Herrick, J.

    2017-12-01

    Many crop models are increasingly used to evaluate crop yields at regional and global scales. However, implementation of these models across large areas using fine-scale grids is limited by computational time requirements. In order to facilitate global gridded crop modeling with various scenarios (i.e., different crops, management schedules, fertilizer, and irrigation) using the Environmental Policy Integrated Climate (EPIC) model, we developed a distributed parallel computing framework in Python. A local desktop with 14 cores (28 threads) was used to test the framework in Iringa, Tanzania, which has 406,839 grid cells. High-resolution soil data, SoilGrids (250 x 250 m), and climate data, AgMERRA (0.25 x 0.25 deg), were also used as input for the gridded EPIC model. The framework includes a master file for parallel computing, an input database, input data formatters, EPIC model execution, and output analyzers. Through the master file, a user-defined number of CPU threads divides the EPIC simulation into jobs. The raw database is formatted by the EPIC input data formatters, and the formatted data feed the EPIC simulation jobs. Then 28 EPIC jobs run simultaneously, and only the result files of interest are parsed and passed to the output analyzers. We applied various scenarios with seven different slopes and twenty-four fertilizer ranges. Parallelized input generators create the different scenarios as a list for distributed parallel computing. After all simulations are completed, parallelized output analyzers process all outputs according to the different scenarios. This saves significant computing time and resources, making it possible to conduct gridded modeling at regional to global scales with high-resolution data. For example, serial processing for the Iringa test case would require 113 hours, while the framework developed in this study requires only approximately 6 hours, a nearly 95% reduction in computing time.
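    The job-division step in such a framework can be sketched with Python's standard multiprocessing pool. The `run_cell` function below is a hypothetical stand-in for one EPIC grid-cell run, and its yield formula is invented purely for illustration.

```python
from multiprocessing import Pool

def run_cell(args):
    """Stand-in for one grid-cell simulation (the real framework would
    format EPIC inputs, execute the model, and parse its output files)."""
    cell_id, fertilizer = args
    # Placeholder "crop model": yield responds to fertilizer with saturation.
    return cell_id, 3.0 * fertilizer / (50.0 + fertilizer)

def simulate(n_cells=1000, fertilizer=100.0, workers=4):
    """Divide the grid cells into jobs and run them across worker processes."""
    jobs = [(i, fertilizer) for i in range(n_cells)]
    with Pool(workers) as pool:
        results = dict(pool.map(run_cell, jobs))
    return results

if __name__ == "__main__":
    out = simulate()
    print(len(out), round(out[0], 2))
```

    Because each grid cell is independent, the speedup scales with the worker count until I/O (reading soil and climate inputs, writing outputs) becomes the bottleneck.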

  6. Theoretical analysis of microring resonator-based biosensor with high resolution and free of temperature influence

    NASA Astrophysics Data System (ADS)

    Jian, Aoqun; Zou, Lu; Tang, Haiquan; Duan, Qianqian; Ji, Jianlong; Zhang, Qianwu; Zhang, Xuming; Sang, Shengbo

    2017-06-01

    The issue of thermal effects is inevitable in ultrahigh refractive index (RI) measurement. A biosensor with a parallel-coupled dual-microring resonator configuration is proposed to achieve high-resolution measurement free of thermal effects. Based on the coupled-resonator-induced transparency effect, the design and principle of the biosensor are introduced in detail, and the performance of the sensor is deduced by simulations. Compared to a biosensor based on a single-ring configuration, the designed biosensor has a 10-fold increased Q value according to the simulation results; thus, the sensor is expected to achieve particularly high resolution. In addition, the mathematical model of the proposed sensor can eliminate the thermal influence on the output signal by adopting an algorithm. This work is expected to have great application potential in areas requiring high-resolution RI measurement, such as biomedical discovery, virus screening, and drinking water safety.

  7. Simultaneous Multi-Slice fMRI using Spiral Trajectories

    PubMed Central

    Zahneisen, Benjamin; Poser, Benedikt A.; Ernst, Thomas; Stenger, V. Andrew

    2014-01-01

    Parallel imaging methods using multi-coil receiver arrays have been shown to be effective for increasing MRI acquisition speed. However, parallel imaging methods for fMRI with 2D sequences show only limited improvements in temporal resolution because of the long echo times needed for BOLD contrast. Recently, Simultaneous Multi-Slice (SMS) imaging techniques have been shown to increase fMRI temporal resolution by factors of four and higher. In SMS fMRI, multiple slices can be acquired simultaneously using Echo Planar Imaging (EPI), and the overlapping slices are un-aliased using a parallel imaging reconstruction with multiple receivers. The slice separation can be further improved using the “blipped-CAIPI” EPI sequence that provides a more efficient sampling of the SMS 3D k-space. In this paper a blipped-spiral SMS sequence for ultra-fast fMRI is presented. The blipped-spiral sequence combines the sampling efficiency of spiral trajectories with the SMS encoding concept used in blipped-CAIPI EPI. We show that blipped-spiral acquisition can achieve almost whole-brain coverage at 3 mm isotropic resolution in 168 ms. It is also demonstrated that the high temporal resolution allows for dynamic BOLD lag time measurement using visual/motor and retinotopic mapping paradigms. The local BOLD lag time within the visual cortex following the retinotopic mapping stimulation of expanding flickering rings is directly measured and easily translated into an eccentricity map of the cortex. PMID:24518259
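    The slice un-aliasing mentioned above can be illustrated with a toy SENSE-style solve: when two slices are excited simultaneously and received by two coils with different sensitivities, each collapsed pixel yields a small linear system. The sensitivities and images below are synthetic, and the real blipped-CAIPI/spiral reconstruction is considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(7)

n = 32
slice1 = rng.random((n, n))      # two "simultaneously excited" slices
slice2 = rng.random((n, n))

# Coil sensitivities S[coil, slice], varying smoothly so the 2x2 systems
# stay well conditioned (all values synthetic).
x = np.linspace(0.0, 1.0, n)
S = np.empty((2, 2, n, n))
S[0, 0], S[0, 1] = 1.0, 0.4 + 0.5 * x            # coil 0
S[1, 0], S[1, 1] = 0.3 + 0.6 * x[:, None], 1.0   # coil 1

# Each coil measures the sensitivity-weighted sum of both slices.
collapsed = S[:, 0] * slice1 + S[:, 1] * slice2  # shape (coil, n, n)

# Per-pixel 2x2 SENSE-style solve: A @ [s1, s2] = measured.
A = S.transpose(2, 3, 0, 1)                      # (n, n, coil, slice)
b = collapsed.transpose(1, 2, 0)[..., None]      # (n, n, coil, 1)
sol = np.linalg.solve(A, b)[..., 0]              # (n, n, slice)
print(np.max(np.abs(sol[..., 0] - slice1)))      # recovery error near machine precision
```

    The same principle, extended to many coils, slices, and the blipped-CAIPI phase shifts, underlies the un-aliasing reconstruction in the paper.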

  8. Parallel adaptive wavelet collocation method for PDEs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nejadmalayeri, Alireza, E-mail: Alireza.Nejadmalayeri@gmail.com; Vezolainen, Alexei, E-mail: Alexei.Vezolainen@Colorado.edu; Brown-Dymkoski, Eric, E-mail: Eric.Browndymkoski@Colorado.edu

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  9. High-Resolution Study of the First Stretching Overtones of H3Si79Br.

    PubMed

    Ceausu; Graner; Bürger; Mkadmi; Pracna; Lafferty

    1998-11-01

    The Fourier transform infrared spectrum of monoisotopic H3Si79Br (resolution 7.7 × 10−3 cm−1) was studied from 4200 to 4520 cm−1, in the region of the first overtones of the Si-H stretching vibration. The investigation of the spectrum revealed the presence of two band systems, the first consisting of one parallel (ν0 = 4340.2002 cm−1) and one perpendicular (ν0 = 4342.1432 cm−1) strong component, and the second of one parallel (ν0 = 4405.789 cm−1) and one perpendicular (ν0 = 4416.233 cm−1) weak component. The rovibrational analysis shows strong local perturbations for both the strong and weak systems. Seven hundred eighty-one nonzero-weighted transitions belonging to the strong system [the (200) manifold in the local mode picture] were fitted to a simple model involving a perpendicular component interacting by a weak Coriolis resonance with a parallel component. The most severely perturbed transitions (whose |obs − calc| values exceeded 3 × 10−3 cm−1) were given zero weights. The standard deviations of the fit were 1.0 × 10−3 and 0.69 × 10−3 cm−1 for the parallel and perpendicular components, respectively. The weak band system, severely perturbed by many "dark" perturbers, was fitted to a model involving one parallel and one perpendicular band connected by a Coriolis-type resonance. The K″·ΔK = +10 to +18 subbands of the perpendicular component, which showed very large observed − calculated values (approximately 0.5 cm−1), were excluded from this calculation. The standard deviations of the fit were 11 × 10−3 and 13 × 10−3 cm−1 for the parallel and perpendicular components, respectively. Copyright 1998 Academic Press.

  10. Smoldyn on graphics processing units: massively parallel Brownian dynamics simulations.

    PubMed

    Dematté, Lorenzo

    2012-01-01

    Space is a very important aspect in the simulation of biochemical systems; recently, the need for simulation algorithms able to cope with space has become more and more compelling. Complex and detailed models of biochemical systems need to deal with the movement of single molecules and particles, taking into consideration localized fluctuations, transport phenomena, and diffusion. A common drawback of spatial models lies in their complexity: models can become very large, and their simulation can be time consuming, especially if we want to capture the system's behavior in a reliable way using stochastic methods in conjunction with high spatial resolution. In order to deliver on the promise made by systems biology to understand a system as a whole, we need to scale up the size of the models we are able to simulate, moving from sequential to parallel simulation algorithms. In this paper, we analyze Smoldyn, a widely used algorithm for stochastic simulation of chemical reactions with spatial resolution and single-molecule detail, and we propose an alternative, innovative implementation that exploits the parallelism of Graphics Processing Units (GPUs). The implementation executes the most computationally demanding steps (computation of diffusion, unimolecular, and bimolecular reactions, as well as the most common cases of molecule-surface interaction) on the GPU, computing them in parallel for each molecule of the system. The implementation offers good speed-ups and real-time, high-quality graphics output.
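    The per-molecule independence that makes this problem GPU-friendly is easy to see in the diffusion step: every molecule's Brownian displacement is drawn independently, so one thread per molecule suffices. The NumPy sketch below is a vectorized CPU stand-in with illustrative parameters, not Smoldyn's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

D = 1e-12      # diffusion coefficient (m^2/s), illustrative
dt = 1e-4      # time step (s)
n = 100_000    # number of molecules

pos = np.zeros((n, 3))                 # all molecules start at the origin
sigma = np.sqrt(2.0 * D * dt)          # per-axis RMS displacement per step
for _ in range(100):
    # One Brownian step for every molecule at once -> one GPU thread each.
    pos += rng.normal(0.0, sigma, size=pos.shape)

# Sanity check: after t = 100*dt the mean squared displacement should
# approach 6*D*t in three dimensions.
msd = np.mean(np.sum(pos**2, axis=1))
print(msd / (6.0 * D * 100 * dt))      # ratio near 1
```

    Reaction steps are less trivially parallel (bimolecular reactions need neighbor searches), which is why the paper treats them as separate GPU kernels.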

  11. Image sensor with high dynamic range linear output

    NASA Technical Reports Server (NTRS)

    Yadid-Pecht, Orly (Inventor); Fossum, Eric R. (Inventor)

    2007-01-01

    Designs and operational methods to increase the dynamic range of image sensors, and APS devices in particular, by achieving more than one integration time for each pixel. An APS system with more than one column-parallel signal chain for readout is described for maintaining a high frame rate during readout. Each active pixel is sampled multiple times during a single frame readout, resulting in multiple integration times. The operational methods can also be used to obtain multiple integration times for each pixel with an APS design having a single column-parallel signal chain for readout. Furthermore, analog-to-digital conversion with high speed and high resolution can be implemented.

  12. Stereo depth distortions in teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Vonsydow, Marika

    1988-01-01

    In teleoperation, a typical application of stereo vision is to view a work space located a short distance (1 to 3 m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors on the order of 2 cm were measured. A geometric analysis was made of the distortion of the fronto-parallel plane for stereo TV viewing, and the results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration giving high stereo depth resolution while minimizing stereo depth distortion. It is found that for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolutions but cause greater depth distortions. Thus, with larger intercamera distances, operators will make greater depth errors (because of the greater distortions) but will be more confident in their depth judgments (because of the higher resolution).
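The baseline/resolution trade-off can be made concrete with the standard disparity-to-depth relation. The sketch below uses the parallel-axis stereo approximation dZ = Z²·Δd/(f·b), whereas the paper analyzes converged cameras, so this is only an illustration of the trend (larger baseline → finer depth resolution), not the paper's geometry; all names and values are assumptions.

```python
def depth_resolution(z, baseline, focal_px, disparity_step=1.0):
    """Smallest resolvable depth difference at range z for parallel-axis
    stereo (an approximation; converged cameras differ in detail).

    z              : camera-to-object distance (m)
    baseline       : intercamera distance (m)
    focal_px       : focal length expressed in pixels
    disparity_step : smallest measurable disparity change (pixels)
    """
    return z * z * disparity_step / (focal_px * baseline)

# at 1.4 m, doubling the baseline halves the depth-resolution element
dz_narrow = depth_resolution(1.4, 0.2, 1000.0)
dz_wide = depth_resolution(1.4, 0.4, 1000.0)
```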

  13. High-resolution brain SPECT imaging by combination of parallel and tilted detector heads.

    PubMed

    Suzuki, Atsuro; Takeuchi, Wataru; Ishitsu, Takafumi; Morimoto, Yuichi; Kobashi, Keiji; Ueno, Yuichiro

    2015-10-01

    To improve the spatial resolution of brain single-photon emission computed tomography (SPECT), we propose a new brain SPECT system in which the detector heads are tilted towards the rotation axis so that they are closer to the brain. In addition, parallel detector heads are used to obtain the complete projection data set. We evaluated this parallel and tilted detector head system (PT-SPECT) in simulations. In the simulation study, the tilt angle of the detector heads relative to the axis was 45°. The distance from the collimator surface of the parallel detector heads to the axis was 130 mm. The distance from the collimator surface of the tilted detector heads to the origin on the axis was 110 mm. A CdTe semiconductor panel with a 1.4 mm detector pitch and a parallel-hole collimator were employed in both types of detector head. A line source phantom, cold-rod brain-shaped phantom, and cerebral blood flow phantom were evaluated. The projection data were generated by forward-projection of the phantom images using physics models, and Poisson noise at clinical levels was applied to the projection data. The ordered-subsets expectation maximization algorithm with physics models was used. We also evaluated conventional SPECT using four parallel detector heads for the sake of comparison. The evaluation of the line source phantom showed that the transaxial FWHM in the central slice for conventional SPECT ranged from 6.1 to 8.5 mm, while that for PT-SPECT ranged from 5.3 to 6.9 mm. The cold-rod brain-shaped phantom image showed that conventional SPECT could resolve rods as small as 8 mm in diameter. By contrast, PT-SPECT could resolve rods as small as 6 mm in diameter in upper slices of the cerebrum. The cerebral blood flow phantom image showed that the PT-SPECT system provided higher resolution at the thalamus and caudate nucleus as well as at the longitudinal fissure of the cerebrum compared with conventional SPECT. PT-SPECT provides improved image resolution not only at upper but also at central slices of the cerebrum.

  14. Development of bimolecular fluorescence complementation using rsEGFP2 for detection and super-resolution imaging of protein-protein interactions in live cells

    PubMed Central

    Wang, Sheng; Ding, Miao; Chen, Xuanze; Chang, Lei; Sun, Yujie

    2017-01-01

    Direct visualization of protein-protein interactions (PPIs) at high spatial and temporal resolution in live cells is crucial for understanding the intricate and dynamic behaviors of signaling protein complexes. Recently, bimolecular fluorescence complementation (BiFC) assays have been combined with super-resolution imaging techniques including PALM and SOFI to visualize PPIs at nanometer spatial resolution. RESOLFT nanoscopy has been proven to be a powerful live-cell super-resolution imaging technique. To detect and visualize PPIs in live cells with high temporal and spatial resolution, here we developed a BiFC assay using split rsEGFP2, a highly photostable and reversibly photoswitchable fluorescent protein previously developed for RESOLFT nanoscopy. Combined with parallelized RESOLFT microscopy, we demonstrated the high spatiotemporal resolving capability of the rsEGFP2-based BiFC assay by specifically detecting and visualizing the heterodimerization interactions between Bcl-xL and Bak, as well as the dynamics of the complex on the mitochondrial membrane in live cells. PMID:28663931

  15. LTE modeling of inhomogeneous chromospheric structure using high-resolution limb observations

    NASA Technical Reports Server (NTRS)

    Lindsey, C.

    1987-01-01

    The paper discusses considerations relevant to LTE modeling of rough atmospheres. Particular attention is given to the application of recent high-resolution observations of the solar limb in the far-infrared and radio continuum to the modeling of chromospheric spicules. It is explained how the continuum limb observations can be combined with morphological knowledge of spicule structure to model the physical conditions in chromospheric spicules. This discussion forms the basis for a chromospheric model presented in a parallel publication based on observations ranging from 100 microns to 2.6 mm.

  16. A Petascale Non-Hydrostatic Atmospheric Dynamical Core in the HOMME Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tufo, Henry

    The High-Order Method Modeling Environment (HOMME) is a framework for building scalable, conservative atmospheric models for climate simulation and general atmospheric-modeling applications. Its spatial discretizations are based on Spectral-Element (SE) and Discontinuous Galerkin (DG) methods. These are local methods employing high-order accurate spectral basis functions that have been shown to perform well on massively parallel supercomputers at any resolution and to scale particularly well at high resolutions. HOMME provides the framework upon which the CAM-SE community atmosphere model dynamical core is constructed. In its current incarnation, CAM-SE employs the hydrostatic primitive equations (PE) of motion, which limits its resolution to simulations coarser than 0.1° per grid cell. The primary objective of this project is to remove this resolution limitation by providing HOMME with the capabilities needed to build nonhydrostatic models that solve the compressible Euler/Navier-Stokes equations.

  17. Parallel robot for micro assembly with integrated innovative optical 3D-sensor

    NASA Astrophysics Data System (ADS)

    Hesselbach, Juergen; Ispas, Diana; Pokar, Gero; Soetebier, Sven; Tutsch, Rainer

    2002-10-01

    Recent advances in the fields of MEMS and MOEMS often require precise assembly of very small parts with an accuracy of a few microns. In order to meet this demand, a new approach using a robot based on parallel mechanisms in combination with a novel 3D-vision system has been chosen. The planar parallel robot structure with 2 DOF provides a high resolution in the XY-plane. It carries two additional serial axes for linear and rotational movement in/about the z direction. In order to achieve high precision as well as good dynamic capabilities, the drive concept for the parallel (main) axes incorporates air bearings in combination with linear electric servo motors. High accuracy position feedback is provided by optical encoders with a resolution of 0.1 μm. To allow for visualization and visual control of assembly processes, a camera module fits into the hollow tool head. It consists of a miniature CCD camera and a light source. In addition, a modular gripper support is integrated into the tool head. To increase the accuracy, a control loop based on an optoelectronic sensor will be implemented. As a result of an in-depth analysis of different approaches, a photogrammetric system using one single camera and special beam-splitting optics was chosen. A pattern of elliptical marks is applied to the surfaces of workpiece and gripper. Using a model-based recognition algorithm, the image processing software identifies the gripper and the workpiece and determines their relative position. A deviation vector is calculated and fed into the robot control to guide the gripper.

  18. New Insights into the Nature of Turbulence in the Earth's Magnetosheath Using Magnetospheric MultiScale Mission Data

    NASA Astrophysics Data System (ADS)

    Breuillard, H.; Matteini, L.; Argall, M. R.; Sahraoui, F.; Andriopoulou, M.; Le Contel, O.; Retinò, A.; Mirioni, L.; Huang, S. Y.; Gershman, D. J.; Ergun, R. E.; Wilder, F. D.; Goodrich, K. A.; Ahmadi, N.; Yordanova, E.; Vaivads, A.; Turner, D. L.; Khotyaintsev, Yu. V.; Graham, D. B.; Lindqvist, P.-A.; Chasapis, A.; Burch, J. L.; Torbert, R. B.; Russell, C. T.; Magnes, W.; Strangeway, R. J.; Plaschke, F.; Moore, T. E.; Giles, B. L.; Paterson, W. R.; Pollock, C. J.; Lavraud, B.; Fuselier, S. A.; Cohen, I. J.

    2018-06-01

    The Earth’s magnetosheath, which is characterized by highly turbulent fluctuations, is usually divided into two regions of different properties as a function of the angle between the interplanetary magnetic field and the shock normal. In this study, we make use of high-time resolution instruments on board the Magnetospheric MultiScale spacecraft to determine and compare the properties of subsolar magnetosheath turbulence in both regions, i.e., downstream of the quasi-parallel and quasi-perpendicular bow shocks. In particular, we take advantage of the unprecedented temporal resolution of the Fast Plasma Investigation instrument to show the density fluctuations down to sub-ion scales for the first time. We show that the nature of turbulence is highly compressible down to electron scales, particularly in the quasi-parallel magnetosheath. In this region, the magnetic turbulence also shows an inertial (Kolmogorov-like) range, indicating that the fluctuations are not formed locally, in contrast with the quasi-perpendicular magnetosheath. We also show that the electromagnetic turbulence is dominated by electric fluctuations at sub-ion scales (f > 1 Hz) and that magnetic and electric spectra steepen at the largest-electron scale. The latter indicates a change in the nature of turbulence at electron scales. Finally, we show that the electric fluctuations around the electron gyrofrequency are mostly parallel in the quasi-perpendicular magnetosheath, where intense whistlers are observed. This result suggests that energy dissipation, plasma heating, and acceleration might be driven by intense electrostatic parallel structures/waves, which can be linked to whistler waves.

  19. Definition of the Spatial Resolution of X-Ray Microanalysis in Thin Foils

    NASA Technical Reports Server (NTRS)

    Williams, D. B.; Michael, J. R.; Goldstein, J. I.; Romig, A. D., Jr.

    1992-01-01

    The spatial resolution of X-ray microanalysis in thin foils is defined in terms of the incident electron beam diameter and the average beam broadening. The beam diameter is defined as the full width tenth maximum of a Gaussian intensity distribution. The spatial resolution is calculated by a convolution of the beam diameter and the average beam broadening. This definition of the spatial resolution can be related simply to experimental measurements of composition profiles across interphase interfaces. Monte Carlo calculations using a high-speed parallel supercomputer show good agreement with this definition of the spatial resolution and calculations based on this definition. The agreement is good over a range of specimen thicknesses and atomic number, but is poor when excessive beam tailing distorts the assumed Gaussian electron intensity distributions. Beam tailing occurs in low-Z materials because of fast secondary electrons and in high-Z materials because of plural scattering.

  20. Recent Advances in 3D Time-Resolved Contrast-Enhanced MR Angiography

    PubMed Central

    Riederer, Stephen J.; Haider, Clifton R.; Borisch, Eric A.; Weavers, Paul T.; Young, Phillip M.

    2015-01-01

    Contrast-enhanced MR angiography (CE-MRA) was first introduced for clinical studies approximately 20 years ago. Early work provided 3 to 4 mm spatial resolution with acquisition times in the 30 sec range. Since that time there has been continuing effort to provide improved spatial resolution with reduced acquisition time, allowing high resolution three-dimensional (3D) time-resolved studies. The purpose of this work is to describe how this has been accomplished. Specific technical enablers have been: improved gradients allowing reduced repetition times, improved k-space sampling and reconstruction methods, parallel acquisition particularly in two directions, and improved and higher count receiver coil arrays. These have collectively made high resolution time-resolved studies readily available for many anatomic regions. Depending on the application, approximate 1 mm isotropic resolution is now possible with frame times of several seconds. Clinical applications of time-resolved CE-MRA are briefly reviewed. PMID:26032598

  1. Circuit for high resolution decoding of multi-anode microchannel array detectors

    NASA Technical Reports Server (NTRS)

    Kasle, David B. (Inventor)

    1995-01-01

    A circuit for high resolution decoding of multi-anode microchannel array detectors consisting of input registers accepting transient inputs from the anode array; anode encoding logic circuits connected to the input registers; midpoint pipeline registers connected to the anode encoding logic circuits; and pixel decoding logic circuits connected to the midpoint pipeline registers is described. A high resolution algorithm circuit operates in parallel with the pixel decoding logic circuit and computes a high resolution least significant bit to enhance the multianode microchannel array detector's spatial resolution by halving the pixel size and doubling the number of pixels in each axis of the anode array. A multiplexer is connected to the pixel decoding logic circuit and allows a user selectable pixel address output according to the actual multi-anode microchannel array detector anode array size. An output register concatenates the high resolution least significant bit onto the standard ten bit pixel address location to provide an eleven bit pixel address, and also stores the full eleven bit pixel address. A timing and control state machine is connected to the input registers, the anode encoding logic circuits, and the output register for managing the overall operation of the circuit.

  2. High-resolution whole-brain diffusion MRI at 7T using radiofrequency parallel transmission.

    PubMed

    Wu, Xiaoping; Auerbach, Edward J; Vu, An T; Moeller, Steen; Lenglet, Christophe; Schmitter, Sebastian; Van de Moortele, Pierre-François; Yacoub, Essa; Uğurbil, Kâmil

    2018-03-30

    Investigating the utility of RF parallel transmission (pTx) for Human Connectome Project (HCP)-style whole-brain diffusion MRI (dMRI) data at 7 Tesla (7T). Healthy subjects were scanned in pTx and single-transmit (1Tx) modes. Multiband (MB), single-spoke pTx pulses were designed to image sagittal slices. HCP-style dMRI data (i.e., 1.05-mm resolution, MB2, b-values = 1000/2000 s/mm², 286 images, and a 40-min scan) and data with higher accelerations (MB3 and MB4) were acquired with pTx. pTx significantly improved flip-angle detected signal uniformity across the brain, yielding an ∼19% increase in temporal SNR (tSNR) averaged over the brain relative to 1Tx. This allowed significantly enhanced estimation of multiple fiber orientations (with an ∼21% decrease in dispersion) in HCP-style 7T dMRI datasets. Additionally, pTx pulses achieved substantially lower power deposition, permitting higher accelerations, enabling collection of the same data in 2/3 and 1/2 the scan time or of more data in the same scan time. pTx provides a solution to two major limitations for slice-accelerated high-resolution whole-brain dMRI at 7T; it improves flip-angle uniformity, and enables higher slice acceleration relative to the current state-of-the-art. As such, pTx provides significant advantages for rapid acquisition of high-quality, high-resolution truly whole-brain dMRI data. © 2018 International Society for Magnetic Resonance in Medicine.

  3. Fast generation of computer-generated hologram by graphics processing unit

    NASA Astrophysics Data System (ADS)

    Matsuda, Sho; Fujii, Tomohiko; Yamaguchi, Takeshi; Yoshikawa, Hiroshi

    2009-02-01

    A cylindrical hologram is well known to be viewable in 360 deg. Such a hologram requires very high pixel resolution; therefore, a Computer-Generated Cylindrical Hologram (CGCH) requires a huge amount of calculation. In our previous research, we used a look-up table method for fast calculation on an Intel Pentium 4 at 2.8 GHz. It took 480 hours to calculate a high resolution CGCH (504,000 x 63,000 pixels, with an average of 27,000 object points). To improve the quality of the CGCH reconstructed image, the fringe pattern requires higher spatial frequency and resolution; therefore, to increase the calculation speed, we have to change the calculation method. In this paper, to reduce the calculation time of the CGCH (912,000 x 108,000 pixels), we employ a Graphics Processing Unit (GPU). It took 4,406 hours to calculate the high resolution CGCH on a Xeon 3.4 GHz. Since a GPU has many streaming processors and a parallel processing structure, it works as a high performance parallel processor, and it performs best on 2-dimensional data and streaming data. Recently, GPUs have also become usable for general-purpose computation (GPGPU). For example, NVIDIA's GeForce 7 series became programmable with the Cg programming language, and the subsequent GeForce 8 series supports CUDA, a software development kit made by NVIDIA. Theoretically, the calculation ability of the GPU is announced as 500 GFLOPS. From the experimental results, we achieved a calculation 47 times faster than our previous work, which used a CPU. Therefore, the CGCH can be generated in 95 hours, and the total time to calculate and print the CGCH is 110 hours.
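The heavy computation being moved to the GPU is, at its core, an interference sum over object points evaluated independently at every hologram pixel. The vectorized sketch below is a CPU stand-in for such a kernel; the wavelength, pixel pitch, fringe formula (cos(2πr/λ) per point source), and function names are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

WAVELENGTH = 633e-9   # HeNe red, assumed
PIXEL_PITCH = 10e-6   # hologram pixel size, assumed

def fringe_tile(h, w, points):
    """Compute one tile of a point-source hologram.

    points : iterable of (x, y, z) object-point coordinates in metres.

    Every hologram pixel sums an interference term cos(2*pi*r/lambda)
    over all object points. Pixels are mutually independent, which is
    what makes the computation map naturally onto GPU threads.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    px = xs * PIXEL_PITCH
    py = ys * PIXEL_PITCH
    tile = np.zeros((h, w))
    for x0, y0, z0 in points:
        r = np.sqrt((px - x0) ** 2 + (py - y0) ** 2 + z0 ** 2)
        tile += np.cos(2.0 * np.pi * r / WAVELENGTH)
    return tile
```

A full CGCH would tile the 912,000 × 108,000-pixel plane and dispatch tiles to the GPU; the per-pixel independence is the source of the reported speed-up.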

  4. Electron temperature and heat load measurements in the COMPASS divertor using the new system of probes

    NASA Astrophysics Data System (ADS)

    Adamek, J.; Seidl, J.; Horacek, J.; Komm, M.; Eich, T.; Panek, R.; Cavalier, J.; Devitre, A.; Peterka, M.; Vondracek, P.; Stöckel, J.; Sestak, D.; Grover, O.; Bilkova, P.; Böhm, P.; Varju, J.; Havranek, A.; Weinzettl, V.; Lovell, J.; Dimitrova, M.; Mitosinkova, K.; Dejarnac, R.; Hron, M.; The COMPASS Team; The EUROfusion MST1 Team

    2017-11-01

    A new system of probes was recently installed in the divertor of the tokamak COMPASS in order to investigate the ELM energy density with high spatial and temporal resolution. The new system consists of two arrays of rooftop-shaped Langmuir probes (LPs), used to measure the floating potential or the ion saturation current density, and one array of ball-pen probes (BPPs), used to measure the plasma potential with a spatial resolution of ~3.5 mm. The combination of floating BPPs and LPs yields the electron temperature with microsecond temporal resolution. We report on the design of the new divertor probe arrays and first results of electron temperature profile measurements in ELMy H-mode and L-mode. We also present comparative measurements of the parallel heat flux using the new probe arrays and fast infrared thermography (IR) data during L-mode, with excellent agreement between both techniques using a heat power transmission coefficient γ  =  7. The ELM energy density ε∥ was measured during a set of NBI-assisted ELMy H-mode discharges. The peak values of ε∥ were compared with those predicted by the model and with experimental data from JET, AUG and MAST, with good agreement.

  5. High-Efficiency High-Resolution Global Model Developments at the NASA Goddard Data Assimilation Office

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann; Atlas, Robert (Technical Monitor)

    2002-01-01

    The Data Assimilation Office (DAO) has been developing a new generation of ultra-high resolution General Circulation Model (GCM) that is suitable for 4-D data assimilation, numerical weather predictions, and climate simulations. These three applications have conflicting requirements. For 4-D data assimilation and weather predictions, it is highly desirable to run the model at the highest possible spatial resolution (e.g., 55 km or finer) so as to be able to resolve and predict socially and economically important weather phenomena such as tropical cyclones, hurricanes, and severe winter storms. For climate change applications, the model simulations need to be carried out for decades, if not centuries. To reduce uncertainty in climate change assessments, the next generation model would also need to be run at a fine enough spatial resolution that can at least marginally simulate the effects of intense tropical cyclones. Scientific problems (e.g., parameterization of subgrid scale moist processes) aside, all three areas of application require the model's computational performance to be dramatically improved as compared to the previous generation. In this talk, I will present the current and future developments of the "finite-volume dynamical core" at the Data Assimilation Office. This dynamical core applies modern monotonicity-preserving algorithms and is genuinely conservative by construction, not by an ad hoc fixer. The "discretization" of the conservation laws is purely local, which is clearly advantageous for resolving sharp gradient flow features. In addition, the local nature of the finite-volume discretization also has a significant advantage on distributed memory parallel computers. Together with a unique vertically Lagrangian control volume discretization that essentially reduces the dimension of the computational problem from three to two, the finite-volume dynamical core is very efficient, particularly at high resolutions. 
I will also present the computational design of the dynamical core using a hybrid distributed-shared memory programming paradigm that is portable to virtually any of today's high-end parallel super-computing clusters.

  7. Design and performance evaluation of a new high energy parallel hole collimator for radioiodine planar imaging by gamma cameras: Monte Carlo simulation study.

    PubMed

    Moslemi, Vahid; Ashoor, Mansour

    2017-05-01

    In addition to the trade-off between resolution and sensitivity, which is a common problem among all types of parallel-hole collimators (PCs), images obtained with high energy PCs (HEPCs) suffer from the hole-pattern artifact (HPA) caused by their thicker septa. In this study, a new collimator design is proposed to improve the trade-off between resolution and sensitivity and to eliminate the HPA. A novel PC, namely the high energy extended PC (HEEPC), is proposed and compared to HEPCs. In the new PC, trapezoidal denticles are added on top of the septa on the detector side. The performance of the HEEPCs was evaluated and compared to that of HEPCs using a Monte Carlo N-Particle version 5 (MCNP5) simulation. The point spread functions (PSFs) of the HEPCs and HEEPCs were obtained, along with parameters such as resolution, sensitivity, and scattering and penetration ratios, and the HPA of the collimators was assessed. Furthermore, a Picker phantom study was performed to examine the effects of the collimators on the quality of planar images. It was found that the HEEPC D, with a resolution identical to that of the HEPC C, increased sensitivity by 34.7%, improved the trade-off between resolution and sensitivity, and eliminated the HPA. In the Picker phantom study, the HEEPC D showed the hot and cold lesions with higher contrast, lower noise, and a higher contrast-to-noise ratio (CNR). Since the HEEPCs modify the shape of the PSFs, they are able to improve the trade-off between resolution and sensitivity; consequently, planar images can be achieved with higher contrast resolution. Furthermore, because the HEEPCs reduce the HPA and produce images with a higher CNR compared to HEPCs, the images obtained with HEEPCs have a higher quality, which can help physicians provide better diagnoses.

  8. Parallelization of the Flow Field Dependent Variation Scheme for Solving the Triple Shock/Boundary Layer Interaction Problem

    NASA Technical Reports Server (NTRS)

    Schunk, Richard Gregory; Chung, T. J.

    2001-01-01

    A parallelized version of the Flowfield Dependent Variation (FDV) method is developed to analyze a problem of current research interest: the flowfield resulting from a triple shock/boundary layer interaction. Such flowfields are often encountered in the inlets of high-speed air-breathing vehicles, including the NASA Hyper-X research vehicle. In order to resolve the complex shock structure and to provide adequate resolution for boundary layer computations of the convective heat transfer from surfaces inside the inlet, models containing over 500,000 nodes are needed. Efficient parallelization of the computation is essential to achieving results in a timely manner. Results from a parallelization scheme based upon multi-threading, as implemented on multiple-processor supercomputers and workstations, are presented.

  9. A parallel method of atmospheric correction for multispectral high spatial resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhao, Shaoshuai; Ni, Chen; Cao, Jing; Li, Zhengqiang; Chen, Xingfeng; Ma, Yan; Yang, Leiku; Hou, Weizhen; Qie, Lili; Ge, Bangyu; Liu, Li; Xing, Jin

    2018-03-01

    Remote sensing images are usually contaminated by atmospheric components, especially aerosol particles. For quantitative remote sensing applications, atmospheric correction based on a radiative transfer model is used to retrieve reflectance by decoupling the atmosphere and the surface, at the cost of long computation times. Parallel computing is one way to accelerate this process. A parallel strategy in which multiple CPUs work simultaneously is designed to perform atmospheric correction of a multispectral remote sensing image. The flow of the parallel framework and the main parallel body of the atmospheric correction are described. A multispectral remote sensing image from the Chinese Gaofen-2 satellite is then used to test the acceleration efficiency. As the number of CPUs increases from 1 to 8, the computational speed increases as well; the largest acceleration rate is 6.5. With 8 CPUs, atmospheric correction of the whole image takes 4 minutes.
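The tile-parallel structure described above can be sketched in a few lines: split the image into bands, correct each band concurrently, and reassemble. The simple linear correction below is a placeholder for the expensive radiative-transfer lookup, and threads stand in for the paper's multi-CPU workers; all names and parameters are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def correct_tile(tile, gain=2.0, path=10.0):
    """Placeholder per-tile correction: reflectance = (radiance - path) / gain
    stands in for the radiative-transfer-based lookup, which is the
    expensive part in the real workflow."""
    return (tile - path) / gain

def parallel_correction(image, n_workers=4, n_tiles=8):
    """Split the image into row bands and correct them concurrently,
    mirroring the multi-worker strategy described in the abstract."""
    tiles = np.array_split(image, n_tiles, axis=0)
    with ThreadPoolExecutor(n_workers) as ex:
        corrected = list(ex.map(correct_tile, tiles))
    return np.vstack(corrected)
```

Because the tiles are independent, the speed-up is bounded mainly by worker count and I/O, which is consistent with the reported acceleration rate of 6.5 on 8 CPUs.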

  10. Analyzing Tropical Waves Using the Parallel Ensemble Empirical Mode Decomposition Method: Preliminary Results from Hurricane Sandy

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Cheung, Samson; Li, Jui-Lin F.; Wu, Yu-ling

    2013-01-01

    In this study, we discuss the performance of the parallel ensemble empirical mode decomposition (EMD) in the analysis of tropical waves that are associated with tropical cyclone (TC) formation. To efficiently analyze high-resolution, global, multidimensional data sets, we first implement multilevel parallelism in the ensemble EMD (EEMD) and obtain a parallel speedup of 720 using 200 eight-core processors. We then apply the parallel EEMD (PEEMD) to extract the intrinsic mode functions (IMFs) from preselected data sets that represent (1) idealized tropical waves and (2) large-scale environmental flows associated with Hurricane Sandy (2012). Results indicate that the PEEMD is efficient and effective in revealing the major wave characteristics of the data, such as wavelengths and periods, by sifting out the dominant (wave) components. This approach has potential for hurricane climate studies examining the statistical relationship between tropical waves and TC formation.
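EEMD is embarrassingly parallel over its ensemble: each member decomposes an independently noise-perturbed copy of the signal, and the results are averaged so the added white noise cancels. The sketch below shows only that parallel structure; the moving-average split is a deliberately simple stand-in for real EMD sifting, and the function names and parameters are assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def decompose(signal, width=11):
    """Stand-in for EMD sifting: split the signal into a fast component
    and a slow residue with a moving average. Real EEMD extracts a full
    set of IMFs at this step."""
    kernel = np.ones(width) / width
    slow = np.convolve(signal, kernel, mode="same")
    return signal - slow, slow

def eemd(signal, n_ensemble=50, noise_std=0.1, n_workers=4, seed=0):
    """Ensemble EMD skeleton: decompose many noise-perturbed copies of
    the signal in parallel, then average the members so the injected
    white noise cancels out."""
    rng = np.random.default_rng(seed)
    copies = [signal + rng.normal(0.0, noise_std, signal.size)
              for _ in range(n_ensemble)]
    with ThreadPoolExecutor(n_workers) as ex:
        results = list(ex.map(decompose, copies))
    fast = np.mean([r[0] for r in results], axis=0)
    slow = np.mean([r[1] for r in results], axis=0)
    return fast, slow
```

The paper's multilevel parallelism adds a second level (distributing the spatial grid as well as the ensemble members), which is what yields the reported speedup of 720 on 200 eight-core processors.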

  11. Parallel Monte Carlo transport modeling in the context of a time-dependent, three-dimensional multi-physics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Procassini, R.J.

    1997-12-31

    The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
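The spatial-parallelism pattern described above can be illustrated with a serial sketch: each subdomain would normally live on its own processor, and particles that cross a subdomain boundary would be sent to the neighbor via message passing. Here the "messages" are plain Python lists, the 1-D three-domain geometry is invented for illustration, and the random walk stands in for real transport physics.

```python
import random

DOMAIN_EDGES = [0.0, 1.0, 2.0, 3.0]  # three 1-D subdomains (assumed geometry)

def owner(x):
    """Return the index of the subdomain containing x, or None if x
    has escaped the problem geometry."""
    for i in range(len(DOMAIN_EDGES) - 1):
        if DOMAIN_EDGES[i] <= x < DOMAIN_EDGES[i + 1]:
            return i
    return None

def transport(n_particles=200, step=0.4, max_sweeps=100000, seed=1):
    """Random-walk particles until all escape, counting how many
    inter-domain 'sends' the decomposition generates. The number and
    timing of sends depend on each particle's path -- the source of the
    non-deterministic communication patterns noted in the abstract."""
    rng = random.Random(seed)
    inboxes = {0: [], 1: [1.5] * n_particles, 2: []}  # start mid-domain
    escaped, messages = 0, 0
    for _ in range(max_sweeps):
        if escaped == n_particles:
            break
        for dom in inboxes:                      # one "processor" at a time
            batch, inboxes[dom] = inboxes[dom], []
            for x in batch:
                x += rng.choice((-step, step))
                dest = owner(x)
                if dest is None:
                    escaped += 1                 # left the geometry
                elif dest == dom:
                    inboxes[dom].append(x)       # stays local, no send
                else:
                    inboxes[dest].append(x)      # boundary crossing: "send"
                    messages += 1
    return escaped, messages
```

In a real message-passing implementation each inbox exchange becomes explicit communication, and load imbalance between subdomains is what makes high parallel efficiency hard to achieve.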

  12. Atomic resolution characterization of a SrTiO{sub 3} grain boundary in the STEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGibbon, M.M.; Browning, N.D.; Chisholm, M.F.

    This paper uses the complementary techniques of high resolution Z-contrast imaging and PEELS (parallel detection electron energy loss spectroscopy) to investigate the atomic structure and chemistry of a 25 degree symmetric tilt boundary in a bicrystal of the electroceramic SrTiO{sub 3}. The grain boundary is composed of two different boundary structural units which occur in about equal numbers: one containing Ti-O columns and one without.

  13. PP-SWAT: A Python-based computing software for efficient multiobjective calibration of SWAT

    USDA-ARS?s Scientific Manuscript database

    With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...

  14. Microfluidic local perfusion chambers for the visualization and manipulation of synapses

    PubMed Central

    Taylor, Anne M.; Dieterich, Daniela C.; Ito, Hiroshi T.; Kim, Sally A.; Schuman, Erin M.

    2010-01-01

    Summary The polarized nature of neurons as well as the size and density of synapses complicates the manipulation and visualization of cell biological processes that control synaptic function. Here we developed a microfluidic local perfusion (μLP) chamber to access and manipulate synaptic regions and pre- and post-synaptic compartments in vitro. This chamber directs the formation of synapses in >100 parallel rows connecting separate neuron populations. A perfusion channel transects the parallel rows allowing access to synaptic regions with high spatial and temporal resolution. We used this chamber to investigate synapse-to-nucleus signaling. Using the calcium indicator dye, Fluo-4, we measured changes in calcium at dendrites and somata, following local perfusion of glutamate. Exploiting the high temporal resolution of the chamber, we exposed synapses to “spaced” or “massed” application of glutamate and then examined levels of pCREB in somata. Lastly, we applied the metabotropic receptor agonist, DHPG, to dendrites and observed increases in Arc transcription and Arc transcript localization. PMID:20399729

  15. An Automated Parallel Image Registration Technique Based on the Correlation of Wavelet Features

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Campbell, William J.; Cromp, Robert F.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    With the increasing importance of multiple platform/multiple remote sensing missions, fast and automatic integration of digital data from disparate sources has become critical to the success of these endeavors. Our work utilizes maxima of wavelet coefficients to form the basic features of a correlation-based automatic registration algorithm. Our wavelet-based registration algorithm is tested successfully with data from the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) and the Landsat/Thematic Mapper (TM), which differ by translation and/or rotation. By the choice of high-frequency wavelet features, this method is similar to an edge-based correlation method, but by exploiting the multi-resolution nature of a wavelet decomposition, our method achieves higher computational speeds for comparable accuracies. This algorithm has been implemented on a Single Instruction Multiple Data (SIMD) massively parallel computer, the MasPar MP-2, as well as on the Cray T3D, the Cray T3E and a Beowulf cluster of Pentium workstations.

  16. Hybrid parallel computing architecture for multiview phase shifting

    NASA Astrophysics Data System (ADS)

    Zhong, Kai; Li, Zhongwei; Zhou, Xiaohui; Shi, Yusheng; Wang, Congjun

    2014-11-01

    The multiview phase-shifting method shows its powerful capability in achieving high resolution three-dimensional (3-D) shape measurement. Unfortunately, this ability results in very high computation costs, and 3-D computations have to be processed offline. To realize real-time 3-D shape measurement, a hybrid parallel computing architecture is proposed for multiview phase shifting. In this architecture, the central processing unit can co-operate with the graphic processing unit (GPU) to achieve hybrid parallel computing. The high computation cost procedures, including lens distortion rectification, phase computation, correspondence, and 3-D reconstruction, are implemented in the GPU, and a three-layer kernel function model is designed to simultaneously realize coarse-grained and fine-grained parallel computing. Experimental results verify that the developed system can perform 50 fps (frames per second) real-time 3-D measurement with 260 K 3-D points per frame. A speedup of up to 180 times is obtained for the performance of the proposed technique using an NVIDIA GT560Ti graphics card rather than a sequential C implementation on a 3.4 GHz Intel Core i7-3770.

  17. High resolution ultrasonic spectroscopy system for nondestructive evaluation

    NASA Technical Reports Server (NTRS)

    Chen, C. H.

    1991-01-01

    With increased demand for high resolution ultrasonic evaluation, computer based systems or work stations become essential. The ultrasonic spectroscopy method of nondestructive evaluation (NDE) was used to develop a high resolution ultrasonic inspection system supported by modern signal processing, pattern recognition, and neural network technologies. The basic system which was completed consists of a 386/20 MHz PC (IBM AT compatible), a pulser/receiver, a digital oscilloscope with serial and parallel communications to the computer, an immersion tank with motor control of X-Y axis movement, and the supporting software package, IUNDE, for interactive ultrasonic evaluation. Although the hardware components are commercially available, the software development is entirely original. By integrating signal processing, pattern recognition, maximum entropy spectral analysis, and artificial neural network functions into the system, many NDE tasks can be performed. The high resolution graphics capability provides visualization of complex NDE problems. The phase 3 efforts involve intensive marketing of the software package and collaborative work with industrial sectors.

  18. A 64Cycles/MB, Luma-Chroma Parallelized H.264/AVC Deblocking Filter for 4K × 2K Applications

    NASA Astrophysics Data System (ADS)

    Shen, Weiwei; Fan, Yibo; Zeng, Xiaoyang

    In this paper, a high-throughput deblocking filter is presented for the H.264/AVC standard, catering to video applications with 4K × 2K (4096 × 2304) ultra-definition resolution. In order to strengthen the parallelism without simply increasing the area, we propose a luma-chroma parallel method. Meanwhile, this work reduces the number of processing cycles, the amount of external memory traffic and the working frequency by using triple four-stage pipeline filters and a luma-chroma interlaced sequence. Furthermore, it eliminates most unnecessary off-chip memory bandwidth with a highly reusable memory scheme, and adopts a "slide window" buffer scheme. As a result, our design can support 4K × 2K at 30 fps applications at a working frequency of only 70.8 MHz.

  19. Assessing Coupled Social Ecological Flood Vulnerability from Uttarakhand, India, to the State of New York with Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Tellman, B.; Schwarz, B.

    2014-12-01

    This talk describes the development of a web application to predict and communicate vulnerability to floods given publicly available data, disaster science, and geotech cloud capabilities. The proof of concept in the Google Earth Engine API, with initial testing on case studies in New York and Uttarakhand, India, demonstrates the potential of highly parallelized cloud computing to model socio-ecological disaster vulnerability at high spatial and temporal resolution and in near real time. Cloud computing facilitates statistical modeling with variables derived from large public social and ecological data sets, including census data, nighttime lights (NTL), and WorldPop to derive social parameters together with elevation, satellite imagery, rainfall, and observed flood data from the Dartmouth Flood Observatory to derive biophysical parameters. While more traditional, physically based hydrological models that rely on flow algorithms and numerical methods are currently unavailable in parallelized computing platforms like Google Earth Engine, there is high potential to explore "data driven" modeling that trades physics for statistics in a parallelized environment. A data driven approach to flood modeling with geographically weighted logistic regression has been initially tested on Hurricane Irene in southeastern New York. Comparison of model results with observed flood data reveals 97% accuracy of the model in predicting flooded pixels. Testing on multiple storms is required to further validate this initially promising approach. A statistical social-ecological flood model that could produce rapid vulnerability assessments to predict who might require immediate evacuation and where could serve as an early warning.
As the data-driven model presented relies on globally available data, the only real time data input required would be typical data from a weather service, e.g. precipitation or coarse resolution flood prediction. However, model uncertainty will vary locally depending upon the resolution and frequency of observed flood and socio-economic damage impact data.
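
    The "data driven" idea described here can be sketched as geographically weighted logistic regression: each prediction site fits its own logistic model, with training pixels down-weighted by a Gaussian kernel of geographic distance. The sketch below is a generic stand-alone illustration with hypothetical variable names, not the Earth Engine implementation.

```python
import math

# Geographically weighted logistic regression (GWR-logit) sketch:
# fit local coefficients at `site` by kernel-weighted gradient descent.

def gw_logistic_fit(X, y, coords, site, bandwidth=1.0, lr=1.0, iters=500):
    """Return local weights [b0, b1, ...] fitted at `site`."""
    w = [0.0] * (len(X[0]) + 1)
    kern = [math.exp(-((cx - site[0]) ** 2 + (cy - site[1]) ** 2)
                     / (2 * bandwidth ** 2)) for cx, cy in coords]
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi, ki in zip(X, y, kern):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = ki * (p - yi)                # kernel-weighted residual
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def gw_predict(w, x):
    """Flood probability at a pixel with feature vector x."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], x))
    return 1.0 / (1.0 + math.exp(-z))
```

    With a feature such as elevation, low-lying pixels near the fitting site should come out with high flood probability and high pixels with low probability.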

  20. Geopotential Error Analysis from Satellite Gradiometer and Global Positioning System Observables on Parallel Architecture

    NASA Technical Reports Server (NTRS)

    Schutz, Bob E.; Baker, Gregory A.

    1997-01-01

    The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory resident and out-of-core parallel linear algebra techniques along with data parallel batch algorithms form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory resident gradiometer application exhibits an overall application performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.

  1. Geopotential error analysis from satellite gradiometer and global positioning system observables on parallel architectures

    NASA Astrophysics Data System (ADS)

    Baker, Gregory Allen

    The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory resident and out-of-core parallel linear algebra techniques along with data parallel batch algorithms form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory resident gradiometer application exhibits an overall application performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.

  2. Efficient parallel resolution of the simplified transport equations in mixed-dual formulation

    NASA Astrophysics Data System (ADS)

    Barrault, M.; Lathuilière, B.; Ramet, P.; Roman, J.

    2011-03-01

    A reactivity computation consists of computing the highest eigenvalue of a generalized eigenvalue problem, for which an inverse power algorithm is commonly used. Very fine modelizations are difficult for our sequential solver, based on the simplified transport equations, to treat in terms of memory consumption and computational time. A first implementation of a Lagrangian-based domain decomposition method leads to poor parallel efficiency because of an increase in the number of power iterations [1]. In order to obtain high parallel efficiency, we improve the parallelization scheme by changing the location of the loop over the subdomains in the overall algorithm and by benefiting from the characteristics of the Raviart-Thomas finite element. The new parallel algorithm still allows us to locally adapt the numerical scheme (mesh, finite element order). However, it can be significantly optimized for the matching grid case. The good behavior of the new parallelization scheme is demonstrated for the matching grid case on several hundreds of nodes for computations based on a pin-by-pin discretization.

  3. Combining points and lines in rectifying satellite images

    NASA Astrophysics Data System (ADS)

    Elaksher, Ahmed F.

    2017-09-01

    The rapid advance in remote sensing technologies has established the potential to gather accurate and reliable information about the Earth's surface using high resolution satellite images. Remote sensing satellite images of less than one-meter pixel size are currently used in large-scale mapping. Rigorous photogrammetric equations are usually used to describe the relationship between image coordinates and ground coordinates. These equations require knowledge of the exterior and interior orientation parameters of the image, which might not be available. On the other hand, the parallel projection transformation can be used to represent the mathematical relationship between the image-space and object-space coordinate systems and provides the required accuracy for large-scale mapping using fewer ground control features. This article investigates the differences between point-based and line-based parallel projection transformation models in rectifying satellite images with different resolutions. The point-based parallel projection transformation model and its extended form are presented, and the corresponding line-based forms are developed. Results showed that the RMS errors computed using the point- and line-based transformation models are equivalent and satisfy the requirements for large-scale mapping. The differences between the transformation parameters computed using the point- and line-based transformation models are insignificant. The results also showed a high correlation between differences in ground elevation and the RMS error.

  4. Fast and efficient molecule detection in localization-based super-resolution microscopy by parallel adaptive histogram equalization.

    PubMed

    Li, Yiming; Ishitsuka, Yuji; Hedde, Per Niklas; Nienhaus, G Ulrich

    2013-06-25

    In localization-based super-resolution microscopy, individual fluorescent markers are stochastically photoactivated and subsequently localized within a series of camera frames, yielding a final image with a resolution far beyond the diffraction limit. Yet, before localization can be performed, the subregions within the frames where the individual molecules are present have to be identified, oftentimes in the presence of high background. In this work, we address the importance of reliable molecule identification for the quality of the final reconstructed super-resolution image. We present a fast and robust algorithm (a-livePALM) that vastly improves the molecule detection efficiency while minimizing false assignments that can lead to image artifacts.
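
    The detection step under discussion can be illustrated with a toy detector: flag pixels that are strict local maxima and also exceed a locally adaptive threshold (window mean plus k standard deviations). This is only a rough stand-in for the a-livePALM detection stage, not its actual algorithm.

```python
# Toy molecule-candidate detection for a 2-D intensity image (list of lists):
# a pixel is reported when it is a strict 8-neighbour local maximum AND
# exceeds mean + k*std of its surrounding window (adaptive to background).

def local_stats(img, r, c, half=2):
    """Mean and standard deviation of the (2*half+1)^2 window around (r, c)."""
    vals = [img[i][j]
            for i in range(max(0, r - half), min(len(img), r + half + 1))
            for j in range(max(0, c - half), min(len(img[0]), c + half + 1))]
    m = sum(vals) / len(vals)
    var = sum((v - m) ** 2 for v in vals) / len(vals)
    return m, var ** 0.5

def detect_molecules(img, k=2.0):
    """Return (row, col) positions of candidate molecules."""
    hits = []
    rows, cols = len(img), len(img[0])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            v = img[r][c]
            neigh = [img[r + dr][c + dc]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0)]
            m, s = local_stats(img, r, c)
            if v > max(neigh) and v > m + k * s:
                hits.append((r, c))
    return hits
```

    A single bright pixel on a dark background is reported once; flat background produces no detections because no pixel strictly exceeds its neighbours.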

  5. Watershed Influences on Nearshore Waters Across the Entire US Great Lakes Coastal Region

    EPA Science Inventory

    We have combined three elements of observation to enable a comprehensive characterization of the Great Lakes nearshore that links nearshore conditions with their adjacent coastal watersheds. The three elements are: 1) a shore-parallel, high-resolution survey of the nearshore usin...

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lecomte, Roger; Arpin, Louis; Beaudoin, Jean-Franç

    Purpose: LabPET II is a new generation APD-based PET scanner designed to achieve sub-mm spatial resolution using truly pixelated detectors and highly integrated parallel front-end processing electronics. Methods: The basic element uses a 4×8 array of 1.12×1.12 mm{sup 2} Lu{sub 1.9}Y{sub 0.1}SiO{sub 5}:Ce (LYSO) scintillator pixels with one-to-one coupling to a 4×8 pixelated monolithic APD array mounted on a ceramic carrier. Four detector arrays are mounted on a daughter board carrying two flip-chip, 64-channel, mixed-signal, application-specific integrated circuits (ASIC) on the backside, interfacing to two detector arrays each. Fully parallel signal processing was implemented in silicon by encoding time and energy information using a dual-threshold Time-over-Threshold (ToT) scheme. The self-contained 128-channel detector module was designed as a generic component for ultra-high resolution PET imaging of small to medium-size animals. Results: Energy and timing performance were optimized by carefully setting ToT thresholds to minimize the noise/slope ratio. ToT spectra clearly show a resolved 511 keV photopeak and Compton edge with ToT resolution well below 10%. After correction for nonlinear ToT response, energy resolution is typically 24±2% FWHM. Coincidence time resolution between opposing 128-channel modules is below 4 ns FWHM. Initial imaging results demonstrate that 0.8 mm hot spots of a Derenzo phantom can be resolved. Conclusion: A new generation PET scanner featuring truly pixelated detectors was developed and shown to achieve a spatial resolution approaching the physical limit of PET. Future plans are to integrate a small-bore dedicated mouse version of the scanner within a PET/CT platform.

  7. High Resolution Simulations of Arctic Sea Ice, 1979-1993

    DTIC Science & Technology

    2003-01-01

    William H. Lipscomb. To evaluate improvements in modelling Arctic sea ice, we compare results from two regional models at 1/12° horizontal ... resolution. The first is a coupled ice-ocean model of the Arctic Ocean, consisting of an ocean model (adapted from the Parallel Ocean Program, Los Alamos National Laboratory [LANL]) and the "old" sea ice model. The second model uses the same grid but consists of an improved "new" sea ice model (LANL

  8. Global Swath and Gridded Data Tiling

    NASA Technical Reports Server (NTRS)

    Thompson, Charles K.

    2012-01-01

    This software generates cylindrically projected tiles of swath-based or gridded satellite data, called "tiles," for the purpose of dynamically generating high-resolution global images covering various time periods, scaling ranges, and colors. It reconstructs a global image given a set of tiles covering a particular time range, scaling values, and a color table. The program is configurable in terms of tile size, spatial resolution, format of input data, location of input data (local or distributed), number of processes run in parallel, and data conditioning.
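
    The core of any such tiling scheme is a mapping between geographic coordinates and tile indices, plus its inverse. A minimal plate carrée (cylindrical equirectangular) sketch follows; the 10° tile size and the (-180°, -90°) origin are assumptions for illustration, not the software's actual configuration.

```python
# Cylindrical tiling sketch: map a lon/lat point to the (col, row) of the
# tile containing it, and recover a tile's geographic bounding box.

def tile_index(lon, lat, tile_deg=10.0):
    """Column/row of the tile containing (lon, lat); origin at (-180, -90)."""
    col = int((lon + 180.0) // tile_deg)
    row = int((lat + 90.0) // tile_deg)
    return col, row

def tile_bounds(col, row, tile_deg=10.0):
    """(lon_min, lat_min, lon_max, lat_max) of tile (col, row)."""
    lon0 = -180.0 + col * tile_deg
    lat0 = -90.0 + row * tile_deg
    return (lon0, lat0, lon0 + tile_deg, lat0 + tile_deg)
```

    Because tiles are independent, regenerating or recoloring them parallelizes trivially across processes, which matches the record's "number of processes run in parallel" configuration knob.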

  9. Macro-actor execution on multilevel data-driven architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaudiot, J.L.; Najjar, W.

    1988-12-31

    The data-flow model of computation brings high programmability to multiprocessors at the expense of increased overhead. Applying the model at a higher level leads to better performance but also introduces loss of parallelism. We demonstrate here syntax-directed program decomposition methods for the creation of large macro-actors in numerical algorithms. In order to alleviate some of the problems introduced by the lower resolution interpretation, we describe a multi-level resolution approach and analyze the requirements for its actual hardware and software integration.

  10. Progressive Vector Quantization on a massively parallel SIMD machine with application to multispectral image data

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.
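
    The progressive, residual-based layering with a lossless final level can be sketched with scalar quantizers standing in for full-search VQ; the layering idea is the same, only the per-level quantizer differs. A minimal stand-alone illustration:

```python
# Progressive quantization sketch: each level quantizes the residual left
# by the previous level at a finer step; the final level stores the
# remaining residual exactly, so full reconstruction is lossless while
# any prefix of levels yields a progressively better preview.

def quantize(vals, step):
    """Uniform scalar quantizer (stand-in for a full-search VQ codebook)."""
    return [round(v / step) * step for v in vals]

def progressive_encode(vals, steps=(1.0, 0.25)):
    levels, residual = [], list(vals)
    for step in steps:
        q = quantize(residual, step)
        levels.append(q)
        residual = [r - qi for r, qi in zip(residual, q)]
    levels.append(residual)            # lossless final level
    return levels

def progressive_decode(levels, upto=None):
    """Sum the first `upto` levels (all levels -> lossless reconstruction)."""
    chosen = levels if upto is None else levels[:upto]
    return [sum(parts) for parts in zip(*chosen)]
```

    Decoding only the first level gives a coarse image whose error is bounded by half the coarsest step; decoding all levels reproduces the input.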

  11. Evidence for parallel consolidation of motion direction and orientation into visual short-term memory.

    PubMed

    Rideaux, Reuben; Apthorp, Deborah; Edwards, Mark

    2015-02-12

    Recent findings have indicated that the capacity to consolidate multiple items into visual short-term memory in parallel varies as a function of the type of information. That is, while color can be consolidated in parallel, evidence suggests that orientation cannot. Here we investigated the capacity to consolidate multiple motion directions in parallel and reexamined this capacity for orientation. This was achieved by determining the shortest exposure duration necessary to consolidate a single item, then examining whether two items, presented simultaneously, could be consolidated in that time. The results show that parallel consolidation of direction and orientation information is possible, and that parallel consolidation of direction appears to be limited to two items. Additionally, we demonstrate the importance of adequate separation between the feature intervals used to define items when attempting to consolidate in parallel, suggesting that when multiple items are consolidated in parallel, as opposed to serially, the resolution of representations suffers. Finally, we used facilitation of spatial attention to show that the deterioration of item resolution occurs during parallel consolidation, as opposed to storage. © 2015 ARVO.

  12. A cable-driven parallel manipulator with force sensing capabilities for high-accuracy tissue endomicroscopy.

    PubMed

    Miyashita, Kiyoteru; Oude Vrielink, Timo; Mylonas, George

    2018-05-01

    Endomicroscopy (EM) provides high resolution, non-invasive histological tissue information and can be used for scanning of large areas of tissue to assess cancerous and pre-cancerous lesions and their margins. However, current robotic solutions do not provide the accuracy and force sensitivity required to perform safe and accurate tissue scanning. A new surgical instrument has been developed that uses a cable-driven parallel mechanism (CDPM) to manipulate an EM probe. End-effector forces are determined by measuring the tensions in each cable. As a result, the instrument allows a contact force to be applied accurately to the tissue, while at the same time offering high resolution and highly repeatable probe movement. Force sensitivities of 0.2 and 0.6 N were found for the 1 and 2 DoF image acquisition methods, respectively. A back-stepping technique can be used when higher force sensitivity is required for the acquisition of high quality tissue images. This method was successful in acquiring images on ex vivo liver tissue. The proposed approach offers high force sensitivity and precise control, which is essential for robotic EM. The technical benefits of the current system can also be used for other surgical robotic applications, including safe autonomous control, haptic feedback and palpation.
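
    The force-sensing principle, recovering the net end-effector force as the sum of each cable's tension along its direction, can be sketched in the plane. This is a simplified statics model for illustration, not the instrument's calibration or control law.

```python
import math

# Planar cable-driven parallel mechanism sketch: the net force on the
# end-effector is the sum of each cable tension times the unit vector
# from the effector towards that cable's anchor point.

def cable_force(effector, anchors, tensions):
    """Return (fx, fy) exerted on the end-effector by the cables."""
    fx = fy = 0.0
    for (ax, ay), t in zip(anchors, tensions):
        dx, dy = ax - effector[0], ay - effector[1]
        L = math.hypot(dx, dy)          # cable length (for the unit vector)
        fx += t * dx / L
        fy += t * dy / L
    return fx, fy
```

    Measuring the tensions therefore lets the controller infer the contact force without a dedicated force sensor at the probe tip, which is the idea the record describes.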

  13. Grammatical Role Parallelism Influences Ambiguous Pronoun Resolution in German

    PubMed Central

    Sauermann, Antje; Gagarina, Natalia

    2017-01-01

    Previous research on pronoun resolution in German revealed that personal pronouns in German tend to refer to subject or topic antecedents; however, these results are based on studies involving subject personal pronouns. We report a visual world eye-tracking study that investigated the impact of word order and grammatical role parallelism on the online comprehension of pronouns by German-speaking adults. The word order of the antecedents and the parallelism of the grammatical role of the anaphor were modified in the study. The results show that parallelism of grammatical role had an early and strong effect on the processing of the pronoun, with subject anaphors being resolved to subject antecedents and object anaphors to object antecedents, regardless of the word order (information status) of the antecedents. Our results demonstrate that personal pronouns may not in general be associated with the subject or topic of a sentence, but that their resolution is modulated by additional factors such as grammatical role. Further studies are required to investigate whether parallelism also affects offline antecedent choices. PMID:28790940

  14. An Integrated Set of Observations to Link Conditions of Great Lakes Nearshore Waters to their Coastal Watersheds

    EPA Science Inventory

    We combine three elements for a comprehensive characterization that links nearshore conditions with coastal watershed disturbance metrics. The three elements are: 1) a shore-parallel, high-resolution nearshore survey using continuous in situ towed sensors; 2) a spatially-balanc...

  15. Parallel MR Imaging with Accelerations Beyond the Number of Receiver Channels Using Real Image Reconstruction.

    PubMed

    Ji, Jim; Wright, Steven

    2005-01-01

    Parallel imaging using multiple phased-array coils and receiver channels has become an effective approach to high-speed magnetic resonance imaging (MRI). To obtain high spatiotemporal resolution, k-space is subsampled and later interpolated using multiple-channel data. Higher subsampling factors result in faster image acquisition. However, the subsampling factors are upper-bounded by the number of parallel channels. Phase constraints have been previously proposed to overcome this limitation with some success. In this paper, we demonstrate that in certain applications it is possible to obtain acceleration factors potentially up to twice the number of channels by using a real image constraint. Data acquisition and processing methods to manipulate and estimate the image phase information are presented for improving image reconstruction. In-vivo brain MRI experimental results show that accelerations up to 6 are feasible with 4-channel data.
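
    The real image constraint rests on a basic Fourier fact: the DFT of a real-valued signal is conjugate-symmetric, F[N-k] = conj(F[k]), so roughly half of k-space determines the rest. That redundancy is where a factor of up to two beyond the channel count can come from. A 1-D stand-alone illustration with a plain DFT (not the paper's reconstruction pipeline):

```python
import cmath

# For a real signal x of length N, F[N-k] = conj(F[k]); sampling only
# k = 0..N//2 is enough to rebuild the whole spectrum and recover x.

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def fill_conjugate_half(F_half, N):
    """Rebuild full k-space from samples k = 0..N//2 of a real signal."""
    F = list(F_half) + [0j] * (N - len(F_half))
    for k in range(N // 2 + 1, N):
        F[k] = F_half[N - k].conjugate()   # conjugate symmetry
    return F

def idft(F):
    N = len(F)
    return [(sum(F[k] * cmath.exp(2j * cmath.pi * k * n / N)
                 for k in range(N)) / N).real for n in range(N)]
```

    In practice the image phase is never exactly zero, which is why the paper pairs the constraint with methods to estimate and remove the phase before enforcing realness.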

  16. SNSPD with parallel nanowires (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ejrnaes, Mikkel; Parlato, Loredana; Gaggero, Alessandro; Mattioli, Francesco; Leoni, Roberto; Pepe, Giampiero; Cristiano, Roberto

    2017-05-01

    Superconducting nanowire single-photon detectors (SNSPDs) have been shown to be promising in applications such as quantum communication and computation, quantum optics, imaging, metrology and sensing. They offer the advantages of a low dark count rate, high efficiency, a broadband response, short time jitter, a high repetition rate, and no need for gated-mode operation. Several SNSPD designs have been proposed in the literature. Here, we discuss the so-called parallel nanowire configurations. They were introduced with the aim of improving SNSPD properties such as detection efficiency, speed, signal-to-noise ratio, or photon number resolution. Although apparently similar, the various parallel designs are not the same: no single design can improve all of these properties at once. In fact, each design presents its own characteristics with specific advantages and drawbacks. In this work, we discuss the various designs, outlining their peculiarities and possible improvements.

  17. First results of high-resolution modeling of Cenozoic subduction orogeny in Andes

    NASA Astrophysics Data System (ADS)

    Liu, S.; Sobolev, S. V.; Babeyko, A. Y.; Krueger, F.; Quinteros, J.; Popov, A.

    2016-12-01

    The Andean Orogeny is the result of upper-plate crustal shortening during the Cenozoic subduction of the Nazca plate beneath the South American plate. With up to 300 km of shortening, the Earth's second-highest plateau, the Altiplano-Puna Plateau, was formed with a pronounced N-S oriented diversity of deformation. Furthermore, tectonic shortening in the Southern Andes was much less intensive and started much later. The mechanism of the shortening and the nature of the N-S variation of its magnitude remain controversial. Previous studies of the Central Andes suggested that they might be related to N-S variations in the strength of the lithosphere and in friction coupling at the slab interface, and are probably influenced by the interaction of the climate and tectonic systems. However, the exact nature of the strength variation was not explored due to the lack of high numerical resolution and 3D numerical models at that time. Here we employ large-scale subduction models with high resolution to reveal and quantify the factors controlling the strength of lithospheric structures and their effect on the magnitude of tectonic shortening in the South American plate between 18°-35°S. These high-resolution models are performed using the highly scalable parallel 3D code LaMEM (Lithosphere and Mantle Evolution Model). This code is based on a finite difference staggered grid approach and employs massively parallel linear and non-linear solvers within the PETSc library for high-performance MPI-based parallelization in geodynamic modeling. Currently, in addition to benchmark models, we are developing high-resolution (< 1 km) 2D subduction models with application to Nazca-South America convergence. In particular, we will present models focusing on the effect of friction reduction in the Paleozoic-Cenozoic sediments above the uppermost crust in the Subandean Ranges.
Future work will be focused on the origin of different styles of deformation and topography evolution in Altiplano-Puna Plateau and Central-Southern Andes through 3D modeling of large-scale interaction of subducting and overriding plates.

  18. Fast Particle Methods for Multiscale Phenomena Simulations

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew

    2000-01-01

    We are developing particle methods oriented at improving computational modeling capabilities of multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle laden and interfacial flows, (iii) molecular dynamics studies of nanoscale droplets and studies of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented in parallel computer architectures. The inherent adaptivity, robustness and efficiency of particle methods makes them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are in: [i] the proper formulation of particle methods at the molecular and continuous level for the discretization of the governing equations, [ii] the resolution of the wide range of time and length scales governing the phenomena under investigation, [iii] the minimization of numerical artifacts that may interfere with the physics of the systems under consideration, [iv] the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics and smooth particle hydrodynamics, exploiting their unifying concepts such as: the solution of the N-body problem in parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend seemingly unrelated areas of research.

  19. Creation of parallel algorithms for the solution of problems of gas dynamics on multi-core computers and GPU

    NASA Astrophysics Data System (ADS)

    Rybakin, B.; Bogatencov, P.; Secrieru, G.; Iliuha, N.

    2013-10-01

    The paper deals with a parallel algorithm for calculations on multiprocessor computers and GPU accelerators. Results for the interaction of shock waves with a low-density bubble and for gas flow under gravitational forces are presented. The algorithm combines the ability to capture shock waves at high resolution, the second-order accuracy of TVD schemes, and the low numerical diffusion of the advection scheme. Many complex problems of continuum mechanics are solved numerically on structured or unstructured grids. Improving the accuracy of the calculations requires a sufficiently fine grid (with a small cell size), which leads to a substantial increase in computation time. For complex problems it is therefore reasonable to use the Adaptive Mesh Refinement (AMR) method, in which the grid is refined only in regions of interest, e.g. where shock waves are generated, or where complex geometry or other such features exist. The computing time is thus greatly reduced. In addition, execution over the resulting sequence of nested, progressively finer grids can be parallelized. The proposed algorithm is based on the AMR method, which can significantly improve the resolution of the difference grid in regions of high interest while accelerating the calculation of multi-dimensional problems. Parallel algorithms for the analyzed difference models were implemented for calculation on graphics processors using CUDA technology [1].
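A minimal illustration of the minmod slope limiter that underlies many second-order TVD schemes of the kind referenced here; this is a generic sketch, not the paper's actual scheme.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: pick the smaller-magnitude slope when the
    signs agree, and zero at extrema, which keeps the scheme TVD
    (no new oscillations are introduced near shocks)."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u):
    """Second-order limited slopes for 1D cell averages u (interior cells)."""
    du_left = u[1:-1] - u[:-2]     # backward differences
    du_right = u[2:] - u[1:-1]     # forward differences
    return minmod(du_left, du_right)

u = np.array([0.0, 1.0, 3.0, 4.0, 4.0])
s = limited_slopes(u)   # slope drops to zero where the profile flattens
```

Reconstructed interface values `u[i] ± s[i]/2` built from these slopes stay within neighboring cell averages, which is the TVD property the abstract refers to.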

  20. Parallel Visualization of Large-Scale Aerodynamics Calculations: A Case Study on the Cray T3E

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Crockett, Thomas W.

    1999-01-01

    This paper reports the performance of a parallel volume rendering algorithm for visualizing a large-scale, unstructured-grid dataset produced by a three-dimensional aerodynamics simulation. This dataset, containing over 18 million tetrahedra, allows us to extend our performance results to a problem which is more than 30 times larger than the one we examined previously. This high resolution dataset also allows us to see fine, three-dimensional features in the flow field. All our tests were performed on the Silicon Graphics Inc. (SGI)/Cray T3E operated by NASA's Goddard Space Flight Center. Using 511 processors, a rendering rate of almost 9 million tetrahedra/second was achieved with a parallel overhead of 26%.

  1. GENESIS: new self-consistent models of exoplanetary spectra

    NASA Astrophysics Data System (ADS)

    Gandhi, Siddharth; Madhusudhan, Nikku

    2017-12-01

    We are entering the era of high-precision and high-resolution spectroscopy of exoplanets. Such observations herald the need for robust self-consistent spectral models of exoplanetary atmospheres to investigate intricate atmospheric processes and to make observable predictions. Spectral models of plane-parallel exoplanetary atmospheres exist, mostly adapted from other astrophysical applications, with different levels of sophistication and accuracy. There is a growing need for a new generation of models custom-built for exoplanets and incorporating state-of-the-art numerical methods and opacities. The present work is a step in this direction. Here we introduce GENESIS, a plane-parallel, self-consistent, line-by-line exoplanetary atmospheric modelling code that includes (a) formal solution of radiative transfer using the Feautrier method, (b) radiative-convective equilibrium with temperature correction based on the Rybicki linearization scheme, (c) latest absorption cross-sections, and (d) internal flux and external irradiation, under the assumptions of hydrostatic equilibrium, local thermodynamic equilibrium and thermochemical equilibrium. We demonstrate the code here with cloud-free models of giant exoplanetary atmospheres over a range of equilibrium temperatures, metallicities, C/O ratios and spanning non-irradiated and irradiated planets, with and without thermal inversions. We provide the community with theoretical emergent spectra and pressure-temperature profiles over this range, along with those for several known hot Jupiters. The code can generate self-consistent spectra at high resolution and has the potential to be integrated into general circulation and non-equilibrium chemistry models as it is optimized for efficiency and convergence. GENESIS paves the way for high-fidelity remote sensing of exoplanetary atmospheres at high resolution with current and upcoming observations.
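The Feautrier method reduces the radiative transfer equation to a tridiagonal system along each ray, solved by forward elimination and back substitution (the Thomas algorithm). The following is a generic sketch of such a solve, not code from GENESIS itself.

```python
import numpy as np

def thomas_solve(lower, diag, upper, rhs):
    """Solve a tridiagonal system with the Thomas algorithm.
    Feautrier-type discretizations of radiative transfer reduce to
    systems of exactly this form at each frequency/angle point.
    lower[0] and upper[-1] are unused."""
    n = len(diag)
    c = np.zeros(n)   # modified upper-diagonal coefficients
    d = np.zeros(n)   # modified right-hand side
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):                       # forward elimination
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / m
    x = np.zeros(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# small symmetric test system with known solution [1, 1, 1]
x = thomas_solve([0.0, 1.0, 1.0], [4.0, 4.0, 4.0],
                 [1.0, 1.0, 0.0], [5.0, 6.0, 5.0])
```

The O(n) cost per ray is what makes line-by-line, level-by-level iteration to radiative-convective equilibrium tractable.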

  2. A depth-of-interaction PET detector using mutual gain-equalized silicon photomultiplier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xi, W.; Weisenberger, A. G.; Dong, H.; Kross, Brian; Lee, S.; McKisson, J.; Zorn, Carl

    We developed a prototype high-resolution, high-efficiency depth-encoding detector for PET applications based on dual-ended readout of a LYSO array with two silicon photomultipliers (SiPMs). Flood images, energy resolution, and depth-of-interaction (DOI) resolution were measured for a LYSO array, 0.7 mm in crystal pitch and 10 mm in thickness, with four unpolished parallel sides. Flood images were obtained in which each individual crystal element in the array is resolved. The energy resolution of the entire array was measured to be 33%, while that of individual crystal pixel elements utilizing the signal from both sides ranged from 23.3% to 27%. By applying a mutual-gain equalization method, a DOI resolution of 2 mm for the crystal array was obtained in the experiments, while simulations indicate that ~1 mm DOI resolution could be achieved. The experimental DOI resolution can be further improved with revised detector support electronics offering better energy resolution. This study provides a detailed detector calibration and DOI response characterization of dual-ended readout SiPM-based PET detectors, which will be important in the design and calibration of a future PET scanner.
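Dual-ended DOI readout typically estimates depth from the ratio of the two SiPM signals. The sketch below assumes a simple linear light-sharing model for illustration; the paper's calibrated response and mutual-gain equalization are more involved.

```python
def doi_from_dual_ended(signal_a, signal_b, crystal_length_mm=10.0):
    """Estimate depth of interaction from the fraction of light
    collected at end A of a dual-ended readout crystal. Assumes the
    collected-light fraction varies linearly with depth, which is an
    illustrative model, not the paper's measured response."""
    ratio = signal_a / (signal_a + signal_b)
    return ratio * crystal_length_mm

doi_from_dual_ended(100.0, 100.0)  # equal signals -> mid-crystal, 5.0 mm
```

The unpolished crystal sides mentioned in the abstract matter here: roughened surfaces increase depth-dependent light loss, which is what gives the ratio its depth sensitivity.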

  3. Towards circuit optogenetics.

    PubMed

    Chen, I-Wen; Papagiakoumou, Eirini; Emiliani, Valentina

    2018-06-01

    Optogenetic neuronal targeting combined with single-photon wide-field illumination has already proved its enormous potential in neuroscience, enabling the optical control of entire neuronal networks and disentangling their role in the control of specific behaviors. However, establishing how a single neuron or a subset of neurons controls a specific behavior, how functionally identical neurons are connected in a particular task, or how behaviors can be modified in real time by the complex wiring diagram of neuronal connections requires more sophisticated approaches capable of driving neuronal circuit activity with single-cell precision and millisecond temporal resolution. This has motivated, on the one hand, the development of flexible optical methods for two-photon (2P) optogenetic activation using scanning illumination, parallel illumination, or a hybrid of the two. On the other hand, it has stimulated the engineering of new opsins with modified spectral characteristics, channel kinetics and spatial distributions of expression, offering the flexibility to choose the appropriate opsin for each application. The need for optical manipulation of multiple targets with millisecond temporal resolution has established three-dimensional (3D) parallel holographic illumination as the technique of choice for optical control of neuronal circuits organized in 3D. Today 3D parallel illumination exists in several complementary variants, each with a different degree of simplicity, light uniformity, temporal precision and axial resolution. In parallel, the possibility of reaching hundreds of targets in 3D volumes has prompted the development of low-repetition-rate amplified laser sources that deliver high peak power while keeping the average power per stimulated cell low. Together, these advances open the way for precise optical manipulation of neuronal circuits with unprecedented precision and flexibility. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. High-resolution dynamic pressure sensor array based on piezo-phototronic effect tuned photoluminescence imaging.

    PubMed

    Peng, Mingzeng; Li, Zhou; Liu, Caihong; Zheng, Qiang; Shi, Xieqing; Song, Ming; Zhang, Yang; Du, Shiyu; Zhai, Junyi; Wang, Zhong Lin

    2015-03-24

    A high-resolution dynamic tactile/pressure display is indispensable for the comprehensive perception of force/mechanical stimulation in applications such as electronic skin, biomechanical imaging/analysis, and personalized signatures. Here, we present a dynamic pressure sensor array based on pressure/strain-tuned photoluminescence imaging, without the need for electricity. Each sensor is a nanopillar consisting of InGaN/GaN multiple quantum wells. Its photoluminescence intensity can be modulated dramatically and linearly by small strains (0-0.15%) owing to the piezo-phototronic effect. The sensor array has a high pixel density of 6350 dpi and an exceptionally small standard deviation of photoluminescence. High-quality tactile/pressure sensing distributions can be recorded in real time by parallel photoluminescence imaging without any cross-talk. The sensor array can be inexpensively fabricated over large areas by semiconductor product lines. The proposed dynamic all-optical pressure imaging, with excellent resolution, high sensitivity, good uniformity, and ultrafast response time, offers a suitable approach for smart sensing and micro/nano-opto-electromechanical systems.

  5. Design and theoretical investigation of a digital x-ray detector with large area and high spatial resolution

    NASA Astrophysics Data System (ADS)

    Gui, Jianbao; Guo, Jinchuan; Yang, Qinlao; Liu, Xin; Niu, Hanben

    2007-05-01

    X-ray phase contrast imaging is a promising new technology, but the requirement of a digital detector with large area, high spatial resolution and high sensitivity presents a substantial challenge to researchers. This paper presents the design and theoretical investigation of an x-ray direct-conversion digital detector based on a mercuric iodide photoconductive layer, with the latent charge image read out by photoinduced discharge (PID). Mercuric iodide has been verified to have good imaging performance (high sensitivity, low dark current, low-voltage operation and good lag characteristics) compared with other competitive materials (α-Se, PbI2, CdTe, CdZnTe) and can easily be deposited on large substrates in polycrystalline form. The use of a line-scanning laser beam and parallel multi-electrode readout gives the system high spatial resolution and a readout speed fast enough for instant general radiography and even rapid-sequence radiography.

  6. 6 x 6-cm fully depleted pn-junction CCD for high-resolution spectroscopy in the 0.1- to 15-keV photon energy range

    NASA Astrophysics Data System (ADS)

    von Zanthier, Christoph; Holl, Peter; Kemmer, Josef; Lechner, Peter; Maier, B.; Soltau, Heike; Stoetter, R.; Braeuninger, Heinrich W.; Dennerl, Konrad; Haberl, Frank; Hartmann, R.; Hartner, Gisela D.; Hippmann, H.; Kastelic, E.; Kink, W.; Krause, N.; Meidinger, Norbert; Metzner, G.; Pfeffermann, Elmar; Popp, M.; Reppin, Claus; Stoetter, Diana; Strueder, Lothar; Truemper, Joachim; Weber, U.; Carathanassis, D.; Engelhard, S.; Gebhart, Th.; Hauff, D.; Lutz, G.; Richter, R. H.; Seitz, H.; Solc, P.; Bihler, Edgar; Boettcher, H.; Kendziorra, Eckhard; Kraemer, J.; Pflueger, Bernhard; Staubert, Ruediger

    1998-04-01

    The concept and performance of the fully depleted pn-junction CCD system, developed for the European XMM and German ABRIXAS satellite missions for soft x-ray imaging and spectroscopy in the 0.1 keV to 15 keV photon range, are presented. The 58 mm x 60 mm pn-CCD array uses pn-junctions for the registers and for the backside instead of MOS registers. This concept naturally allows the detector volume to be fully depleted, making it an efficient detector for photons with energies up to 15 keV. For high detection efficiency in the soft x-ray region down to 100 eV, an ultrathin pn-CCD backside dead layer has been realized. Each pn-CCD channel is equipped with an on-chip JFET amplifier which, in combination with the CAMEX amplifier and multiplexing chip, facilitates parallel readout with a pixel read rate of 3 MHz and an electronic noise floor of ENC < e-. With the complete parallel readout, very fast pn-CCD readout modes can be implemented in the system, allowing high-resolution photon spectroscopy of even the brightest x-ray sources in the sky.

  7. smoothG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barker, Andrew T.; Gelever, Stephan A.; Lee, Chak S.

    2017-12-12

    smoothG is a collection of parallel C++ classes/functions that algebraically construct reduced models of different resolutions from a given high-fidelity graph model. In addition, smoothG provides efficient linear solvers for the reduced models. Beyond pure graph problems, the software finds application in subsurface flow and power grid simulations in which graph Laplacians arise.
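As a point of reference, the graph Laplacian L = D - A that such reduced graph models operate on can be assembled from an edge list as follows; this is a generic construction, not smoothG's API.

```python
import numpy as np

def graph_laplacian(n_vertices, edges, weights=None):
    """Build the dense graph Laplacian L = D - A from an undirected
    edge list, where A is the weighted adjacency matrix and D the
    diagonal degree matrix. A generic sketch for illustration."""
    A = np.zeros((n_vertices, n_vertices))
    if weights is None:
        weights = [1.0] * len(edges)
    for (i, j), w in zip(edges, weights):
        A[i, j] = A[j, i] = w
    D = np.diag(A.sum(axis=1))
    return D - A

# a 3-vertex path graph: 0 -- 1 -- 2
L = graph_laplacian(3, [(0, 1), (1, 2)])
# every row of a graph Laplacian sums to zero
```

Production codes store L sparsely; the dense form here just makes the D - A structure explicit.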

  8. Dumping Low and High Resolution Graphics on the Apple IIe Microcomputer System.

    ERIC Educational Resources Information Center

    Fletcher, Richard K., Jr.; Ruckman, Frank, Jr.

    This paper discusses and outlines procedures for obtaining a hard copy of the graphic output of a microcomputer or "dumping a graphic" using the Apple Dot Matrix Printer with the Apple Parallel Interface Card, and the Imagewriter Printer with the Apple Super Serial Interface Card. Hardware configurations and instructions for high…

  9. Waveform inversion for 3-D earth structure using the Direct Solution Method implemented on vector-parallel supercomputer

    NASA Astrophysics Data System (ADS)

    Hara, Tatsuhiko

    2004-08-01

    We implement the Direct Solution Method (DSM) on a vector-parallel supercomputer and show that it is possible to significantly improve its computational efficiency through parallel computing. We apply the parallel DSM calculation to waveform inversion of long period (250-500 s) surface wave data for three-dimensional (3-D) S-wave velocity structure in the upper and uppermost lower mantle. We use a spherical harmonic expansion to represent lateral variation with the maximum angular degree 16. We find significant low velocities under south Pacific hot spots in the transition zone. This is consistent with other seismological studies conducted in the Superplume project, which suggests deep roots of these hot spots. We also perform simultaneous waveform inversion for 3-D S-wave velocity and Q structure. Since resolution for Q is not good, we develop a new technique in which power spectra are used as data for inversion. We find good correlation between long wavelength patterns of Vs and Q in the transition zone such as high Vs and high Q under the western Pacific.

  10. A Subsystem Test Bed for Chinese Spectral Radioheliograph

    NASA Astrophysics Data System (ADS)

    Zhao, An; Yan, Yihua; Wang, Wei

    2014-11-01

    The Chinese Spectral Radioheliograph (CSRH) is a solar-dedicated radio interferometric array that will produce high-spatial-resolution, high-temporal-resolution, and high-spectral-resolution images of the Sun simultaneously in the decimetre and centimetre wave ranges. Digital processing of the intermediate frequency (IF) signal is an important part of a radio telescope. This paper describes a flexible, high-speed digital down conversion (DDC) system for the CSRH that applies complex mixing, parallel filtering, and decimation algorithms to process the IF signal, and incorporates canonic signed digit coding and a bit-plane method to improve program efficiency. The DDC system is intended to be a subsystem test bed for simulation and testing for CSRH. Software algorithms for simulation and FPGA-based hardware-language algorithms were written that use fewer hardware resources while achieving high performance, such as processing a high-speed (1 GHz) data flow with 10 MHz spectral resolution. An experiment with the test bed is illustrated using geostationary satellite data observed on March 20, 2014. Because the algorithms on the FPGA are easily altered, the data can be recomputed with different digital signal processing algorithms to select the optimum algorithm.
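The core DDC chain (complex mixing to baseband, low-pass filtering, decimation) can be sketched in a few lines. The moving-average filter below is a crude stand-in for the parallel FIR filters of the FPGA design, and all parameters are illustrative.

```python
import numpy as np

def ddc(sig, fs, f_center, decimate_by):
    """Digital down conversion sketch: multiply by a complex
    exponential to shift f_center to DC, low-pass filter with a
    moving average, then keep every decimate_by-th sample."""
    n = np.arange(len(sig))
    mixed = sig * np.exp(-2j * np.pi * f_center * n / fs)
    kernel = np.ones(decimate_by) / decimate_by   # crude low-pass filter
    filtered = np.convolve(mixed, kernel, mode="same")
    return filtered[::decimate_by]

fs = 1000.0                                # sample rate, Hz
t = np.arange(2000) / fs
sig = np.cos(2 * np.pi * 100.0 * t)        # 100 Hz input tone
base = ddc(sig, fs, f_center=100.0, decimate_by=10)
# the tone lands at DC with amplitude ~0.5 (the cosine's positive-
# frequency half); the image at -200 Hz falls on a null of the filter
```

Real designs use polyphase FIR filters so the filtering runs at the decimated rate, which is what makes a 1 GHz input stream tractable on an FPGA.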

  11. Slip-parallel seismic lineations on the Northern Hayward Fault, California

    USGS Publications Warehouse

    Waldhauser, F.; Ellsworth, W.L.; Cole, A.

    1999-01-01

    A high-resolution relative earthquake location procedure is used to image the fine-scale seismicity structure of the northern Hayward fault, California. The seismicity defines a narrow, near-vertical fault zone containing horizontal alignments of hypocenters extending along the fault zone. The lineations persist over the 15-year observation interval, implying the localization of conditions on the fault where brittle failure conditions are met. The horizontal orientation of the lineations parallels the slip direction of the fault, suggesting that they are the result of the smearing of frictionally weak material along the fault plane over thousands of years.

  12. BLIPPED (BLIpped Pure Phase EncoDing) high resolution MRI with low amplitude gradients

    NASA Astrophysics Data System (ADS)

    Xiao, Dan; Balcom, Bruce J.

    2017-12-01

    MRI image resolution is proportional to the maximum k-space value, i.e. the temporal integral of the magnetic field gradient. High resolution imaging usually requires high gradient amplitudes and/or long spatial encoding times. Special gradient hardware is often required for high amplitudes and fast switching. We propose a high resolution imaging sequence that employs low amplitude gradients. This method was inspired by the previously proposed PEPI (π Echo Planar Imaging) sequence, which replaced EPI gradient reversals with multiple RF refocusing pulses. It has been shown that when the refocusing RF pulse is of high quality, i.e. sufficiently close to 180°, the magnetization phase introduced by the spatial encoding magnetic field gradient can be preserved and transferred to the following echo signal without phase rewinding. This phase encoding scheme requires blipped gradients that are identical for each echo, with low and constant amplitude, providing opportunities for high resolution imaging. We now extend the sequence to 3D pure phase encoding with low amplitude gradients. The method is compared with the Hybrid-SESPI (Spin Echo Single Point Imaging) technique to demonstrate the advantages in terms of low gradient duty cycle, compensation of concomitant magnetic field effects and minimal echo spacing, which lead to superior image quality and high resolution. The 3D imaging method was then applied with a parallel plate resonator RF probe, achieving a nominal spatial resolution of 17 μm in one dimension in the 3D image, requiring a maximum gradient amplitude of only 5.8 Gauss/cm.
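The stated relation between resolution and the gradient integral can be checked numerically: k_max = γ̄·G·t and Δx = 1/(2·k_max). The total encoding time below is an assumed figure chosen to reproduce the reported 17 μm at 5.8 Gauss/cm, not a value stated in the abstract.

```python
GAMMA_BAR = 4257.6   # gyromagnetic ratio of 1H divided by 2*pi, in Hz/Gauss

def nominal_resolution_cm(grad_amp_gauss_per_cm, encode_time_s):
    """Nominal spatial resolution from the maximum k-space value:
    k_max = gamma_bar * G * t (cycles/cm), dx = 1 / (2 * k_max)."""
    k_max = GAMMA_BAR * grad_amp_gauss_per_cm * encode_time_s
    return 1.0 / (2.0 * k_max)

# assumed ~11.9 ms cumulative phase-encoding time (illustrative)
dx = nominal_resolution_cm(5.8, 11.9e-3)   # ~1.7e-3 cm, i.e. ~17 um
```

The point of the sequence is visible here: because k_max depends on the time integral of the gradient, many low-amplitude blips accumulated across refocused echoes can reach the same k_max as one strong gradient lobe.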

  13. k-space and q-space: combining ultra-high spatial and angular resolution in diffusion imaging using ZOOPPA at 7 T.

    PubMed

    Heidemann, Robin M; Anwander, Alfred; Feiweier, Thorsten; Knösche, Thomas R; Turner, Robert

    2012-04-02

    There is ongoing debate whether using a higher spatial resolution (sampling k-space) or a higher angular resolution (sampling q-space angles) is the better way to improve diffusion MRI (dMRI) based tractography results in living humans. In both cases, the limiting factor is the signal-to-noise ratio (SNR), due to the restricted acquisition time. One possible way to increase the spatial resolution without sacrificing either SNR or angular resolution is to move to a higher magnetic field strength. Nevertheless, dMRI has not been the preferred application for ultra-high field strength (7 T). This is because single-shot echo-planar imaging (EPI) has been the method of choice for human in vivo dMRI. EPI faces several challenges related to the use of a high resolution at high field strength, for example, distortions and image blurring. These problems can easily compromise the expected SNR gain with field strength. In the current study, we introduce an adapted EPI sequence in conjunction with a combination of ZOOmed imaging and Partially Parallel Acquisition (ZOOPPA). We demonstrate that the method can produce high quality diffusion-weighted images with high spatial and angular resolution at 7 T. We provide examples of in vivo human dMRI with isotropic resolutions of 1 mm and 800 μm. These data sets are particularly suitable for resolving complex and subtle fiber architectures, including fiber crossings in the white matter, anisotropy in the cortex and fibers entering the cortex. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Dual-resolution dose assessments for proton beamlet using MCNPX 2.6.0

    NASA Astrophysics Data System (ADS)

    Chao, T. C.; Wei, S. C.; Wu, S. W.; Tung, C. J.; Tu, S. J.; Cheng, H. W.; Lee, C. C.

    2015-11-01

    The purpose of this study is to assess proton dose distributions in dual-resolution phantoms using MCNPX 2.6.0. The dual-resolution phantom uses higher resolution near the Bragg peak, in areas of large dose gradient, or at heterogeneous interfaces, and lower resolution elsewhere. MCNPX 2.6.0 was installed on Ubuntu 10.04 with MPI for parallel computing. FMesh1 tallies, a tally type designed for voxel phantoms that converts fluence to deposited dose, were used to record the energy deposition. Narrow 60 and 120 MeV proton beams were incident on coarse-, dual-, and fine-resolution phantoms with pure water, water-bone-water, and water-air-water setups. The doses in coarse-resolution phantoms are underestimated owing to the partial volume effect. The dose distributions in the dual- and high-resolution phantoms agreed well with each other, and the dual-resolution phantoms were at least 10 times more efficient than the fine-resolution one. Because the secondary particle range is much longer in air than in water, the dose in the low-density region may be underestimated if the resolution or calculation grid is not small enough.

  15. [Basic examination of an image characteristic in Multivane].

    PubMed

    Ohshita, Tsuyoshi

    2011-01-01

    Image deterioration due to patient movement is a persistent problem in MRI examinations. To address this problem, an imaging procedure named Multivane was developed. Its principle is similar to the periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) method. Multivane provides strong body-motion correction; however, its method of filling k-space differs from the conventional Cartesian method. A basic examination of the image characteristics of Multivane and Cartesian acquisitions was performed with a stationary phantom. The examined items were SNR, CNR, and spatial resolution. As a result, Multivane gave higher SNR, while Cartesian gave higher contrast and spatial resolution. It is important to recognize these features when using Multivane.

  16. 30-lens interferometer for high energy x-rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyubomirskiy, M., E-mail: lyubomir@esrf.fr; Snigireva, I., E-mail: irina@esrf.fr; Vaughan, G.

    2016-07-27

    We report a hard x-ray multilens interferometer consisting of 30 parallel compound refractive lenses (CRLs). Under coherent illumination each CRL creates a diffraction-limited focal spot, a secondary source. The overlap of coherent beams from these sources results in an interference pattern with a rich longitudinal structure, in accordance with the Talbot imaging formalism. The proposed interferometer was experimentally tested at the ESRF ID11 beamline at photon energies of 32 keV and 65 keV. The fundamental and fractional Talbot images were recorded with a high-resolution CCD camera. An effective source size on the order of 15 µm was determined from the first Talbot image, proving that the multilens interferometer can be used as a high-resolution beam diagnostic tool.
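The longitudinal structure follows the Talbot formalism, with fundamental Talbot distance z_T = 2p²/λ for an array of period p. A small sketch, where the 50 μm pitch is an assumed illustrative value, not the interferometer's actual lens spacing:

```python
HC_KEV_ANGSTROM = 12.3984   # h*c in keV·Angstrom

def talbot_distance_m(pitch_m, photon_energy_kev):
    """Fundamental Talbot distance z_T = 2 p^2 / lambda for a
    periodic array of period p, with the x-ray wavelength computed
    from the photon energy."""
    wavelength_m = HC_KEV_ANGSTROM / photon_energy_kev * 1e-10
    return 2.0 * pitch_m**2 / wavelength_m

z32 = talbot_distance_m(50e-6, 32.0)   # ~129 m at 32 keV
```

The strong energy dependence (z_T scales linearly with photon energy) is why the 32 keV and 65 keV measurements probe different longitudinal positions of the pattern.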

  17. Characteristics of a high pressure gas proportional counter filled with xenon

    NASA Technical Reports Server (NTRS)

    Sakurai, H.; Ramsey, B. D.

    1991-01-01

    The characteristics of a conventional cylindrical geometry proportional counter filled with high pressure xenon gas up to 10 atm. were fundamentally investigated for use as a detector in hard X-ray astronomy. With a 2 percent methane gas mixture the energy resolutions at 10 atm. were 9.8 percent and 7.3 percent for 22 keV and 60 keV X-rays, respectively. From calculations of the Townsend ionization coefficient, it is shown that proportional counters at high pressure operate at weaker reduced electric field than low pressure counters. The characteristics of a parallel grid proportional counter at low pressure showed similar pressure dependence. It is suggested that this is the fundamental reason for the degradation of resolution observed with increasing pressure.

  18. Massively parallel unsupervised single-particle cryo-EM data clustering via statistical manifold learning

    PubMed Central

    Wu, Jiayi; Ma, Yong-Bei; Congdon, Charles; Brett, Bevin; Chen, Shuobing; Xu, Yaofang; Ouyang, Qi

    2017-01-01

    Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes with decreasing signal-to-noise-ratio (SNR) in the image data, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization. PMID:28786986

  19. Massively parallel unsupervised single-particle cryo-EM data clustering via statistical manifold learning.

    PubMed

    Wu, Jiayi; Ma, Yong-Bei; Congdon, Charles; Brett, Bevin; Chen, Shuobing; Xu, Yaofang; Ouyang, Qi; Mao, Youdong

    2017-01-01

    Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes with decreasing signal-to-noise-ratio (SNR) in the image data, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization.

  20. 17 CFR 12.24 - Parallel proceedings.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...) Definition. For purposes of this section, a parallel proceeding shall include: (1) An arbitration proceeding... the receivership includes the resolution of claims made by customers; or (3) A petition filed under... any of the foregoing with knowledge of a parallel proceeding shall promptly notify the Commission, by...

  1. Measurements of Thermal Conductivity of Superfluid Helium Near its Transition Temperature T(sub lambda) in a 2D Confinement

    NASA Technical Reports Server (NTRS)

    Jerebets, Sergei

    2004-01-01

    We report our recent experiments on thermal conductivity measurements of superfluid He-4 near its phase transition in a two-dimensional (2D) confinement under saturated vapor pressure. The 2D confinement is created by 2-mm- and 1-mm-thick glass capillary plates, consisting of densely populated parallel microchannels with cross-sections of 5 x 50 and 1 x 10 microns, respectively. A heat current (2 < Q < 400 nW/sq cm) was applied along the long direction of the channels. High-resolution measurements were provided by DC SQUID-based high-resolution paramagnetic salt thermometers (HRTs) with nanokelvin resolution. We may find that the thermal conductivity of confined helium is finite at the bulk superfluid transition temperature. Our 2D results will be compared with those in bulk and in 1D confinement.

  2. Megavolt parallel potentials arising from double-layer streams in the Earth's outer radiation belt.

    PubMed

    Mozer, F S; Bale, S D; Bonnell, J W; Chaston, C C; Roth, I; Wygant, J

    2013-12-06

    Huge numbers of double layers carrying electric fields parallel to the local magnetic field line have been observed on the Van Allen Probes in connection with in situ relativistic electron acceleration in the Earth's outer radiation belt. For one case with adequate high-time-resolution data, 7000 double layers were observed in an interval of 1 min to produce a 230,000 V net parallel potential drop crossing the spacecraft. Lower resolution data show that this event lasted for 6 min and that more than 1,000,000 volts of net parallel potential crossed the spacecraft during this time. A double layer traverses the length of a magnetic field line in about 15 s, and the orbital motion of the spacecraft perpendicular to the magnetic field was about 700 km during this 6 min interval. Thus, the instantaneous parallel potential along a single magnetic field line was on the order of tens of kilovolts. Electrons on the field line might experience many such potential steps in their lifetimes, accelerating them to energies where they serve as the seed population for relativistic acceleration by coherent, large-amplitude whistler mode waves. Because the double-layer speed of 3100 km/s is on the order of the electron acoustic speed (and not the ion acoustic speed) of a 25 eV plasma, the double layers may result from a new electron acoustic mode. Acceleration mechanisms involving double layers may also be important in planetary radiation belts such as those of Jupiter, Saturn, Uranus, and Neptune, in the solar corona during flares, and in astrophysical objects.

  3. Parallel Implementation of a Frozen Flow Based Wavefront Reconstructor

    NASA Astrophysics Data System (ADS)

    Nagy, J.; Kelly, K.

    2013-09-01

    Obtaining high resolution images of space objects from ground based telescopes is challenging, often requiring the use of a multi-frame blind deconvolution (MFBD) algorithm to remove blur caused by atmospheric turbulence. In order for an MFBD algorithm to be effective, it is necessary to obtain a good initial estimate of the wavefront phase. Although wavefront sensors work well in low turbulence situations, they are less effective in high turbulence, such as when imaging in daylight, or when imaging objects that are close to the Earth's horizon. One promising approach, which has been shown to work very well in high turbulence settings, uses a frozen flow assumption on the atmosphere to capture the inherent temporal correlations present in consecutive frames of wavefront data. Exploiting these correlations can lead to more accurate estimation of the wavefront phase, and the associated PSF, which leads to more effective MFBD algorithms. However, with the current serial implementation, the approach can be prohibitively expensive in situations when it is necessary to use a large number of frames. In this poster we describe a parallel implementation that overcomes this constraint. The parallel implementation exploits sparse matrix computations, and uses the Trilinos package developed at Sandia National Laboratories. Trilinos provides a variety of core mathematical software for parallel architectures that have been designed using high quality software engineering practices. The package is open source, and portable to a variety of high-performance computing architectures.
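A minimal sketch of the frozen flow assumption itself (not the authors' reconstructor): the turbulent phase screen at the next frame is, to first order, the current screen translated by the wind, which is what couples consecutive frames of wavefront data. The screen size, wind vector, and wraparound boundary are invented for illustration:

```python
def shift_screen(screen, dx, dy):
    """Translate a square 2D phase screen by (dx, dy) pixels with wraparound."""
    n = len(screen)
    return [[screen[(r - dy) % n][(c - dx) % n] for c in range(n)]
            for r in range(n)]

# A toy 4x4 "phase screen" and a wind of one pixel per frame to the right:
screen_t0 = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
screen_t1_pred = shift_screen(screen_t0, dx=1, dy=0)

# Under frozen flow, the value that was at column 0 reappears at column 1:
assert screen_t1_pred[0][1] == screen_t0[0][0]
```

In the actual method this translation is expressed as a sparse matrix acting on the stacked wavefront frames, which is what makes the parallel sparse-matrix machinery applicable.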

  4. The role of parallelism in the real-time processing of anaphora.

    PubMed

    Poirier, Josée; Walenski, Matthew; Shapiro, Lewis P

    2012-06-01

    Parallelism effects refer to the facilitated processing of a target structure when it follows a similar, parallel structure. In coordination, a parallelism-related conjunction triggers the expectation that a second conjunct with the same structure as the first conjunct should occur. It has been proposed that parallelism effects reflect the use of the first structure as a template that guides the processing of the second. In this study, we examined the role of parallelism in real-time anaphora resolution by charting activation patterns in coordinated constructions containing anaphora, Verb-Phrase Ellipsis (VPE) and Noun-Phrase Traces (NP-traces). Specifically, we hypothesised that an expectation of parallelism would incite the parser to assume a structure similar to the first conjunct in the second, anaphora-containing conjunct. The speculation of a similar structure would result in early postulation of covert anaphora. Experiment 1 confirms that following a parallelism-related conjunction, first-conjunct material is activated in the second conjunct. Experiment 2 reveals that an NP-trace in the second conjunct is posited immediately where licensed, which is earlier than previously reported in the literature. In light of our findings, we propose an intricate relation between structural expectations and anaphor resolution.

  5. The role of parallelism in the real-time processing of anaphora

    PubMed Central

    Poirier, Josée; Walenski, Matthew; Shapiro, Lewis P.

    2012-01-01

    Parallelism effects refer to the facilitated processing of a target structure when it follows a similar, parallel structure. In coordination, a parallelism-related conjunction triggers the expectation that a second conjunct with the same structure as the first conjunct should occur. It has been proposed that parallelism effects reflect the use of the first structure as a template that guides the processing of the second. In this study, we examined the role of parallelism in real-time anaphora resolution by charting activation patterns in coordinated constructions containing anaphora, Verb-Phrase Ellipsis (VPE) and Noun-Phrase Traces (NP-traces). Specifically, we hypothesised that an expectation of parallelism would incite the parser to assume a structure similar to the first conjunct in the second, anaphora-containing conjunct. The speculation of a similar structure would result in early postulation of covert anaphora. Experiment 1 confirms that following a parallelism-related conjunction, first-conjunct material is activated in the second conjunct. Experiment 2 reveals that an NP-trace in the second conjunct is posited immediately where licensed, which is earlier than previously reported in the literature. In light of our findings, we propose an intricate relation between structural expectations and anaphor resolution. PMID:23741080

  6. A seismic reflection image for the base of a tectonic plate.

    PubMed

    Stern, T A; Henrys, S A; Okaya, D; Louie, J N; Savage, M K; Lamb, S; Sato, H; Sutherland, R; Iwasaki, T

    2015-02-05

    Plate tectonics successfully describes the surface of Earth as a mosaic of moving lithospheric plates. But it is not clear what happens at the base of the plates, the lithosphere-asthenosphere boundary (LAB). The LAB has been well imaged with converted teleseismic waves, whose 10-40-kilometre wavelength controls the structural resolution. Here we use explosion-generated seismic waves (of about 0.5-kilometre wavelength) to form a high-resolution image for the base of an oceanic plate that is subducting beneath North Island, New Zealand. Our 80-kilometre-wide image is based on P-wave reflections and shows an approximately 15° dipping, abrupt, seismic wave-speed transition (less than 1 kilometre thick) at a depth of about 100 kilometres. The boundary is parallel to the top of the plate and seismic attributes indicate a P-wave speed decrease of at least 8 ± 3 per cent across it. A parallel reflection event approximately 10 kilometres deeper shows that the decrease in P-wave speed is confined to a channel at the base of the plate, which we interpret as a sheared zone of ponded partial melts or volatiles. This is independent, high-resolution evidence for a low-viscosity channel at the LAB that decouples plates from mantle flow beneath, and allows plate tectonics to work.

  7. A three-wavelength multi-channel brain functional imager based on digital lock-in photon-counting technique

    NASA Astrophysics Data System (ADS)

    Ding, Xuemei; Wang, Bingyuan; Liu, Dongyuan; Zhang, Yao; He, Jie; Zhao, Huijuan; Gao, Feng

    2018-02-01

    During the past two decades there has been a dramatic rise in the use of functional near-infrared spectroscopy (fNIRS) as a neuroimaging technique in cognitive neuroscience research. Diffuse optical tomography (DOT) and optical topography (OT) can be employed as the optical imaging techniques for brain activity investigation. However, most current imagers with analogue detection are limited in sensitivity and dynamic range. Although photon-counting detection can significantly improve detection sensitivity, the intrinsic nature of sequential excitations reduces temporal resolution. To improve temporal resolution, sensitivity and dynamic range, we develop a multi-channel continuous-wave (CW) system for brain functional imaging based on a novel digital lock-in photon-counting technique. The system consists of 60 light-emitting diode (LED) sources at three wavelengths of 660 nm, 780 nm and 830 nm, modulated by current-stabilized square-wave signals at different frequencies, and 12 photomultiplier tubes (PMTs) operated with the lock-in photon-counting technique. This design combines the ultra-high sensitivity of the photon-counting technique with the parallelism of the digital lock-in technique. We can therefore acquire the diffused light intensity for all the source-detector pairs (SD-pairs) in parallel. The performance assessments of the system are conducted using phantom experiments, and demonstrate its excellent measurement linearity, negligible inter-channel crosstalk, strong noise robustness and high temporal resolution.
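A hedged sketch of the frequency-multiplexing idea behind the digital lock-in: two sources square-wave modulated at different frequencies are detected together, and correlating against one source's ±1 reference wave isolates that source's contribution. The frequencies, amplitudes, and background level here are made-up illustration values, not the instrument's parameters:

```python
import math

def square(f, t):
    """±1 reference square wave at frequency f (Hz)."""
    return 1.0 if math.sin(2 * math.pi * f * t) > 0 else -1.0

fs, N = 10_000.0, 10_000            # sample rate (Hz) and one second of samples
f1, f2 = 100.0, 200.0               # modulation frequencies of two sources
a1, a2 = 3.0, 5.0                   # photon-count amplitudes of the sources
t = [(n + 0.5) / fs for n in range(N)]   # mid-sample times avoid zero crossings

# Detected counts: DC background plus both on/off-modulated sources together
sig = [10.0 + a1 * (square(f1, ti) + 1) / 2 + a2 * (square(f2, ti) + 1) / 2
       for ti in t]

# Digital lock-in at f1: correlate with the f1 reference wave, then rescale
demod1 = 2 * sum(s * square(f1, ti) for s, ti in zip(sig, t)) / N
print(demod1)   # 3.0 — source 1 recovered despite source 2 and the background
```

The same correlation at f2 would recover 5.0, which is how one detector channel can serve many simultaneously modulated sources.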

  8. Parallel trends in cortical gray and white matter architecture and connections in primates allow fine study of pathways in humans and reveal network disruptions in autism

    PubMed Central

    García-Cabezas, Miguel Ángel; Barbas, Helen

    2018-01-01

    Noninvasive imaging and tractography methods have yielded information on broad communication networks but lack resolution to delineate intralaminar cortical and subcortical pathways in humans. An important unanswered question is whether we can use the wealth of precise information on pathways from monkeys to understand connections in humans. We addressed this question within a theoretical framework of systematic cortical variation and used identical high-resolution methods to compare the architecture of cortical gray matter and the white matter beneath, which gives rise to short- and long-distance pathways in humans and rhesus monkeys. We used the prefrontal cortex as a model system because of its key role in attention, emotions, and executive function, which are processes often affected in brain diseases. We found striking parallels and consistent trends in the gray and white matter architecture in humans and monkeys and between the architecture and actual connections mapped with neural tracers in rhesus monkeys and, by extension, in humans. Using the novel architectonic portrait as a base, we found significant changes in pathways between nearby prefrontal and distant areas in autism. Our findings reveal that a theoretical framework allows study of normal neural communication in humans at high resolution and specific disruptions in diverse psychiatric and neurodegenerative diseases. PMID:29401206

  9. ADHydro: A Parallel Implementation of a Large-scale High-Resolution Multi-Physics Distributed Water Resources Model Using the Charm++ Run Time System

    NASA Astrophysics Data System (ADS)

    Steinke, R. C.; Ogden, F. L.; Lai, W.; Moreno, H. A.; Pureza, L. G.

    2014-12-01

    Physics-based watershed models are useful tools for hydrologic studies, water resources management and economic analyses in the contexts of climate, land-use, and water-use changes. This poster presents a parallel implementation of a quasi 3-dimensional, physics-based, high-resolution, distributed water resources model suitable for simulating large watersheds in a massively parallel computing environment. Developing this model is one of the objectives of the NSF EPSCoR RII Track II CI-WATER project, which is joint between the Wyoming and Utah EPSCoR jurisdictions. The model, which we call ADHydro, is aimed at simulating important processes in the Rocky Mountain west, including: rainfall and infiltration, snowfall and snowmelt in complex terrain, vegetation and evapotranspiration, soil heat flux and freezing, overland flow, channel flow, groundwater flow, water management and irrigation. Model forcing is provided by the Weather Research and Forecasting (WRF) model, and ADHydro is coupled with the NOAH-MP land-surface scheme for calculating fluxes between the land and atmosphere. The ADHydro implementation uses the Charm++ parallel run time system. Charm++ is based on location-transparent message passing between migratable C++ objects. Each object represents an entity in the model such as a mesh element. These objects can be migrated between processors or serialized to disk, allowing the Charm++ system to automatically provide capabilities such as load balancing and checkpointing. Objects interact with each other by passing messages that the Charm++ system routes to the correct destination object regardless of its current location. This poster discusses the algorithms, communication patterns, and caching strategies used to implement ADHydro with Charm++. The ADHydro model code will be released to the hydrologic community in late 2014.
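A toy Python analogue (Charm++ itself is a C++ system with its own interface files) of the message-driven model described above: each mesh element interacts with its neighbors only through messages routed by a small scheduler, so the sender never needs to know where the receiver lives. All class and method names are invented for illustration:

```python
class MeshElement:
    def __init__(self, eid, water):
        self.eid, self.water, self.inbox = eid, water, []

    def send_flux(self, scheduler, neighbor_id, amount):
        self.water -= amount
        scheduler.route(neighbor_id, amount)   # location-transparent send

    def drain_inbox(self):
        self.water += sum(self.inbox)
        self.inbox.clear()

class Scheduler:
    """Routes messages to the right object wherever it currently lives."""
    def __init__(self, elements):
        self.elements = {e.eid: e for e in elements}
    def route(self, eid, msg):
        self.elements[eid].inbox.append(msg)

elems = [MeshElement(0, 10.0), MeshElement(1, 0.0)]
sched = Scheduler(elems)
elems[0].send_flux(sched, neighbor_id=1, amount=4.0)   # overland flow 0 -> 1
for e in elems:
    e.drain_inbox()

assert elems[0].water == 6.0 and elems[1].water == 4.0
assert sum(e.water for e in elems) == 10.0             # mass is conserved
```

Because all interaction goes through routed messages, the runtime is free to migrate objects between processors for load balancing, which is the property the abstract highlights.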

  10. Parallelization of a Fully-Distributed Hydrologic Model using Sub-basin Partitioning

    NASA Astrophysics Data System (ADS)

    Vivoni, E. R.; Mniszewski, S.; Fasel, P.; Springer, E.; Ivanov, V. Y.; Bras, R. L.

    2005-12-01

    A primary obstacle towards advances in watershed simulations has been the limited computational capacity available to most models. The growing trend of model complexity, data availability and physical representation has not been matched by adequate developments in computational efficiency. This situation has created a serious bottleneck which limits existing distributed hydrologic models to small domains and short simulations. In this study, we present novel developments in the parallelization of a fully-distributed hydrologic model. Our work is based on the TIN-based Real-time Integrated Basin Simulator (tRIBS), which provides continuous hydrologic simulation using a multiple resolution representation of complex terrain based on a triangulated irregular network (TIN). While the use of TINs reduces computational demand, the sequential version of the model is currently limited over large basins (>10,000 km2) and long simulation periods (>1 year). To address this, a parallel MPI-based version of the tRIBS model has been implemented and tested using high performance computing resources at Los Alamos National Laboratory. Our approach utilizes domain decomposition based on sub-basin partitioning of the watershed. A stream reach graph based on the channel network structure is used to guide the sub-basin partitioning. Individual sub-basins or sub-graphs of sub-basins are assigned to separate processors to carry out internal hydrologic computations (e.g. rainfall-runoff transformation). Routed streamflow from each sub-basin forms the major hydrologic data exchange along the stream reach graph. Individual sub-basins also share subsurface hydrologic fluxes across adjacent boundaries. We demonstrate how the sub-basin partitioning provides computational feasibility and efficiency for a set of test watersheds in northeastern Oklahoma. 
We compare the performance of the sequential and parallelized versions to highlight the efficiency gained as the number of processors increases. We also discuss how the coupled use of TINs and parallel processing can lead to feasible long-term simulations in regional watersheds while preserving basin properties at high-resolution.
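A hedged sketch of the partitioning idea in the abstract above: sub-basins form a graph following the channel network, each is assigned to a processor to compute its own runoff, and routed streamflow is the main quantity exchanged downstream. The topology, runoff values, and round-robin assignment are invented for illustration (the paper guides its partitioning with the stream reach graph, not simple round-robin):

```python
downstream = {"A": "C", "B": "C", "C": "OUTLET"}   # toy stream reach graph
local_runoff = {"A": 2.0, "B": 3.0, "C": 1.0}      # computed independently

def assign(basins, n_procs):
    """Naive round-robin assignment of sub-basins to processor ranks."""
    return {b: i % n_procs for i, b in enumerate(sorted(basins))}

ranks = assign(downstream, n_procs=2)
print(ranks)   # {'A': 0, 'B': 1, 'C': 0}

def outflow(basin):
    """Route flow down the graph: outflow = local runoff + upstream inflow."""
    inflow = sum(outflow(u) for u, d in downstream.items() if d == basin)
    return local_runoff[basin] + inflow

print(outflow("C"))   # 6.0 — all water reaching the outlet sub-basin
```

Independent sub-basins ("A" and "B") can run concurrently on different ranks; only the routed outflows cross processor boundaries.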

  11. Parallelization of interpolation, solar radiation and water flow simulation modules in GRASS GIS using OpenMP

    NASA Astrophysics Data System (ADS)

    Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav

    2017-10-01

    In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers. These include the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and parallelization approaches used in the modules. Our approach includes the analysis of the module's functionality, identification of source code segments suitable for parallelization and proper application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using the airborne laser scanning data representing land surface in the test area and derived high-resolution digital terrain model grids. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speeds on a standard multi-core computer while maintaining the accuracy of results in comparison to the output from original modules. The presented parallelization approach showed the simplicity and efficiency of the parallelization of open-source GRASS GIS modules using OpenMP, leading to an increased performance of this geospatial software on standard multi-core computers.
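OpenMP is a C/C++/Fortran API, so the following is only a Python analogue of the pattern the paper describes: identify the loop that dominates runtime and farm its independent iterations out to worker threads, while requiring bit-identical results. The per-row computation is a made-up stand-in for per-cell work such as solar radiation on a DTM row:

```python
from concurrent.futures import ThreadPoolExecutor

def process_row(row):
    # stand-in for independent per-cell work on one terrain-grid row
    return [cell * 2.0 for cell in row]

grid = [[float(r * 10 + c) for c in range(10)] for r in range(100)]

# serial version
serial = [process_row(row) for row in grid]

# parallel version: rows are independent, so iterations map onto threads
# (in C, "#pragma omp parallel for" over the row loop plays this role)
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(process_row, grid))

assert parallel == serial   # identical results, as the paper requires
```

Note that CPython threads do not speed up pure-Python arithmetic because of the GIL; the point here is the structure of the transformation, which in compiled code yields the speed-ups the paper reports.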

  12. Recycling isoelectric focusing with computer controlled data acquisition system. [for high resolution electrophoretic separation and purification of biomolecules

    NASA Technical Reports Server (NTRS)

    Egen, N. B.; Twitty, G. E.; Bier, M.

    1979-01-01

    Isoelectric focusing is a high-resolution technique for separating and purifying large peptides, proteins, and other biomolecules. The apparatus described in the present paper constitutes a new approach to fluid stabilization and increased throughput. Stabilization is achieved by flowing the process fluid uniformly through an array of closely spaced filter elements oriented parallel both to the electrodes and the direction of the flow. This seems to overcome the major difficulties of parabolic flow and electroosmosis at the walls, while limiting the convection to chamber compartments defined by adjacent spacers. Increased throughput is achieved by recirculating the process fluid through external heat exchange reservoirs, where the Joule heat is dissipated.

  13. Use of high-resolution ground-penetrating radar in kimberlite delineation

    USGS Publications Warehouse

    Kruger, J.M.; Martinez, A.; Berendsen, P.

    1997-01-01

    High-resolution ground-penetrating radar (GPR) was used to image the near-surface extent of two exposed Late Cretaceous kimberlites intruded into lower Permian limestone and dolomite host rocks in northeast Kansas. Six parallel GPR profiles identify the margin of the Randolph 1 kimberlite by the up-bending and termination of limestone reflectors. Five radially-intersecting GPR profiles identify the elliptical margin of the Randolph 2 kimberlite by the termination of dolomite reflectors near or below the kimberlite's mushroom-shaped cap. These results suggest GPR may augment magnetic methods for the delineation of kimberlites or other forceful intrusions in a layered host rock where thick, conductive soil or shale is not present at the surface.

  14. A High-Resolution Capability for Large-Eddy Simulation of Jet Flows

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2011-01-01

    A large-eddy simulation (LES) code that utilizes high-resolution numerical schemes is described and applied to a compressible jet flow. The code is written in a general manner such that the accuracy/resolution of the simulation can be selected by the user. Time discretization is performed using a family of low-dispersion Runge-Kutta schemes, selectable from first- to fourth-order. Spatial discretization is performed using central differencing schemes. Both standard schemes, second- to twelfth-order (3- to 13-point stencils), and Dispersion Relation Preserving schemes with 7- to 13-point stencils are available. The code is written in Fortran 90 and uses hybrid MPI/OpenMP parallelization. The code is applied to the simulation of a Mach 0.9 jet flow. Four-stage third-order Runge-Kutta time stepping and the 13-point DRP spatial discretization scheme of Bogey and Bailly are used. The high-resolution numerics allow the use of relatively sparse grids. Three levels of grid resolution are examined: 3.5, 6.5, and 9.2 million points. Mean flow, first-order turbulent statistics and turbulent spectra are reported. Good agreement with experimental data for mean flow and first-order turbulent statistics is shown.
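To illustrate why wider central-difference stencils permit sparser grids, here are the standard 3-point (2nd-order) and 5-point (4th-order) first-derivative stencils applied to sin(x), where the exact derivative is cos(x). This shows the generic accuracy ordering only; the paper's DRP coefficients are tuned for wave resolution rather than formal order:

```python
import math

def d1_2nd(f, x, h):
    """3-point, 2nd-order central difference for f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_4th(f, x, h):
    """5-point, 4th-order central difference for f'(x)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

h, exact = 0.1, math.cos(1.0)
err2 = abs(d1_2nd(math.sin, 1.0, h) - exact)
err4 = abs(d1_4th(math.sin, 1.0, h) - exact)
print(err2 > err4)   # True: the wider stencil resolves the derivative better
```

At the same grid spacing the 5-point stencil is roughly three orders of magnitude more accurate here, which is the trade the code exploits to keep grids relatively sparse.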

  15. High-resolution ionization detector and array of such detectors

    DOEpatents

    McGregor, Douglas S [Ypsilanti, MI; Rojeski, Ronald A [Pleasanton, CA

    2001-01-16

    A high-resolution ionization detector and an array of such detectors are described which utilize a reference pattern of conductive or semiconductive material to form interaction, pervious and measurement regions in an ionization substrate of, for example, CdZnTe material. The ionization detector is a room temperature semiconductor radiation detector. Various geometries of such a detector and an array of such detectors produce room temperature operated gamma ray spectrometers with relatively high resolution. For example, a 1 cm³ detector is capable of measuring ¹³⁷Cs 662 keV gamma rays with room temperature energy resolution approaching 2% at FWHM. Two major types of such detectors include a parallel strip semiconductor Frisch grid detector and the geometrically weighted trapezoid prism semiconductor Frisch grid detector. The geometrically weighted detector records room temperature (24 °C) energy resolutions of 2.68% FWHM for ¹³⁷Cs 662 keV gamma rays and 2.45% FWHM for ⁶⁰Co 1.332 MeV gamma rays. The detectors perform well without any electronic pulse rejection, correction or compensation techniques. The devices operate at room temperature with simple commercially available NIM bin electronics and do not require special preamplifiers or cooling stages for good spectroscopic results.
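The energy resolutions above are quoted as percent FWHM of the photopeak energy; converting them to absolute peak widths is a one-line calculation, shown here for the two lines the record cites:

```python
def fwhm_keV(percent, peak_keV):
    """Absolute FWHM (keV) from a percent-of-peak-energy resolution figure."""
    return percent / 100.0 * peak_keV

print(round(fwhm_keV(2.68, 662), 1))    # 17.7 keV for the 137Cs 662 keV line
print(round(fwhm_keV(2.45, 1332), 1))   # 32.6 keV for the 60Co 1.332 MeV line
```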

  16. Parallel, confocal, and complete spectrum imager for fluorescent detection of high-density microarray

    NASA Astrophysics Data System (ADS)

    Bogdanov, Valery L.; Boyce-Jacino, Michael

    1999-05-01

    Confined arrays of biochemical probes deposited on a solid support surface (analytical microarrays or 'chips') provide an opportunity to analyze multiple reactions simultaneously. Microarrays are increasingly used in genetics, medicine and environmental scanning as research and analytical instruments. The power of microarray technology comes from its parallelism, which grows with array miniaturization, minimization of reagent volume per reaction site and reaction multiplexing. An optical detector of microarray signals should combine high sensitivity with high spatial and spectral resolution; additionally, low cost and a high processing rate are needed to transfer microarray technology into biomedical practice. We designed an imager that provides confocal, complete-spectrum detection of an entire fluorescently labeled microarray in parallel. The imager uses a microlens array, a non-slit spectral decomposer, and a highly sensitive detector (cooled CCD). Two imaging channels provide simultaneous detection of the localization, integrated intensity and spectral intensity of each reaction site in the microarray. Dimensional matching between the microarray and the imager's optics eliminates all moving parts in the instrument, enabling highly informative, fast and low-cost microarray detection. We report the theory of confocal hyperspectral imaging with microlens arrays and experimental data for the implementation of the developed imager to detect fluorescently labeled microarrays with a density of approximately 10³ sites per cm².

  17. Next-Generation Climate Modeling Science Challenges for Simulation, Workflow and Analysis Systems

    NASA Astrophysics Data System (ADS)

    Koch, D. M.; Anantharaj, V. G.; Bader, D. C.; Krishnan, H.; Leung, L. R.; Ringler, T.; Taylor, M.; Wehner, M. F.; Williams, D. N.

    2016-12-01

    We will present two examples of current and future high-resolution climate-modeling research that are challenging existing simulation run-time I/O, model-data movement, storage and publishing, and analysis systems. In each case, we will consider lessons learned as current workflow systems are broken by these large-data science challenges, as well as strategies to repair or rebuild the systems. First we consider the science and workflow challenges posed by the CMIP6 multi-model HighResMIP, involving around a dozen modeling groups performing quarter-degree simulations, in 3-member ensembles for 100 years, with high-frequency (1-6 hourly) diagnostics, which is expected to generate over 4PB of data. An example of science derived from these experiments will be to study how resolution affects the ability of models to capture extreme events such as hurricanes or atmospheric rivers. Expected methods to transfer (using parallel Globus) and analyze (using parallel "TECA" software tools) HighResMIP data for such feature-tracking by the DOE CASCADE project will be presented. A second example will be from the Accelerated Climate Modeling for Energy (ACME) project, which is currently addressing challenges involving multiple century-scale coupled high-resolution (quarter-degree) climate simulations on DOE Leadership Class computers. ACME is anticipating production of over 5PB of data during the next 2 years of simulations, in order to investigate the drivers of water cycle changes, sea-level rise, and carbon cycle evolution. The ACME workflow, from simulation to data transfer, storage, analysis and publication, will be presented. Current and planned methods to accelerate the workflow, including implementing run-time diagnostics, and implementing server-side analysis to avoid moving large datasets, will be presented.

  18. Water Selective Imaging and bSSFP Banding Artifact Correction in Humans and Small Animals at 3T and 7T, Respectively

    PubMed Central

    Ribot, Emeline J.; Wecker, Didier; Trotier, Aurélien J.; Dallaudière, Benjamin; Lefrançois, William; Thiaudière, Eric; Franconi, Jean-Michel; Miraux, Sylvain

    2015-01-01

    Introduction The purpose of this paper is to develop an easy method to generate 3D balanced Steady State Free Precession (bSSFP) images free of both fat signal and banding artifacts at high magnetic field. Methods In order to suppress fat signal and bSSFP banding artifacts, two or four images were acquired with the excitation frequency of the water-selective binomial radiofrequency pulse set on resonance or shifted by a maximum of 3/4TR. Mice and human volunteers were imaged at 7T and 3T, respectively, to perform whole-body and musculoskeletal imaging. “Sum-Of-Square” reconstruction was performed, combined or not with parallel imaging. Results The frequency selectivity of 1-2-3-2-1 or 1-3-3-1 binomial pulses was preserved after (3/4TR) frequency shifting. Consequently, whole-body small-animal 3D imaging was performed at 7T and enabled visualization of small structures within adipose tissue, such as lymph nodes. In parallel, this method allowed 3D musculoskeletal imaging in humans with high spatial resolution at 3T. The combination with parallel imaging allowed the acquisition of knee images with ~500 μm resolution in less than 2 min. In addition, ankles, full head coverage and legs of volunteers were imaged, demonstrating the possible application of the method to large FOV as well. Conclusion In conclusion, this robust method can be applied in small animals and humans at high magnetic fields. The high SNR and tissue contrast obtained in short acquisition times make the bSSFP sequence suitable for several preclinical and clinical applications. PMID:26426849
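A toy model of why the "Sum-Of-Square" combination suppresses bSSFP bands: shifting the excitation frequency moves the banding pattern, and combining the shifted magnitude images evens it out. The cosine/sine band profiles below are a schematic stand-in for the true bSSFP off-resonance response, chosen only to make the flattening exact:

```python
import math

thetas = [i * 2 * math.pi / 100 for i in range(100)]   # off-resonance angle
img_a = [abs(math.cos(t / 2)) for t in thetas]         # acquisition 1: bands
img_b = [abs(math.sin(t / 2)) for t in thetas]         # acquisition 2: shifted

sos = [math.sqrt(a * a + b * b) for a, b in zip(img_a, img_b)]

print(min(img_a) < 0.1)               # True: a deep band in one acquisition
print(max(sos) - min(sos) < 1e-9)     # True: the combined image is flat
```

With real bSSFP profiles the combination is not perfectly flat, which is why the paper acquires up to four frequency-shifted images.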

  19. Three-dimensional laser microvision.

    PubMed

    Shimotahira, H; Iizuka, K; Chu, S C; Wah, C; Costen, F; Yoshikuni, Y

    2001-04-10

    A three-dimensional (3-D) optical imaging system offering high resolution in all three dimensions, requiring minimum manipulation and capable of real-time operation, is presented. The system derives its capabilities from use of the superstructure grating laser source in the implementation of a laser step-frequency radar for depth information acquisition. A synthetic aperture radar technique was also used to further enhance its lateral resolution as well as extend the depth of focus. High-speed operation was made possible by a dual computer system consisting of a host and a remote microcomputer supported by a dual-channel Small Computer System Interface parallel data transfer system. The system is capable of operating near real time. The 3-D display of a tunneling diode, a microwave integrated circuit, and a see-through image taken by the system operating near real time are included. The depth resolution is 40 μm; lateral resolution with the synthetic aperture approach is a fraction of a micrometer, and without it is approximately 10 μm.
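The depth resolution of a step-frequency system is set by the total swept optical bandwidth through the standard radar relation Δz = c / (2B). Checking what the quoted 40 μm depth resolution implies (vacuum light speed assumed, so this is a rough consistency check rather than the authors' calculation):

```python
c = 3.0e8                      # speed of light, m/s
dz = 40e-6                     # quoted depth resolution: 40 micrometres
B = c / (2 * dz)               # required swept bandwidth
print(f"{B / 1e12:.2f} THz")   # 3.75 THz
```

A sweep of a few terahertz is plausible for the tunable superstructure grating laser the system uses.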

  20. Multishot PROPELLER for high-field preclinical MRI.

    PubMed

    Pandit, Prachi; Qi, Yi; Story, Jennifer; King, Kevin F; Johnson, G Allan

    2010-07-01

    With the development of numerous mouse models of cancer, there is a tremendous need for an appropriate imaging technique to study the disease evolution. High-field T2-weighted imaging using PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) MRI meets this need. The two-shot PROPELLER technique presented here provides (a) high spatial resolution, (b) high contrast resolution, and (c) rapid and noninvasive imaging, which enables high-throughput, longitudinal studies in free-breathing mice. Unique data collection and reconstruction make this method robust against motion artifacts. The two-shot modification introduced here retains more high-frequency information and provides a higher signal-to-noise ratio than conventional single-shot PROPELLER, making this sequence feasible at high fields, where signal loss is rapid. Results are shown in a liver metastases model to demonstrate the utility of this technique in one of the more challenging regions of the mouse, the abdomen. (c) 2010 Wiley-Liss, Inc.

  1. Parallel processing architecture for H.264 deblocking filter on multi-core platforms

    NASA Astrophysics Data System (ADS)

    Prasad, Durga P.; Sonachalam, Sekar; Kunchamwar, Mangesh K.; Gunupudi, Nageswara Rao

    2012-03-01

    Massively parallel computing (multi-core) chips offer outstanding new solutions that satisfy the increasing demand for high-resolution, high-quality video compression technologies such as H.264. Such solutions provide not only exceptional quality but also efficiency, low power, and low latency, previously unattainable in software-based designs. While custom hardware and Application Specific Integrated Circuit (ASIC) technologies may achieve low-latency, low-power, real-time performance in some consumer devices, many applications require a flexible and scalable software-defined solution. The deblocking filter in an H.264 encoder/decoder poses difficult implementation challenges because of heavy data dependencies and the conditional nature of the computations. Deblocking filter implementations tend to be fixed and difficult to reconfigure for different needs. The ability to scale up to higher quality requirements such as 10-bit pixel depth or a 4:2:2 chroma format often reduces the throughput of a parallel architecture designed for a lower feature set. A scalable architecture for deblocking filtering, created with a massively parallel processor based solution, means that the same encoder or decoder can be deployed in a variety of applications, at different video resolutions, for different power requirements, and at higher bit depths and richer chroma subsampling formats such as 4:2:2 or 4:4:4. Low-power, software-defined encoders/decoders may be implemented using a massively parallel processor array, like that found in HyperX technology, with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. This software programming model for massively parallel processors offers a flexible implementation and a power efficiency close to that of ASIC solutions. 
This work describes a scalable parallel architecture for an H.264-compliant deblocking filter for multi-core platforms such as HyperX technology. Parallel techniques such as parallel processing at the independent-macroblock, sub-block, and pixel-row levels are examined in this work. The deblocking architecture consists of a basic cell called the deblocking filter unit (DFU) and a dependent data buffer manager (DFM). The DFU can be instantiated multiple times, catering to different performance needs; the DFM serves the data required by the different DFUs and also manages all the neighboring data required for future processing by the DFUs. This approach achieves the scalability, flexibility, and performance excellence required in deblocking filters.
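A sketch of the macroblock-level parallelism mentioned above: deblocking a macroblock needs its left and top neighbours filtered first, so macroblocks on the same anti-diagonal are mutually independent and can be filtered concurrently. This just computes that wavefront schedule for a hypothetical 3x4 picture (the paper's actual DFU/DFM mapping is more elaborate):

```python
rows, cols = 3, 4          # toy picture: 3 x 4 macroblocks
waves = {}                 # wave index -> macroblocks filterable in parallel
for r in range(rows):
    for c in range(cols):
        # (r, c) depends on (r, c-1) and (r-1, c), so its wave is r + c
        waves.setdefault(r + c, []).append((r, c))

for wave in sorted(waves):
    print(wave, waves[wave])   # every MB in a wave can go to its own core

# The widest wave, i.e. the peak available parallelism, is min(rows, cols):
assert max(len(v) for v in waves.values()) == min(rows, cols)
```

For HD frame sizes the widest wavefront spans dozens of macroblocks, which is what lets a many-core array like HyperX keep its processing elements busy.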

  2. A new collimator for I-123-IMP SPECT imaging of the brain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oyamada, H.; Fukukita, H.; Tanaka, E.

    1985-05-01

    At present, commercially available I-123-IMP is contaminated with I-124, whose concentration on the assay date is said to be approximately 5%. Therefore, the application of the medium-energy parallel-hole collimator (MEPC) used in many places for SPECT results in deterioration of the image quality. Recently, the authors have developed a new collimator for I-123-IMP SPECT imaging comprising four slat-type units: ultrahigh resolution (UHR), high resolution (HR), high sensitivity (HS), and ultrahigh sensitivity (UHS). The slit width/septum thickness in mm for UHR, HR, HS, and UHS are 0.9/0.5, 1.5/0.85, 3.2/1.5, and 5.2/2.0, respectively. In practice, either UHR or HR is set to the detector (Shimadzu LFOV-E, modified type) together with either HS or UHS. The former is always set to the detector with the slit direction parallel to the rotation axis, and the latter is set with its slit direction at a right angle to the former. This is based on the idea that, by sacrificing resolution to some extent, sensitivity can be gained in the axial direction while the resolution on the transaxial slice is still sufficiently preserved. Resolutions (transaxial direction/axial direction) in FWHM (mm) for each combination (UHR-HS, UHR-UHS, HR-HS, and HR-UHS) were 15.9/31.4, 15.9/36.5, 23.2/33.3, and 23.9/40.7, respectively, whereas the resolution of MEPC was 28.7/29.5. On the other hand, the relative sensitivities to MEPC were 0.57, 0.86, 0.80, and 1.16. The authors conclude that the combination of UHR and HS is best suited for clinical practice, and at present they are obtaining I-123-IMP SPECT images of good quality.

  3. Sharp-Tip Silver Nanowires Mounted on Cantilevers for High-Aspect-Ratio High-Resolution Imaging.

    PubMed

    Ma, Xuezhi; Zhu, Yangzhi; Kim, Sanggon; Liu, Qiushi; Byrley, Peter; Wei, Yang; Zhang, Jin; Jiang, Kaili; Fan, Shoushan; Yan, Ruoxue; Liu, Ming

    2016-11-09

    Despite many efforts to fabricate high-aspect-ratio atomic force microscopy (HAR-AFM) probes for high-fidelity, high-resolution topographical imaging of three-dimensional (3D) nanostructured surfaces, current HAR probes still suffer from unsatisfactory performance, poor wear resistance, and extravagant prices. The primary objective of this work is to demonstrate a novel design of a high-resolution (HR) HAR AFM probe, which is fabricated through a reliable, cost-efficient benchtop process to precisely implant a single ultrasharp metallic nanowire on a standard AFM cantilever probe. The force-displacement curve indicated that the HAR-HR probe is robust against buckling and bending up to 150 nN. The probes were tested on polymer trenches, showing much better image fidelity than standard silicon tips. The lateral resolution, when scanning a rough metal thin film and single-walled carbon nanotubes (SW-CNTs), was found to be better than 8 nm. Finally, stable imaging quality in tapping mode was demonstrated for at least 15 continuous scans, indicating high resistance to wear. These results demonstrate a reliable benchtop fabrication technique for metallic HAR-HR AFM probes with performance parallel to or exceeding that of commercial HAR probes, yet at a fraction of their cost.

  4. Parallel Force Assay for Protein-Protein Interactions

    PubMed Central

    Aschenbrenner, Daniela; Pippig, Diana A.; Klamecka, Kamila; Limmer, Katja; Leonhardt, Heinrich; Gaub, Hermann E.

    2014-01-01

    Quantitative proteome research is greatly promoted by high-resolution parallel format assays. A characterization of protein complexes based on binding forces offers an unparalleled dynamic range and allows for the effective discrimination of non-specific interactions. Here we present a DNA-based Molecular Force Assay to quantify protein-protein interactions, namely the bond between different variants of GFP and GFP-binding nanobodies. We present different strategies to adjust the maximum sensitivity window of the assay by influencing the binding strength of the DNA reference duplexes. The binding of the nanobody Enhancer to the different GFP constructs is compared at high sensitivity of the assay. Whereas the binding strength to wild type and enhanced GFP are equal within experimental error, stronger binding to superfolder GFP is observed. This difference in binding strength is attributed to alterations in the amino acids that form contacts according to the crystal structure of the initial wild type GFP-Enhancer complex. Moreover, we outline the potential for large-scale parallelization of the assay. PMID:25546146

  5. Parallel force assay for protein-protein interactions.

    PubMed

    Aschenbrenner, Daniela; Pippig, Diana A; Klamecka, Kamila; Limmer, Katja; Leonhardt, Heinrich; Gaub, Hermann E

    2014-01-01

    Quantitative proteome research is greatly promoted by high-resolution parallel format assays. A characterization of protein complexes based on binding forces offers an unparalleled dynamic range and allows for the effective discrimination of non-specific interactions. Here we present a DNA-based Molecular Force Assay to quantify protein-protein interactions, namely the bond between different variants of GFP and GFP-binding nanobodies. We present different strategies to adjust the maximum sensitivity window of the assay by influencing the binding strength of the DNA reference duplexes. The binding of the nanobody Enhancer to the different GFP constructs is compared at high sensitivity of the assay. Whereas the binding strength to wild type and enhanced GFP are equal within experimental error, stronger binding to superfolder GFP is observed. This difference in binding strength is attributed to alterations in the amino acids that form contacts according to the crystal structure of the initial wild type GFP-Enhancer complex. Moreover, we outline the potential for large-scale parallelization of the assay.

  6. High-resolution, high-throughput imaging with a multibeam scanning electron microscope.

    PubMed

    Eberle, A L; Mikula, S; Schalek, R; Lichtman, J; Knothe Tate, M L; Zeidler, D

    2015-08-01

    Electron-electron interactions and detector bandwidth limit the maximal imaging speed of single-beam scanning electron microscopes. We use multiple electron beams in a single column and detect secondary electrons in parallel to increase the imaging speed by close to two orders of magnitude and demonstrate imaging for a variety of samples ranging from biological brain tissue to semiconductor wafers. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  7. A 780 × 800 μm2 Multichannel Digital Silicon Photomultiplier With Column-Parallel Time-to-Digital Converter and Basic Characterization

    NASA Astrophysics Data System (ADS)

    Mandai, Shingo; Jain, Vishwas; Charbon, Edoardo

    2014-02-01

    This paper presents a digital silicon photomultiplier (SiPM) partitioned into columns, with each column connected to a column-parallel time-to-digital converter (TDC), in order to improve the timing resolution of single-photon detection. By reducing the number of pixels per TDC using a sharing scheme with three TDCs per column, the pixel-to-pixel skew is reduced. We report the basic characterization of the SiPM, comprising 416 single-photon avalanche diodes (SPADs); the characterization includes photon detection probability, dark count rate, afterpulsing, and crosstalk. We achieved a 264-ps full-width-at-half-maximum timing resolution of single-photon detection using a 48-fold column-parallel TDC with a temporal resolution of 51.8 ps (least significant bit), fully integrated in standard complementary metal-oxide semiconductor technology.

  8. Precision optical slit for high heat load or ultra high vacuum

    DOEpatents

    Andresen, N.C.; DiGennaro, R.S.; Swain, T.L.

    1995-01-24

    This invention relates generally to slits used in optics that must be precisely aligned and adjusted. The optical slits of the present invention are useful in x-ray optics, x-ray beam lines, and optical systems in which the entrance slit is critical for high wavelength resolution. The invention is particularly useful in ultra high vacuum systems where lubricants are difficult to use and designs which avoid the movement of metal parts against one another are important, such as monochromators for high wavelength resolution with ultra high vacuum systems. The invention further relates to optical systems in which the temperature characteristics of the slit materials are important. The present invention yet additionally relates to precision slits wherein the opposing edges of the slit must be precisely moved relative to a center line between the edges with each edge retaining its parallel orientation with respect to the other edge and/or the center line. 21 figures.

  9. Precision optical slit for high heat load or ultra high vacuum

    DOEpatents

    Andresen, Nord C.; DiGennaro, Richard S.; Swain, Thomas L.

    1995-01-01

    This invention relates generally to slits used in optics that must be precisely aligned and adjusted. The optical slits of the present invention are useful in x-ray optics, x-ray beam lines, and optical systems in which the entrance slit is critical for high wavelength resolution. The invention is particularly useful in ultra high vacuum systems where lubricants are difficult to use and designs which avoid the movement of metal parts against one another are important, such as monochromators for high wavelength resolution with ultra high vacuum systems. The invention further relates to optical systems in which the temperature characteristics of the slit materials are important. The present invention yet additionally relates to precision slits wherein the opposing edges of the slit must be precisely moved relative to a center line between the edges with each edge retaining its parallel orientation with respect to the other edge and/or the center line.

  10. Hurricane Forecasting with the High-resolution NASA Finite-volume General Circulation Model

    NASA Technical Reports Server (NTRS)

    Atlas, R.; Reale, O.; Shen, B.-W.; Lin, S.-J.; Chern, J.-D.; Putman, W.; Lee, T.; Yeh, K.-S.; Bosilovich, M.; Radakovich, J.

    2004-01-01

    A high-resolution finite-volume General Circulation Model (fvGCM), resulting from a development effort of more than ten years, is now being run operationally at the NASA Goddard Space Flight Center and Ames Research Center. The model is based on a finite-volume dynamical core with terrain-following Lagrangian control-volume discretization and performs efficiently on massively parallel architectures. The computational efficiency allows simulations at a resolution of a quarter of a degree, double the resolution currently adopted by most global models in operational weather centers. Such fine global resolution brings us closer to overcoming a fundamental barrier in global atmospheric modeling for both weather and climate, because tropical cyclones and even tropical convective clusters can be represented more realistically. In this work, preliminary results of the fvGCM are shown. Fifteen simulations of four Atlantic tropical cyclones in 2002 and 2004 were chosen because of the strong and varied difficulties they presented to numerical weather forecasting. It is shown that the fvGCM, run at a resolution of a quarter of a degree, can produce very good forecasts of these tropical systems, adequately resolving problems like erratic tracks, abrupt recurvature, intense extratropical transition, multiple landfalls and reintensification, and interaction among vortices.

  11. MMS Observations of Parallel Electric Fields During a Quasi-Perpendicular Bow Shock Crossing

    NASA Astrophysics Data System (ADS)

    Goodrich, K.; Schwartz, S. J.; Ergun, R.; Wilder, F. D.; Holmes, J.; Burch, J. L.; Gershman, D. J.; Giles, B. L.; Khotyaintsev, Y. V.; Le Contel, O.; Lindqvist, P. A.; Strangeway, R. J.; Russell, C.; Torbert, R. B.

    2016-12-01

    Previous observations of the terrestrial bow shock have frequently shown large-amplitude fluctuations in the parallel electric field. These parallel electric fields are seen as both nonlinear solitary structures, such as double layers and electron phase-space holes, and short-wavelength waves, which can reach amplitudes greater than 100 mV/m. The Magnetospheric Multi-Scale (MMS) Mission has crossed the Earth's bow shock more than 200 times. The parallel electric field signatures observed in these crossings are seen in very discrete packets and evolve over time scales of less than a second, indicating the presence of a wealth of kinetic-scale activity. The high time resolution of the Fast Particle Instrument (FPI) available on MMS offers greater detail of the kinetic-scale physics that occur at bow shocks than ever before, allowing greater insight into the overall effect of these observed electric fields. We present a characterization of these parallel electric fields found in a single bow shock event and how it reflects the kinetic-scale activity that can occur at the terrestrial bow shock.

  12. The Observing Modes of JWST/NIRISS

    NASA Astrophysics Data System (ADS)

    Taylor, Joanna M.; NIRISS Team

    2018-06-01

    The Near Infrared Imager and Slitless Spectrograph (NIRISS) is a contribution of the Canadian Space Agency to the James Webb Space Telescope (JWST). NIRISS complements the other near-infrared science instruments onboard JWST by providing capabilities for (a) low-resolution grism spectroscopy between 0.8 and 2.2 µm over the entire field of view, with the possibility of observing the same scene with orthogonal dispersion directions to disentangle blended objects; (b) medium-resolution grism spectroscopy between 0.6 and 2.8 µm that has been optimized to provide high spectrophotometric stability for time-series observations of transiting exoplanets; (c) aperture masking interferometry that provides high angular resolution of 70 - 400 mas at wavelengths between 2.8 and 4.8 µm; and (d) parallel imaging through a set of filters that are closely matched to NIRCam's. In this poster, we discuss each of these modes and present simulations of how they might typically be used to address specific scientific questions.

  13. A parallel algorithm for 2D visco-acoustic frequency-domain full-waveform inversion: application to a dense OBS data set

    NASA Astrophysics Data System (ADS)

    Sourbier, F.; Operto, S.; Virieux, J.

    2006-12-01

    We present a distributed-memory parallel algorithm for 2D visco-acoustic full-waveform inversion of wide-angle seismic data. Our code is written in Fortran 90 and uses MPI for parallelism. The algorithm was applied to a real wide-angle data set recorded by 100 OBSs with a 1-km spacing in the eastern Nankai trough (Japan) to image the deep structure of the subduction zone. Full-waveform inversion is applied sequentially to discrete frequencies, proceeding from the low to the high frequencies. The inverse problem is solved with a classic gradient method. Full-waveform modeling is performed with a frequency-domain finite-difference method. In the frequency domain, solving the wave equation requires the resolution of a large unsymmetric system of linear equations. We use the massively parallel direct solver MUMPS (http://www.enseeiht.fr/irit/apo/MUMPS) for distributed-memory computers to solve this system. The MUMPS solver is based on a multifrontal method for the parallel factorization. The MUMPS algorithm is subdivided into three main steps. First, a symbolic analysis step reorders the matrix coefficients to minimize fill-in during the subsequent factorization and estimates the assembly tree of the matrix. Second, the factorization is performed with dynamic scheduling to accommodate numerical pivoting, providing LU factors distributed over all the processors. Third, the resolution is performed for multiple sources. To compute the gradient of the cost function, two simulations per shot are required (one to compute the forward wavefield and one to back-propagate the residuals). The multi-source resolutions can be performed in parallel with MUMPS. In the end, each processor stores in core a sub-domain of all the solutions. These distributed solutions can be exploited to compute the gradient of the cost function in parallel.
Since the gradient of the cost function is a weighted stack of the shot and residual solutions of MUMPS, each processor computes the corresponding sub-domain of the gradient. In the end, the gradient is centralized on the master processor using a collective communication. The gradient is scaled by the diagonal elements of the Hessian matrix. This scaling is computed only once per frequency, before the first iteration of the inversion. Estimation of the diagonal terms of the Hessian requires one simulation per non-redundant shot and receiver position. The same strategy as the one used for the gradient is used to compute the diagonal Hessian in parallel. This algorithm was applied to a dense wide-angle data set recorded by 100 OBSs in the eastern Nankai trough, offshore Japan. Thirteen frequencies ranging from 3 to 15 Hz were inverted. Twenty iterations per frequency were computed, leading to 260 tomographic velocity models of increasing resolution. The velocity model dimensions are 105 km x 25 km, corresponding to a finite-difference grid of 4201 x 1001 points with a 25-m grid interval. The number of shots was 1005 and the number of inverted OBS gathers was 93. The inversion requires 20 days on six 32-bit bi-processor nodes with 4 GB of RAM per node when only the LU factorization is performed in parallel. Preliminary estimates of the time required to perform the inversion with the fully parallelized code are 6 and 4 days using 20 and 50 processors, respectively.
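The distributed gradient computation described above can be sketched numerically: each processor forms its sub-domain's share of the stack g = Σ_shots Re{conj(u) · r} from the forward wavefields u and back-propagated residual fields r, and the master then gathers the pieces. The toy below uses numpy in place of MUMPS and MPI; all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_shots, n_points = 5, 12                      # toy problem sizes
u = rng.standard_normal((n_shots, n_points)) + 1j * rng.standard_normal((n_shots, n_points))
r = rng.standard_normal((n_shots, n_points)) + 1j * rng.standard_normal((n_shots, n_points))

def local_gradient(u_sub, r_sub):
    """Sub-domain contribution to the gradient: a stack of the forward
    and residual wavefields over all shots."""
    return np.sum(np.real(np.conj(u_sub) * r_sub), axis=0)

# Emulate two processors, each owning half the model points; the master
# then concatenates the pieces (the collective communication step).
halves = [local_gradient(u[:, :6], r[:, :6]), local_gradient(u[:, 6:], r[:, 6:])]
g = np.concatenate(halves)
assert np.allclose(g, local_gradient(u, r))    # matches the serial stack
```

Because the stack is a point-wise sum over shots, the domain decomposition introduces no approximation: the concatenated pieces equal the serial gradient exactly.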

  14. High-Resolution 3T MR Imaging of the Triangular Fibrocartilage Complex

    PubMed Central

    von Borstel, Donald; Wang, Michael; Small, Kirstin; Nozaki, Taiki; Yoshioka, Hiroshi

    2017-01-01

    This study is intended as a review of 3 Tesla (T) magnetic resonance (MR) imaging of the triangular fibrocartilage complex (TFCC). Recent advances in MR imaging, which include high-field-strength magnets, multi-channel coils, and isotropic 3-dimensional (3D) sequences, have enabled the visualization of precise TFCC anatomy with high spatial and contrast resolution. In addition to the routine wrist protocol, there are specific techniques used to optimize 3T imaging of the wrist, including the driven equilibrium sequence (DRIVE), parallel imaging, and 3D imaging. The coil choice for 3T imaging of the wrist depends on a number of variables, and the proper coil design selection is critical for high-resolution wrist imaging with high signal- and contrast-to-noise ratios. The TFCC is a complex structure composed of the articular disc (disc proper), the triangular ligament, the dorsal and volar radioulnar ligaments, the meniscus homologue, the ulnar collateral ligament (UCL), the extensor carpi ulnaris (ECU) tendon sheath, and the ulnolunate and ulnotriquetral ligaments. The Palmer classification categorizes TFCC lesions as traumatic (type 1) or degenerative (type 2). In this review article, we present clinical high-resolution MR images of normal TFCC anatomy and TFCC injuries with this classification system. PMID:27535592

  15. High-Resolution 3T MR Imaging of the Triangular Fibrocartilage Complex.

    PubMed

    von Borstel, Donald; Wang, Michael; Small, Kirstin; Nozaki, Taiki; Yoshioka, Hiroshi

    2017-01-10

    This study is intended as a review of 3 Tesla (T) magnetic resonance (MR) imaging of the triangular fibrocartilage complex (TFCC). Recent advances in MR imaging, which include high-field-strength magnets, multi-channel coils, and isotropic 3-dimensional (3D) sequences, have enabled the visualization of precise TFCC anatomy with high spatial and contrast resolution. In addition to the routine wrist protocol, there are specific techniques used to optimize 3T imaging of the wrist, including the driven equilibrium sequence (DRIVE), parallel imaging, and 3D imaging. The coil choice for 3T imaging of the wrist depends on a number of variables, and the proper coil design selection is critical for high-resolution wrist imaging with high signal- and contrast-to-noise ratios. The TFCC is a complex structure composed of the articular disc (disc proper), the triangular ligament, the dorsal and volar radioulnar ligaments, the meniscus homologue, the ulnar collateral ligament (UCL), the extensor carpi ulnaris (ECU) tendon sheath, and the ulnolunate and ulnotriquetral ligaments. The Palmer classification categorizes TFCC lesions as traumatic (type 1) or degenerative (type 2). In this review article, we present clinical high-resolution MR images of normal TFCC anatomy and TFCC injuries with this classification system.

  16. High-speed high-resolution epifluorescence imaging system using CCD sensor and digital storage for neurobiological research

    NASA Astrophysics Data System (ADS)

    Takashima, Ichiro; Kajiwara, Riichi; Murano, Kiyo; Iijima, Toshio; Morinaka, Yasuhiro; Komobuchi, Hiroyoshi

    2001-04-01

    We have designed and built a high-speed CCD imaging system for monitoring neural activity in an exposed animal cortex stained with a voltage-sensitive dye. Two types of custom-made CCD sensors were developed for this system. The type I chip has a resolution of 2664 (H) X 1200 (V) pixels and a wide imaging area of 28.1 X 13.8 mm, while the type II chip has 1776 X 1626 pixels and an active imaging area of 20.4 X 18.7 mm. The CCD arrays were constructed with multiple output amplifiers in order to accelerate the readout rate. The two chips were divided into either 24 (I) or 16 (II) distinct areas that were driven in parallel. The parallel CCD outputs were digitized by 12-bit A/D converters and then stored in the frame memory. The frame memory was constructed with synchronous DRAM modules, which provided a capacity of 128 MB per channel. On-chip and on-memory binning methods were incorporated into the system, e.g., this enabled us to capture 444 X 200 pixel-images for periods of 36 seconds at a rate of 500 frames/second. This system was successfully used to visualize neural activity in the cortices of rats, guinea pigs, and monkeys.
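The on-chip/on-memory binning mentioned above trades resolution for frame rate; note that 444 x 200 is exactly the 2664 x 1200 type-I array reduced by a factor of 6 in each dimension (our inference from the stated numbers). A minimal numpy sketch of such integer-factor binning, purely for illustration:

```python
import numpy as np

def bin_frame(frame, b):
    """Reduce an image by an integer factor b in each dimension by
    summing b x b neighbourhoods of pixels."""
    h, w = frame.shape
    assert h % b == 0 and w % b == 0
    return frame.reshape(h // b, b, w // b, b).sum(axis=(1, 3))

frame = np.ones((2664, 1200))      # a uniform type-I frame
binned = bin_frame(frame, 6)
print(binned.shape)                # (444, 200); each bin sums 36 pixels
```

Summing rather than averaging preserves the collected signal, which is why binning also improves the signal-to-noise ratio of dim voltage-sensitive-dye signals.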

  17. Resolution of x-ray parabolic compound refractive diamond lens defined at the home laboratory

    NASA Astrophysics Data System (ADS)

    Polyakov, S. N.; Zholudev, S. I.; Gasilov, S. V.; Martyushov, S. Yu.; Denisov, V. N.; Terentiev, S. A.; Blank, V. D.

    2017-05-01

    Here we demonstrate the performance of an original lab system designed for testing X-ray parabolic compound refractive lenses (CRLs) manufactured from a high-quality single-crystalline synthetic diamond grown by the high-pressure high-temperature technique. The basic parameters of a diamond CRL comprising 28 plano-concave lenses have been determined: a focal length of 634 mm, transmissivity of 0.36, field of view of 1 mm, and resolution of 6 µm. Usually such measurements are performed at synchrotron radiation facilities. In this work, the CRL was characterized by means of instruments and components that are available to laboratories, such as a Rigaku 9 kW rotating-anode X-ray generator, a PANalytical parallel-beam X-ray mirror, a 6 m long optical bench, high-precision multi-axis goniometers, high-resolution X-ray emulsion films, and the ultra-fast, high-sensitivity X-ray area detector PIXel3D. The developed setup was used to find differences between experimental and design parameters, which is very important for improving CRL manufacturing technology.

  18. Parallel discontinuous Galerkin FEM for computing hyperbolic conservation law on unstructured grids

    NASA Astrophysics Data System (ADS)

    Ma, Xinrong; Duan, Zhijian

    2018-04-01

    High-order Discontinuous Galerkin finite element methods (DGFEM) are known to be good methods for solving the Euler and Navier-Stokes equations on unstructured grids, but they demand substantial computational resources. An efficient parallel algorithm was presented for solving the compressible Euler equations. Moreover, a multigrid strategy based on a three-stage, third-order TVD Runge-Kutta scheme was used in order to improve the computational efficiency of DGFEM and accelerate the convergence of the solution of the unsteady compressible Euler equations. In order to keep each processor load-balanced, the domain decomposition method was employed. Numerical experiments were performed on inviscid transonic flow problems around the NACA0012 airfoil and the M6 wing. The results indicated that the parallel algorithm improves speedup and efficiency significantly and is suitable for computing complex flow fields.
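The three-stage, third-order TVD Runge-Kutta scheme referred to above is the standard Shu-Osher form, built from convex combinations of forward-Euler steps. A minimal sketch for a generic right-hand side L(u) follows; the scalar test problem u' = -u is our choice for illustration, not the paper's:

```python
import math

def tvd_rk3_step(u, dt, L):
    """One step of the three-stage, third-order TVD (SSP) Runge-Kutta
    scheme of Shu and Osher."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

u, dt = 1.0, 0.01
for _ in range(100):               # integrate u' = -u from t = 0 to t = 1
    u = tvd_rk3_step(u, dt, lambda x: -x)
print(abs(u - math.exp(-1.0)))     # small global error (third order)
```

In a DGFEM code, L(u) would be the spatial residual assembled over the unstructured grid; the same three-stage update applies unchanged to the vector of degrees of freedom on each sub-domain.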

  19. A two-ply polymer-based flexible tactile sensor sheet using electric capacitance.

    PubMed

    Guo, Shijie; Shiraoka, Takahisa; Inada, Seisho; Mukai, Toshiharu

    2014-01-29

    Traditional capacitive tactile sensor sheets usually have a three-layered structure, with a dielectric layer sandwiched between two electrode layers. Each electrode layer has a number of parallel ribbon-like electrodes. The electrodes on the two layers are oriented orthogonally, and each crossing point of the two perpendicular electrode arrays forms a capacitive sensor cell on the sheet. It is well known that measuring precision and resolution are difficult to reconcile: narrowing the electrodes raises the resolution, but it also shrinks the area of the sensor cells and, as a result, lowers the signal-to-noise (S/N) ratio. To overcome this problem, a new multilayered structure and a related calculation procedure are proposed. The new structure stacks two or more sensor sheets with shifts in position. Both high precision and high resolution can be obtained by combining the signals of the stacked sensor sheets. A prototype was produced and the effect was confirmed.
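The benefit of stacking shifted sheets can be illustrated with a toy one-dimensional model (our sketch; the paper's actual calculation procedure is not reproduced here): each wide cell integrates the load over its area, and interleaving the readings of a second sheet shifted by half a pitch doubles the number of sample positions without narrowing the electrodes.

```python
import numpy as np

pressure = np.array([0., 1., 4., 9., 16., 25., 36., 49.])  # fine-grained load
cell = 2                                 # electrode width, in fine-grid units

sheet_a = pressure.reshape(-1, cell).sum(axis=1)        # cells at offset 0
sheet_b = pressure[1:-1].reshape(-1, cell).sum(axis=1)  # shifted by half a pitch

# Interleave the two sheets: the sample pitch is halved while the cell
# area (and hence the S/N ratio of each reading) is unchanged.
combined = np.empty(len(sheet_a) + len(sheet_b))
combined[0::2] = sheet_a
combined[1::2] = sheet_b
print(combined)
```

Each reading still integrates over a full-width cell, so the precision/resolution trade-off of a single sheet is sidestepped rather than shifted.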

  20. ARC-1989-AC89-7046

    NASA Image and Video Library

    1989-08-25

    P-34764 Voyager 2 obtained this high resolution color image of Neptune's large satellite Triton during its close flyby. Approximately a dozen individual images were combined to produce this comprehensive view of the Neptune-facing hemisphere of Triton. Fine detail is provided by high resolution, clear-filter images, with color information added from lower resolution frames. The large south polar cap at the bottom of the image is highly reflective and slightly pink in color, and may consist of a slowly evaporating layer of nitrogen ice deposited during the previous winter. From the ragged edge of the polar cap northward, the satellite's face is generally darker and redder in color. This coloring may be produced by the action of ultraviolet light and magnetospheric radiation upon methane in the atmosphere and surface. Running across this darker region, approximately parallel to the edge of the polar cap, is a band of brighter white material that is almost bluish in color. The underlying topography in this bright band is similar, however, to that in the darker, redder regions surrounding it.

  1. Use of PZT's for adaptive control of Fabry-Perot etalon plate figure

    NASA Technical Reports Server (NTRS)

    Skinner, Wilbert; Niciejewski, R.

    2005-01-01

    A Fabry-Perot etalon, consisting of two spaced, reflective glass flats, provides the mechanism by which high resolution spectroscopy may be performed over narrow spectral regions. Space-based applications include direct measurements of Doppler shifts of airglow absorption and emission features and of the Doppler broadening of spectral lines. The technique requires that a high degree of parallelism between the two flats be maintained through harsh launch conditions. Monitoring and adjusting the plate figure by illuminating the Fabry-Perot interferometer with a suitable monochromatic source may be performed on orbit to actively control the parallelism of the flats. This report describes the use of such a technique in a laboratory environment, applied to a piezo-electric stack attached to the center of a Fabry-Perot etalon.

  2. Spiral Transformation for High-Resolution and Efficient Sorting of Optical Vortex Modes.

    PubMed

    Wen, Yuanhui; Chremmos, Ioannis; Chen, Yujie; Zhu, Jiangbo; Zhang, Yanfeng; Yu, Siyuan

    2018-05-11

    Mode sorting is an essential function for optical multiplexing systems that exploit the orthogonality of the orbital angular momentum mode space. The familiar log-polar optical transformation provides a simple yet efficient approach whose resolution is, however, restricted by a considerable overlap between adjacent modes resulting from the limited excursion of the phase along a complete circle around the optical vortex axis. We propose and experimentally verify a new optical transformation that maps spirals (instead of concentric circles) to parallel lines. As the phase excursion along a spiral in the wave front of an optical vortex is theoretically unlimited, this new optical transformation can separate orbital angular momentum modes with superior resolution while maintaining unity efficiency.
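The resolution gain can be checked numerically: for a vortex beam exp(i·lφ), the phase unwrapped along one full circle spans 2πl, while along an n-turn spiral it spans 2πln. The numpy sketch below illustrates only this principle, not the authors' transformation optics:

```python
import numpy as np

def phase_excursion(l, n_turns, samples_per_turn=1000):
    """Total unwrapped phase of exp(1j*l*phi) accumulated along a path
    winding n_turns times around the vortex axis."""
    phi = np.linspace(0.0, 2.0 * np.pi * n_turns, n_turns * samples_per_turn + 1)
    wrapped = np.angle(np.exp(1j * l * phi))
    unwrapped = np.unwrap(wrapped)
    return unwrapped[-1] - unwrapped[0]

circle = phase_excursion(l=3, n_turns=1)   # 2*pi*3 along a full circle
spiral = phase_excursion(l=3, n_turns=5)   # 2*pi*15 along a 5-turn spiral
print(spiral / circle)                     # the spiral excursion is 5x larger
```

A transformation that maps spirals to parallel lines therefore spreads adjacent l values proportionally further apart in the output plane, which is the source of the improved mode separation.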

  3. Organization and Dynamics of Receptor Proteins in a Plasma Membrane.

    PubMed

    Koldsø, Heidi; Sansom, Mark S P

    2015-11-25

    The interactions of membrane proteins are influenced by their lipid environment, with key lipid species able to regulate membrane protein function. Advances in high-resolution microscopy can reveal the organization and dynamics of proteins and lipids within living cells at resolutions <200 nm. Parallel advances in molecular simulations provide near-atomic-resolution models of the dynamics of the organization of membranes of in vivo-like complexity. We explore the dynamics of proteins and lipids in crowded and complex plasma membrane models, thereby closing the gap in length and complexity between computations and experiments. Our simulations provide insights into the mutual interplay between lipids and proteins in determining mesoscale (20-100 nm) fluctuations of the bilayer, and in enabling oligomerization and clustering of membrane proteins.

  4. Spiral Transformation for High-Resolution and Efficient Sorting of Optical Vortex Modes

    NASA Astrophysics Data System (ADS)

    Wen, Yuanhui; Chremmos, Ioannis; Chen, Yujie; Zhu, Jiangbo; Zhang, Yanfeng; Yu, Siyuan

    2018-05-01

    Mode sorting is an essential function for optical multiplexing systems that exploit the orthogonality of the orbital angular momentum mode space. The familiar log-polar optical transformation provides a simple yet efficient approach whose resolution is, however, restricted by a considerable overlap between adjacent modes resulting from the limited excursion of the phase along a complete circle around the optical vortex axis. We propose and experimentally verify a new optical transformation that maps spirals (instead of concentric circles) to parallel lines. As the phase excursion along a spiral in the wave front of an optical vortex is theoretically unlimited, this new optical transformation can separate orbital angular momentum modes with superior resolution while maintaining unity efficiency.

  5. Processing large remote sensing image data sets on Beowulf clusters

    USGS Publications Warehouse

    Steinwand, Daniel R.; Maddox, Brian; Beckmann, Tim; Schmidt, Gail

    2003-01-01

    High-performance computing is often concerned with the speed at which floating-point calculations can be performed. The architectures of many parallel computers and/or their network topologies are based on these investigations. Often, benchmarks resulting from these investigations are compiled with little regard to how a large dataset would move about in these systems. This part of the Beowulf study addresses that concern by looking at specific applications software and system-level modifications. Applications include an implementation of a smoothing filter for time-series data, a parallel implementation of the decision tree algorithm used in the Landcover Characterization project, a parallel Kriging algorithm used to fit point data collected in the field on invasive species to a regular grid, and modifications to the Beowulf project's resampling algorithm to handle larger, higher-resolution datasets at a national scale. Systems-level investigations include a feasibility study on Flat Neighborhood Networks and modifications of that concept with Parallel File Systems.
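For a smoothing filter like the one listed above, the data-movement concern translates into a halo exchange: each node can filter its chunk independently provided it also receives a few boundary samples from its neighbors. A serial sketch of that decomposition (the 3-point filter and all names are our assumptions, not the study's code):

```python
import numpy as np

def smooth3(x):
    """3-point moving average over the valid interior of x."""
    return (x[:-2] + x[1:-1] + x[2:]) / 3.0

series = np.arange(20.0) ** 2
serial = smooth3(series)

# Two "workers", each handed its chunk plus a one-sample halo per side,
# produce pieces that stitch back into exactly the serial result.
mid = 10
chunks = [series[:mid + 1], series[mid - 1:]]
stitched = np.concatenate([smooth3(c) for c in chunks])
assert np.allclose(stitched, serial)
```

Only the halo samples cross node boundaries, so the communication volume is fixed per worker regardless of chunk length, which is what makes such filters scale well on a Beowulf cluster.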

  6. High-resolution fiber-optic microendoscopy for in situ cellular imaging.

    PubMed

    Pierce, Mark; Yu, Dihua; Richards-Kortum, Rebecca

    2011-01-11

    Many biological and clinical studies require longitudinal observation and analysis of morphology and function with cellular-level resolution. Traditionally, multiple experiments are run in parallel, with individual samples removed from the study at sequential time points for evaluation by light microscopy. Several intravital techniques have been developed, with confocal, multiphoton, and second harmonic microscopy all demonstrating their ability to be used for imaging in situ. With these systems, however, the required infrastructure is complex and expensive, involving scanning laser systems and complex light sources. Here we present a protocol for the design and assembly of a high-resolution microendoscope which can be built in a day using off-the-shelf components for under US$5,000. The platform offers flexibility in terms of image resolution, field-of-view, and operating wavelength, and we describe how these parameters can be easily modified to meet the specific needs of the end user. We and others have explored the use of the high-resolution microendoscope (HRME) in in vitro cell culture, in excised and living animal tissues, and in human tissues in vivo. Users have reported the use of several different fluorescent contrast agents, including proflavine, benzoporphyrin-derivative monoacid ring A (BPD-MA), and fluorescein, all of which have received full or investigational approval from the FDA for use in human subjects. High-resolution microendoscopy, in the form described here, may appeal to a wide range of researchers working in the basic and clinical sciences. The technique offers an effective and economical approach which complements traditional benchtop microscopy by enabling the user to perform high-resolution, longitudinal imaging in situ.

  7. High resolution absorption spectrum of CO2 between 1750 and 2000 Å. 2. Rotational analysis of two parallel-type bands assigned to the lowest electronic transition 13B2←

    NASA Astrophysics Data System (ADS)

    Cossart-Magos, Claudina; Launay, Françoise; Parkin, James E.

    The absorption spectrum of CO2 gas between 175 and 200 nm was photographed at high resolution some years ago. This very weak spectral region proved to be extremely rich in bands showing rotational fine structure. In Part 1 [C. Cossart-Magos, F. Launay, J. E. Parkin, Mol. Phys., 75, 835 (1992)], nine perpendicular-type bands were assigned to the lowest singlet-singlet transition, 11A2 ← X̃1Σg+, with excitation of the ν'3 (b2) vibration. Here, the parallel-type bands observed at 185.7 and 175.6 nm are assigned to the lowest triplet-singlet transition, 13B2 ← X̃1Σg+, with excitation of the ν'2 (a1) vibration. The assignment and the rotational and spin constant values obtained are discussed in relation to previous experimental data and ab initio calculation results on the lowest excited states of CO2. The actual role of the 13B2 state in CO2 photodissociation, O(3P)+CO(X1Σ+) recombination, and O(1D) emission quenching by CO(X) molecules is reviewed.

  8. Non-CAR resists and advanced materials for Massively Parallel E-Beam Direct Write process integration

    NASA Astrophysics Data System (ADS)

    Pourteau, Marie-Line; Servin, Isabelle; Lepinay, Kévin; Essomba, Cyrille; Dal'Zotto, Bernard; Pradelles, Jonathan; Lattard, Ludovic; Brandt, Pieter; Wieland, Marco

    2016-03-01

    The emerging Massively Parallel Electron-Beam Direct Write (MP-EBDW) is an attractive high-resolution, high-throughput lithography technology. As previously shown, Chemically Amplified Resists (CARs) meet process/integration specifications in terms of dose-to-size, resolution, contrast, and energy latitude. However, they are still limited by their line-width roughness. To overcome this issue, we tested an alternative advanced non-CAR resist and showed that it brings a substantial gain in sensitivity compared to CARs. We also implemented and assessed in-line post-lithographic treatments for roughness mitigation. To reduce outgassing, a top-coat layer is added to the total process stack. A new-generation top-coat was tested and showed improved printing performance compared to the previous product, notably avoiding dark erosion: SEM cross-sections showed a straight pattern profile. A spin-coatable charge dissipation layer based on conductive polyaniline was also tested for conductivity and lithographic performance, and compatibility experiments revealed that the underlying resist type must be chosen carefully when using this product. Finally, the Process Of Reference (POR) trilayer stack defined for 5 kV multi-e-beam lithography was successfully etched, with well-opened, straight patterns and no lithography-etch bias.

  9. Partial fourier and parallel MR image reconstruction with integrated gradient nonlinearity correction.

    PubMed

    Tao, Shengzhen; Trzasko, Joshua D; Shu, Yunhong; Weavers, Paul T; Huston, John; Gray, Erin M; Bernstein, Matt A

    2016-06-01

    To describe how integrated gradient nonlinearity (GNL) correction can be used within noniterative partial Fourier (homodyne) and parallel (SENSE and GRAPPA) MR image reconstruction strategies, and demonstrate that performing GNL correction during, rather than after, these routines mitigates the image blurring and resolution loss caused by postreconstruction image-domain-based GNL correction. Starting from partial Fourier and parallel magnetic resonance imaging signal models that explicitly account for GNL, noniterative image reconstruction strategies for each accelerated acquisition technique are derived under the same core mathematical assumptions as their standard counterparts. A series of phantom and in vivo experiments on retrospectively undersampled data were performed to investigate the spatial resolution benefit of integrated GNL correction over conventional postreconstruction correction. Phantom and in vivo results demonstrate that the integrated GNL correction reduces the image blurring introduced by the conventional GNL correction, while still correcting GNL-induced coarse-scale geometrical distortion. Images generated from undersampled data using the proposed integrated GNL strategies offer superior depiction of fine image detail, for example, phantom resolution inserts and anatomical tissue boundaries. Noniterative partial Fourier and parallel imaging reconstruction methods with integrated GNL correction reduce the resolution loss that occurs during conventional postreconstruction GNL correction while preserving the computational efficiency of standard reconstruction techniques. Magn Reson Med 75:2534-2544, 2016. © 2015 Wiley Periodicals, Inc.
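
    The partial-Fourier (homodyne) reconstruction discussed here rests on the conjugate symmetry of the Fourier transform of a real-valued image, which lets roughly half of k-space be synthesized rather than acquired. A self-contained illustration of that core assumption on a toy 1-D "image" (pure-Python DFT for clarity; this sketches only the symmetry principle, not the authors' GNL-integrated pipeline, which also needs a low-resolution phase correction for the non-real phase of actual MR data):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# Real-valued "image" row.
img = [0.0, 1.0, 3.0, 2.0, 5.0, 4.0, 1.0, 0.5]
N = len(img)

# Acquire only half of k-space (k = 0 .. N/2), as in partial-Fourier imaging.
half = dft(img)[: N // 2 + 1]

# Synthesize the missing half from conjugate symmetry: X[k] = conj(X[N - k]).
kspace = half + [half[N - k].conjugate() for k in range(N // 2 + 1, N)]

# Inverse transform of the completed k-space recovers the real image.
recon = [z.real for z in idft(kspace)]
assert all(abs(a - b) < 1e-9 for a, b in zip(recon, img))
```

    The record's point is that geometric (GNL) distortion correction should be folded into this reconstruction step rather than applied to the finished image, where interpolation blurs fine detail.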

  10. PAGANI Toolkit: Parallel graph-theoretical analysis package for brain network big data.

    PubMed

    Du, Haixiao; Xia, Mingrui; Zhao, Kang; Liao, Xuhong; Yang, Huazhong; Wang, Yu; He, Yong

    2018-05-01

    The recent collection of unprecedented quantities of neuroimaging data with high spatial resolution has led to brain network big data. However, a toolkit for fast and scalable computational solutions is still lacking. Here, we developed the PArallel Graph-theoretical ANalysIs (PAGANI) Toolkit based on a hybrid central processing unit-graphics processing unit (CPU-GPU) framework with a graphical user interface to facilitate the mapping and characterization of high-resolution brain networks. Specifically, the toolkit provides flexible parameters for users to customize computations of graph metrics in brain network analyses. As an empirical example, the PAGANI Toolkit was applied to individual voxel-based brain networks with ∼200,000 nodes that were derived from a resting-state fMRI dataset of 624 healthy young adults from the Human Connectome Project. Using a personal computer, this toolbox completed all computations in ∼27 h for one subject, which is markedly less than the 118 h required with a single-thread implementation. The voxel-based functional brain networks exhibited prominent small-world characteristics and densely connected hubs, which were mainly located in the medial and lateral fronto-parietal cortices. Moreover, the female group had significantly higher modularity and nodal betweenness centrality mainly in the medial/lateral fronto-parietal and occipital cortices than the male group. Significant correlations between the intelligence quotient and nodal metrics were also observed in several frontal regions. Collectively, the PAGANI Toolkit shows high computational performance and good scalability for analyzing connectome big data and provides a friendly interface without the complicated configuration of computing environments, thereby facilitating high-resolution connectomics research in health and disease. © 2018 Wiley Periodicals, Inc.
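
    The graph metrics such a toolkit computes (for example the small-world indicators: clustering coefficient and characteristic path length) reduce to independent per-node loops, which is what makes CPU-GPU parallelization effective. A serial pure-Python sketch on a toy adjacency list (illustrative only; PAGANI's own data structures and kernels are not shown in this record):

```python
from collections import deque

def clustering(adj):
    """Average clustering coefficient: fraction of closed neighbor pairs per node."""
    coeffs = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for i, a in enumerate(nbrs) for b in nbrs[i + 1:]
                    if b in adj[a])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

def char_path_length(adj):
    """Mean shortest-path length over all node pairs (one BFS per source node)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

# 4-node toy network: ring 0-1-2-3 plus chord 0-2.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
print(clustering(adj), char_path_length(adj))
```

    At the ∼200,000-node voxel scale reported above, each source-node BFS and each node's neighbor scan is an independent task, so the hybrid framework can distribute them across CPU threads and GPU blocks.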

  11. Parallel hyperbolic PDE simulation on clusters: Cell versus GPU

    NASA Astrophysics Data System (ADS)

    Rostrup, Scott; De Sterck, Hans

    2010-12-01

    Increasingly, high-performance computing is looking towards data-parallel computational devices to enhance computational performance. Two technologies that have received significant attention are IBM's Cell Processor and NVIDIA's CUDA programming model for graphics processing unit (GPU) computing. In this paper we investigate the acceleration of parallel hyperbolic partial differential equation simulation on structured grids with explicit time integration on clusters with Cell and GPU backends. The message passing interface (MPI) is used for communication between nodes at the coarsest level of parallelism. Optimizations of the simulation code at the several finer levels of parallelism that the data-parallel devices provide are described in terms of data layout, data flow and data-parallel instructions. Optimized Cell and GPU performance are compared with reference code performance on a single x86 central processing unit (CPU) core in single and double precision. We further compare the CPU, Cell and GPU platforms on a chip-to-chip basis, and compare performance on single cluster nodes with two CPUs, two Cell processors or two GPUs in a shared memory configuration (without MPI). We finally compare performance on clusters with 32 CPUs, 32 Cell processors, and 32 GPUs using MPI. Our GPU cluster results use NVIDIA Tesla GPUs with GT200 architecture, but some preliminary results on recently introduced NVIDIA GPUs with the next-generation Fermi architecture are also included. This paper provides computational scientists and engineers who are considering porting their codes to accelerator environments with insight into how structured grid based explicit algorithms can be optimized for clusters with Cell and GPU accelerators. It also provides insight into the speed-up that may be gained on current and future accelerator architectures for this class of applications. 
Program summary
Program title: SWsolver
Catalogue identifier: AEGY_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGY_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPL v3
No. of lines in distributed program, including test data, etc.: 59 168
No. of bytes in distributed program, including test data, etc.: 453 409
Distribution format: tar.gz
Programming language: C, CUDA
Computer: Parallel computing clusters. Individual compute nodes may consist of x86 CPU, Cell processor, or x86 CPU with attached NVIDIA GPU accelerator.
Operating system: Linux
Has the code been vectorised or parallelized?: Yes. Tested on 1-128 x86 CPU cores, 1-32 Cell processors, and 1-32 NVIDIA GPUs.
RAM: Tested on problems requiring up to 4 GB per compute node.
Classification: 12
External routines: MPI, CUDA, IBM Cell SDK
Nature of problem: MPI-parallel simulation of the shallow water equations using a high-resolution 2D hyperbolic equation solver on regular Cartesian grids for x86 CPU, Cell processor, and NVIDIA GPU (CUDA).
Solution method: SWsolver provides three implementations of a high-resolution 2D shallow water equation solver on regular Cartesian grids, for CPU, Cell processor, and NVIDIA GPU. Each implementation uses MPI to divide work across a parallel computing cluster.
Additional comments: Sub-program numdiff is used for the test run.
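
    SWsolver's problem class, explicit conservation-law updates on a regular grid, can be miniaturized to show the structure of the kernel each backend optimizes. A serial 1-D Lax-Friedrichs sketch of the shallow water update (first-order, periodic boundaries; the production code uses a higher-resolution 2-D scheme and MPI halo exchange, none of which is shown here):

```python
g = 9.81  # gravitational acceleration

def flux(h, hu):
    """Physical flux of the 1-D shallow water equations."""
    return hu, hu * hu / h + 0.5 * g * h * h

def lax_friedrichs_step(h, hu, dx, dt):
    """One explicit Lax-Friedrichs update with periodic boundaries."""
    n = len(h)
    fh, fhu = zip(*(flux(h[i], hu[i]) for i in range(n)))
    new_h, new_hu = [], []
    for i in range(n):
        l, r = (i - 1) % n, (i + 1) % n
        new_h.append(0.5 * (h[l] + h[r]) - dt / (2 * dx) * (fh[r] - fh[l]))
        new_hu.append(0.5 * (hu[l] + hu[r]) - dt / (2 * dx) * (fhu[r] - fhu[l]))
    return new_h, new_hu

# Dam-break-like initial condition on a periodic domain.
n, dx, dt = 64, 1.0 / 64, 0.001  # dt chosen well inside the CFL limit
h = [2.0 if i < n // 2 else 1.0 for i in range(n)]
hu = [0.0] * n
mass0 = sum(h) * dx
for _ in range(100):
    h, hu = lax_friedrichs_step(h, hu, dx, dt)
assert abs(sum(h) * dx - mass0) < 1e-9  # conservative scheme preserves total mass
```

    The per-cell update depends only on immediate neighbors, which is why the method maps naturally onto MPI domain decomposition at the coarse level and onto SIMD/SPE/CUDA-thread parallelism within a node, as the paper describes.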

  12. Acoustic phonons in chrysotile asbestos probed by high-resolution inelastic x-ray scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamontov, Eugene; Vakhrushev, S. B.; Kumzerov, Yu. A.

    Acoustic phonons in an individual, oriented fiber of chrysotile asbestos (chemical formula Mg3Si2O5(OH)4) were observed at room temperature in an inelastic x-ray measurement with very high (meV) resolution. The x-ray scattering vector was aligned along the [1 0 0] direction of the reciprocal lattice, nearly parallel to the long axis of the fiber. The latter coincides with the [1 0 0] direction of the direct lattice and the axes of the nano-channels. The data were analyzed using a damped harmonic oscillator model. Analysis of the phonon dispersion in the first Brillouin zone yielded a longitudinal sound velocity of (9200 ± 600) m/s.

  13. Lightweight and High-Resolution Single Crystal Silicon Optics for X-ray Astronomy

    NASA Technical Reports Server (NTRS)

    Zhang, William W.; Biskach, Michael P.; Chan, Kai-Wing; Mazzarella, James R.; McClelland, Ryan S.; Riveros, Raul E.; Saha, Timo T.; Solly, Peter M.

    2016-01-01

    We describe an approach to building mirror assemblies for next generation X-ray telescopes. It incorporates knowledge and lessons learned from building existing telescopes, including Chandra, XMM-Newton, Suzaku, and NuSTAR, as well as from our direct experience over the last 15 years developing mirror technology for the Constellation-X and International X-ray Observatory mission concepts. This approach combines single crystal silicon and precision polishing, and thus has the potential to achieve the highest possible angular resolution with the least possible mass. Moreover, it is simple, consisting of several technical elements that can be developed independently in parallel. Lastly, it is highly amenable to mass production, enabling telescopes with very large photon-collecting areas.

  14. Fast 3D Surface Extraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sewell, Christopher Meyer; Patchett, John M.; Ahrens, James P.

    Ocean scientists searching for isosurfaces and/or thresholds of interest in high-resolution 3D datasets previously faced a tedious and time-consuming interactive exploration process. PISTON research and development activities enable them to rapidly and interactively explore isosurfaces and thresholds in their large data sets using a simple slider, with real-time calculation and visualization of these features. Ocean scientists can now visualize more features in less time, gaining a better understanding of the high-resolution data sets they work with daily. Isosurface timings (512³ grid): VTK 7.7 s; parallel VTK (48-core) 1.3 s; PISTON OpenMP (48-core) 0.2 s; PISTON CUDA (Quadro 6000) 0.1 s.
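
    The isosurface/threshold query behind those timings is embarrassingly parallel over grid cells. A serial sketch of the per-cell classification kernel on a tiny 2-D field (illustrative only, not the PISTON API; the toolkit maps exactly this kind of loop onto OpenMP threads or CUDA blocks):

```python
def cells_crossing(values, nx, ny, iso):
    """Classify which 2-D grid cells an isocontour passes through:
    a cell is crossed when the isovalue lies between its corner extremes."""
    crossed = []
    for j in range(ny - 1):
        for i in range(nx - 1):
            corners = [values[j * nx + i], values[j * nx + i + 1],
                       values[(j + 1) * nx + i], values[(j + 1) * nx + i + 1]]
            if min(corners) <= iso <= max(corners):
                crossed.append((i, j))
    return crossed

# 4x4 row-major scalar field; the iso=2.5 contour separates low from high values.
field = [1, 1, 1, 1,
         1, 2, 2, 1,
         1, 2, 5, 3,
         1, 1, 3, 4]
print(cells_crossing(field, 4, 4, 2.5))
```

    Because each cell test is independent, re-running the classification as a slider moves the isovalue is a pure data-parallel sweep, which is why the GPU version above responds in fractions of a second on a 512³ grid.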

  15. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    NASA Astrophysics Data System (ADS)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms designed specifically for GPU hardware architectures is of great significance. To address the high computational complexity and poor real-time performance of blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. A midfrequency-based filtering method is also used to restore the image with hardly any recursion or iteration. By combining the algorithm's data intensiveness with data-parallel computing and the GPU's single-instruction, multiple-thread execution model, a new parallel midfrequency-based algorithm for blind image restoration is proposed that is well suited to GPU stream computing. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. For better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of data access and of the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data so that the transmission rate works around the memory bandwidth limitation. The results show that the new algorithm significantly increases operational speed and effectively improves the real-time performance of image restoration, especially for high-resolution images.

  16. Dissecting Cell-Type Composition and Activity-Dependent Transcriptional State in Mammalian Brains by Massively Parallel Single-Nucleus RNA-Seq.

    PubMed

    Hu, Peng; Fabyanic, Emily; Kwon, Deborah Y; Tang, Sheng; Zhou, Zhaolan; Wu, Hao

    2017-12-07

    Massively parallel single-cell RNA sequencing can precisely resolve cellular diversity in a high-throughput manner at low cost, but unbiased isolation of intact single cells from complex tissues such as adult mammalian brains is challenging. Here, we integrate sucrose-gradient-assisted purification of nuclei with droplet microfluidics to develop a highly scalable single-nucleus RNA-seq approach (sNucDrop-seq), which is free of enzymatic dissociation and nucleus sorting. By profiling ∼18,000 nuclei isolated from cortical tissues of adult mice, we demonstrate that sNucDrop-seq not only accurately reveals neuronal and non-neuronal subtype composition with high sensitivity but also enables in-depth analysis of transient transcriptional states driven by neuronal activity, at single-cell resolution, in vivo. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Combined dispersive/interference spectroscopy for producing a vector spectrum

    DOEpatents

    Erskine, David J.

    2002-01-01

    A method of measuring the spectral properties of broadband waves that combines interferometry with a wavelength disperser having many spectral channels to produce a fringing spectrum. Spectral mapping, Doppler shifts, metrology of angles and distances, and secondary effects such as temperature, pressure, and acceleration which change an interferometer cavity length can be measured accurately by a compact instrument using broadband illumination. Broadband illumination avoids the fringe-skip ambiguities of monochromatic waves. The interferometer provides arbitrarily high spectral resolution, simple instrument response, compactness, low cost, high field of view, and high efficiency. The inclusion of a disperser increases fringe visibility and signal-to-noise ratio over an interferometer used alone for broadband waves. The fringing spectrum is represented as a wavelength-dependent 2-d vector, which describes the fringe amplitude and phase. Vector mathematics such as generalized dot products rapidly computes average broadband phase shifts to high accuracy. A Moire effect between the interferometer's sinusoidal transmission and the illumination heterodynes high-resolution spectral detail to low spectral detail, allowing the use of a low-resolution disperser. Multiple parallel interferometer cavities of fixed delay allow the instantaneous mapping of a spectrum, with an instrument more compact for the same spectral resolution than a conventional dispersive spectrometer, and not requiring a scanning delay.
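
    The 2-d fringe vector described above amounts to projecting each spectral channel's sinusoidal fringe onto quadrature (cos/sin) basis vectors, i.e. a pair of dot products. A small numerical sketch of that per-channel operation (sample count and fringe parameters are illustrative, not from the patent):

```python
import math

def fringe_vector(samples):
    """Project fringe samples (one full period, uniform phase steps) onto
    cos/sin bases to get the 2-d fringe vector; return (amplitude, phase)."""
    N = len(samples)
    c = 2.0 / N * sum(v * math.cos(2 * math.pi * j / N) for j, v in enumerate(samples))
    s = 2.0 / N * sum(v * math.sin(2 * math.pi * j / N) for j, v in enumerate(samples))
    return math.hypot(c, s), math.atan2(-s, c)

# Synthetic channel: background 1.0, fringe amplitude 0.5, fringe phase 0.7 rad.
A, B, phi = 1.0, 0.5, 0.7
N = 16
samples = [A + B * math.cos(2 * math.pi * j / N + phi) for j in range(N)]
amp, phase = fringe_vector(samples)
assert abs(amp - B) < 1e-12 and abs(phase - phi) < 1e-12
```

    A Doppler shift appears as a common rotation of these per-channel vectors, so averaging phase shifts across many channels via dot products is numerically cheap and robust, as the record states.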

  18. Modern gyrokinetic particle-in-cell simulation of fusion plasmas on top supercomputers

    DOE PAGES

    Wang, Bei; Ethier, Stephane; Tang, William; ...

    2017-06-29

    The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization, have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon Phi (MIC) co-processors and performance comparisons with state-of-the-art homogeneous HPC systems such as Blue Gene/Q. New discovery science capabilities in the magnetic fusion energy application domain are enabled, including investigations of Ion-Temperature-Gradient (ITG) driven turbulence simulations with unprecedented spatial resolution and long temporal duration. Performance studies with realistic fusion experimental parameters are carried out on multiple supercomputing systems spanning a wide range of cache capacities, cache-sharing configurations, memory bandwidth, interconnects and network topologies. These performance comparisons using a realistic discovery-science-capable domain application code provide valuable insights on optimization techniques across one of the broadest sets of current high-end computing platforms worldwide.


  20. Adaptive optics parallel spectral domain optical coherence tomography for imaging the living retina

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Rha, Jungtae; Jonnal, Ravi S.; Miller, Donald T.

    2005-06-01

    Although optical coherence tomography (OCT) can axially resolve and detect reflections from individual cells, there are no reports of imaging cells in the living human retina using OCT. To supplement the axial resolution and sensitivity of OCT with the necessary lateral resolution and speed, we developed a novel spectral domain OCT (SD-OCT) camera based on a free-space parallel illumination architecture and equipped with adaptive optics (AO). Conventional flood illumination, also with AO, was integrated into the camera and provided confirmation of the focus position in the retina with an accuracy of ±10.3 μm. Short bursts of narrow B-scans (100 × 560 μm) of the living retina were subsequently acquired at 500 Hz during dynamic compensation (up to 14 Hz) that successfully corrected the most significant ocular aberrations across a dilated 6 mm pupil. Camera sensitivity (up to 94 dB) was sufficient for observing reflections from essentially all neural layers of the retina. Signal-to-noise of the detected reflection from the photoreceptor layer was highly sensitive to the level of ocular aberrations and defocus, with changes of 11.4 and 13.1 dB (single pass) observed when the ocular aberrations (astigmatism, 3rd order and higher) were corrected and when the focus was shifted by 200 μm (0.54 diopters) in the retina, respectively. The 3D resolution of the B-scans (3.0 × 3.0 × 5.7 μm) is the highest reported to date in the living human eye and was sufficient to observe the interface between the inner and outer segments of individual photoreceptor cells, resolved in both lateral and axial dimensions. However, high contrast speckle, which is intrinsic to OCT, was present throughout the AO parallel SD-OCT B-scans and obstructed correlating retinal reflections to cell-sized retinal structures.

  1. Signal enhancement due to high-Z nanofilm electrodes in parallel plate ionization chambers with variable microgaps.

    PubMed

    Brivio, Davide; Sajo, Erno; Zygmanski, Piotr

    2017-12-01

    We developed a method for measuring signal enhancement produced by high-Z nanofilm electrodes in parallel plate ionization chambers with variable thickness microgaps. We used a laboratory-made variable gap parallel plate ionization chamber with nanofilm electrodes made of aluminum-aluminum (Al-Al) and aluminum-tantalum (Al-Ta). The electrodes were evaporated on 1 mm thick glass substrates. The interelectrode air gap was varied from 3 μm to 1 cm. The gap size was measured using a digital micrometer and confirmed by capacitance measurements. The electric field in the chamber was kept between 0.1 kV/cm and 1 kV/cm for all gap sizes by applying appropriate compensating voltages. The chamber was exposed to 120 kVp X-rays. The current was measured using a commercial data acquisition system with a temporal resolution of 600 Hz. In addition, radiation transport simulations were carried out to characterize the dose, D(x), high-energy electron current, J(x), and deposited charge, Q(x), as a function of distance, x, from the electrodes. A deterministic method was selected over Monte Carlo due to its ability to produce results with 10 nm spatial resolution without stochastic uncertainties. The experimental signal enhancement ratio, SER(G), which we defined as the ratio of the signal for Al-air-Ta to that for Al-air-Al at each gap size, was compared to computations. The individual contributions of dose, electron current, and charge deposition to the signal enhancement were determined. Experimental signals matched computed data for all gap sizes after accounting for several contributions to the signal: (a) charge carriers generated via ionization due to the energy deposited in the air gap, D(x); (b) high-energy electron current, J(x), leaking from the high-Z electrode (Ta) toward the low-Z electrode (Al); (c) deposited charge in the air gap, Q(x); and (d) the decreased collection efficiency for large gaps (>~500 μm).
Q(x) accounts for the electrons below 100 eV, which are regarded as stopped by the radiation transport code but which can move and form an electron current in small gaps (<100 μm). While the total energy deposited in the air gap increases with gap size for both samples, the average high-energy current and deposited charge decrease moderately with the air gap. When gap sizes are smaller than ~20 μm, the contribution to the signal from dose approaches zero, while contributions from the high-energy current and deposited charges give rise to an offset signal. The measured signal enhancement ratio (SER) was 40.0 ± 5.0 for the 3 μm gap and decreased rapidly with gap size, to 9.9 ± 1.2 for the 21 μm gap and 6.6 ± 0.3 for the 100 μm gap. The uncertainties in SER were mostly due to uncertainties in the gap size and the data acquisition system. We developed an experimental method to determine the signal enhancement due to high-Z nanolayers in parallel plate ionization chambers with micrometer spatial resolution. As the water-equivalent thicknesses of these air gaps are 3 nm to 10 μm, the method may also be applicable for nanoscopic spatial resolution of other gap materials. The method may be extended to solid insulator materials with low Z. © 2017 American Association of Physicists in Medicine.

  2. A robust multi-shot scan strategy for high-resolution diffusion weighted MRI enabled by multiplexed sensitivity-encoding (MUSE)

    PubMed Central

    Chen, Nan-kuei; Guidon, Arnaud; Chang, Hing-Chiu; Song, Allen W.

    2013-01-01

    Diffusion weighted magnetic resonance imaging (DWI) data have been mostly acquired with single-shot echo-planar imaging (EPI) to minimize motion induced artifacts. The spatial resolution, however, is inherently limited in single-shot EPI, even when parallel imaging (usually at an acceleration factor of 2) is incorporated. Multi-shot acquisition strategies could potentially achieve higher spatial resolution and fidelity, but they are generally susceptible to motion-induced phase errors among excitations that are exacerbated by diffusion sensitizing gradients, rendering the reconstructed images unusable. It has been shown that shot-to-shot phase variations may be corrected using navigator echoes, but at the cost of imaging throughput. To address these challenges, a novel and robust multi-shot DWI technique, termed multiplexed sensitivity-encoding (MUSE), is developed here to reliably and inherently correct nonlinear shot-to-shot phase variations without the use of navigator echoes. The performance of the MUSE technique is confirmed experimentally in healthy adult volunteers on 3 Tesla MRI systems. This newly developed technique should prove highly valuable for mapping brain structures and connectivities at high spatial resolution for neuroscience studies. PMID:23370063

  3. Dependence of energy resolution of a plane-parallel HPGe detector on bias voltage upon registration of low-energy X-rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samedov, V. V., E-mail: v-samedov@yandex.ru

    2016-12-15

    In this study, we theoretically analyze the processes in a plane-parallel high-purity germanium (HPGe) detector. The generating function of factorial moments describing the process of registration of low-energy X-rays by the HPGe detector with consideration of capture of charge carriers by traps is obtained. It is demonstrated that the coefficients of expansion of the average signal amplitude and variance in power series over the quantity inversely proportional to the bias voltage of the detector allow one to determine the Fano factor, the product of the charge carrier lifetime and mobility, and other characteristics of the semiconductor material of the detector.

  4. Dynamical diffraction imaging (topography) with X-ray synchrotron radiation

    NASA Technical Reports Server (NTRS)

    Kuriyama, M.; Steiner, B. W.; Dobbyn, R. C.

    1989-01-01

    By contrast to electron microscopy, which yields information on the location of features in small regions of materials, X-ray diffraction imaging can portray minute deviations from perfect crystalline order over larger areas. Synchrotron radiation-based X-ray optics technology uses a highly parallel incident beam to eliminate ambiguities in the interpretation of image details; scattering phenomena previously unobserved are now readily detected. Synchrotron diffraction imaging renders high-resolution, real-time, in situ observations of materials under pertinent environmental conditions possible.

  5. Choice of Grating Orientation for Evaluation of Peripheral Vision

    PubMed Central

    Venkataraman, Abinaya Priya; Winter, Simon; Rosén, Robert; Lundström, Linda

    2016-01-01

    ABSTRACT Purpose Peripheral resolution acuity depends on the orientation of the stimuli. However, it is uncertain if such a meridional effect also exists for peripheral detection tasks because they are affected by optical errors. Knowledge of the quantitative differences in acuity for different grating orientations is crucial for choosing the appropriate stimuli for evaluations of peripheral resolution and detection tasks. We assessed resolution and detection thresholds for different grating orientations in the peripheral visual field. Methods Resolution and detection thresholds were evaluated for gratings of four different orientations in eight different visual field meridians in the 20-deg visual field in white light. Detection measurements in monochromatic light (543 nm; bandwidth, 10 nm) were also performed to evaluate the effects of chromatic aberration on the meridional effect. A combination of trial lenses and adaptive optics system was used to correct the monochromatic lower- and higher-order aberrations. Results For both resolution and detection tasks, gratings parallel to the visual field meridian had better threshold compared with the perpendicular gratings, whereas the two oblique gratings had similar thresholds. The parallel and perpendicular grating acuity differences for resolution and detection tasks were 0.16 logMAR and 0.11 logMAD, respectively. Elimination of chromatic errors did not affect the meridional preference in detection acuity. Conclusions Similar to peripheral resolution, detection also shows a meridional effect that appears to have a neural origin. The threshold difference seen for parallel and perpendicular gratings suggests the use of two oblique gratings as stimuli in alternative forced-choice procedures for peripheral vision evaluation to reduce measurement variation. PMID:26889822
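    The alternative forced-choice procedure recommended in the conclusion can be illustrated with a generic 1-up/2-down staircase. This is a textbook scheme, not the authors' exact protocol, and the observer model is hypothetical:

```python
import random

def staircase_2afc(psychometric, start, step, n_trials=200, seed=0):
    """1-up/2-down staircase: two correct in a row -> harder (level down),
    one wrong -> easier (level up). `psychometric(level)` returns the
    probability of a correct response; converges near the 70.7% point."""
    rng = random.Random(seed)
    level, streak, last_dir, reversals = start, 0, 0, []
    for _ in range(n_trials):
        if rng.random() < psychometric(level):
            streak += 1
            if streak == 2:
                streak = 0
                if last_dir == +1:
                    reversals.append(level)   # direction changed: a reversal
                level, last_dir = level - step, -1
        else:
            streak = 0
            if last_dir == -1:
                reversals.append(level)
            level, last_dir = level + step, +1
    tail = reversals[-6:] or [level]
    return sum(tail) / len(tail)              # mean of last reversals

# Hypothetical observer whose performance improves with stimulus level
thr = staircase_2afc(lambda x: 0.5 + 0.5 / (1.0 + 2.72 ** (-(x - 1.0))),
                     start=2.0, step=0.1)
print(round(thr, 2))
```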

  6. Choice of Grating Orientation for Evaluation of Peripheral Vision.

    PubMed

    Venkataraman, Abinaya Priya; Winter, Simon; Rosén, Robert; Lundström, Linda

    2016-06-01

    Peripheral resolution acuity depends on the orientation of the stimuli. However, it is uncertain if such a meridional effect also exists for peripheral detection tasks because they are affected by optical errors. Knowledge of the quantitative differences in acuity for different grating orientations is crucial for choosing the appropriate stimuli for evaluations of peripheral resolution and detection tasks. We assessed resolution and detection thresholds for different grating orientations in the peripheral visual field. Resolution and detection thresholds were evaluated for gratings of four different orientations in eight different visual field meridians in the 20-deg visual field in white light. Detection measurements in monochromatic light (543 nm; bandwidth, 10 nm) were also performed to evaluate the effects of chromatic aberration on the meridional effect. A combination of trial lenses and adaptive optics system was used to correct the monochromatic lower- and higher-order aberrations. For both resolution and detection tasks, gratings parallel to the visual field meridian had better threshold compared with the perpendicular gratings, whereas the two oblique gratings had similar thresholds. The parallel and perpendicular grating acuity differences for resolution and detection tasks were 0.16 logMAR and 0.11 logMAD, respectively. Elimination of chromatic errors did not affect the meridional preference in detection acuity. Similar to peripheral resolution, detection also shows a meridional effect that appears to have a neural origin. The threshold difference seen for parallel and perpendicular gratings suggests the use of two oblique gratings as stimuli in alternative forced-choice procedures for peripheral vision evaluation to reduce measurement variation.

  7. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models at sufficient spatial and temporal resolution. Geological heterogeneity and uncertainty further increase the challenges of such modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational times than those in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with thousands of processors have become available to scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolutions within a reasonable time. However, to make them a useful tool, several practical obstacles must be tackled so that large numbers of processors can be used effectively by general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector and scalar) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, computational performance was measured for three simulations with multi-million-grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolution to capture the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale. The performance measurements confirmed that both simulators exhibit excellent scalability, showing almost linear speedup with the number of processors up to more than ten thousand cores. This generally allows coupled multi-physics (THC) simulations on high-resolution geologic models with multi-million grids to be performed in a practical time (e.g., less than a second per time step).
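    The reported near-linear speedup can be checked with the usual strong-scaling bookkeeping; a minimal sketch with made-up runtimes (not the paper's measurements):

```python
def scaling_metrics(times):
    """Compute speedup and parallel efficiency from {core_count: runtime_s},
    using the smallest core count as the strong-scaling baseline."""
    base_p = min(times)
    base_t = times[base_p]
    return {p: (base_t / t, (base_t / t) / (p / base_p))
            for p, t in sorted(times.items())}

# Hypothetical strong-scaling runtimes for a multi-million-grid run
runs = {1024: 4000.0, 2048: 2100.0, 4096: 1150.0, 8192: 650.0}
metrics = scaling_metrics(runs)
for p, (s, e) in metrics.items():
    print(f"{p:5d} cores: speedup {s:5.2f}x, efficiency {e:4.0%}")
```

Near-linear scaling corresponds to efficiency staying close to 100% as the core count grows.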

  8. Imaging characteristics of scintimammography using parallel-hole and pinhole collimators

    NASA Astrophysics Data System (ADS)

    Tsui, B. M. W.; Wessell, D. E.; Zhao, X. D.; Wang, W. T.; Lewis, D. P.; Frey, E. C.

    1998-08-01

    The purpose of the study is to investigate the imaging characteristics of scintimammography (SM) using parallel-hole (PR) and pinhole (PN) collimators in a clinical setting. Experimental data were acquired from a phantom that models the breast with small lesions using a low energy high resolution (LEHR) PR and a PN collimator. At close distances, the PN collimator provides better spatial resolution and higher detection efficiency than the PR collimator, at the expense of a smaller field-of-view (FOV). Detection of small breast lesions can be further enhanced by noise smoothing, field uniformity correction, scatter subtraction and resolution recovery filtering. Monte Carlo (MC) simulation data were generated from the 3D MCAT phantom that realistically models the Tc-99m sestamibi uptake and attenuation distributions in an average female patient. For both PR and PN collimation, the scatter to primary ratio (S/P) decreases from the base of the breast to the nipple and is higher in the left than right breast due to scatter of photons from the heart. Results from the study add to understanding of the imaging characteristics of SM using PR and PN collimators and assist in the design of data acquisition and image processing methods to enhance the detection of breast lesions using SM.

  9. Super-Resolution in Plenoptic Cameras Using FPGAs

    PubMed Central

    Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime

    2014-01-01

    Plenoptic cameras are a new type of sensor that extends the possibilities of current commercial cameras, allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes. PMID:24841246
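    For intuition about the data layout such an algorithm operates on, here is a minimal NumPy sketch (not the FPGA implementation) that splits an idealized plenoptic-1.0 raw image into its sub-aperture views:

```python
import numpy as np

def subaperture_views(raw, u, v):
    """Split a plenoptic-1.0 raw image into u*v sub-aperture views.
    Assumes each microlens covers an ideal u x v pixel block; real sensors
    need calibration for microlens centers, rotation, and vignetting."""
    H, W = raw.shape
    assert H % u == 0 and W % v == 0
    # views[i, j] collects pixel (i, j) from under every microlens
    return np.array([[raw[i::u, j::v] for j in range(v)] for i in range(u)])

# Toy raw image: 6x6 sensor with 2x2 pixels per microlens -> 4 views of 3x3
raw = np.arange(36.0).reshape(6, 6)
views = subaperture_views(raw, 2, 2)
print(views.shape)  # (2, 2, 3, 3)
```

Super-resolution methods then fuse these mutually shifted low-resolution views into one higher-resolution image.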

  10. Super-resolution in plenoptic cameras using FPGAs.

    PubMed

    Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime

    2014-05-16

    Plenoptic cameras are a new type of sensor that extends the possibilities of current commercial cameras, allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes.

  11. A data distributed parallel algorithm for ray-traced volume rendering

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Painter, James S.; Hansen, Charles D.; Krogh, Michael F.

    1993-01-01

    This paper presents a divide-and-conquer ray-traced volume rendering algorithm and a parallel image compositing method, along with their implementation and performance on the Connection Machine CM-5 and networked workstations. This algorithm distributes both the data and the computations to individual processing units to achieve fast, high-quality rendering of high-resolution data. The volume data, once distributed, is left intact. The processing nodes perform local ray tracing of their subvolumes concurrently. No communication between processing units is needed during this local ray-tracing process. A subimage is generated by each processing unit, and the final image is obtained by compositing subimages in the proper order, which can be determined a priori. Test results on both the CM-5 and a group of networked workstations demonstrate the practicality of our rendering algorithm and compositing method.
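    The a-priori-ordered compositing step can be sketched with the standard "over" operator; a minimal NumPy illustration, not the CM-5 code:

```python
import numpy as np

def composite_back_to_front(subimages):
    """Blend (rgb, alpha) subimages with the 'over' operator, assuming the
    list is already sorted back-to-front (the a priori order in the paper)."""
    h, w = subimages[0][0].shape[:2]
    rgb = np.zeros((h, w, 3))
    for color, alpha in subimages:            # back first, front last
        a = alpha[..., None]
        rgb = color * a + rgb * (1.0 - a)     # 'over' compositing
    return rgb

# Two 1x1 'subimages': opaque red behind, half-transparent blue in front
back  = (np.array([[[1.0, 0.0, 0.0]]]), np.array([[1.0]]))
front = (np.array([[[0.0, 0.0, 1.0]]]), np.array([[0.5]]))
out = composite_back_to_front([back, front])
print(out[0, 0])   # 50/50 red-blue mix
```

Because "over" is associative, subimages can also be combined pairwise in parallel, which is what makes tree-structured parallel compositing possible.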

  12. High precision electric gate for time-of-flight ion mass spectrometers

    NASA Technical Reports Server (NTRS)

    Sittler, Edward C. (Inventor)

    2011-01-01

    A time-of-flight mass spectrometer having a chamber with electrodes to generate an electric field in the chamber and electric gating for allowing ions with a predetermined mass and velocity into the electric field. The design uses a row of very thin, parallel-aligned wires that are pulsed in sequence so that an ion can pass through the gap between two parallel plates, which are biased to prevent passage of the ion. This design by itself can provide high mass resolution and a very precise start pulse for an ion mass spectrometer. Furthermore, an ion will only pass through the chamber if it is within a wire diameter of the first wire when it is pulsed and has the right speed, so that it is near each subsequent wire when that wire is pulsed.
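    The velocity selection can be illustrated with simple kinematics; the geometry and numbers below are hypothetical, not values from the patented design:

```python
import math

def pulse_schedule(mass_kg, energy_eV, spacing_m, n_wires):
    """An ion passes the gate only if it is near each wire when that wire
    fires, so the per-wire pulse delay must equal the transit time between
    wires for the selected velocity (hypothetical geometry)."""
    v = math.sqrt(2.0 * energy_eV * 1.602e-19 / mass_kg)   # ion speed (m/s)
    dt = spacing_m / v                                     # per-wire delay (s)
    return v, [i * dt for i in range(n_wires)]

m_p = 1.67e-27                       # roughly the proton mass (kg)
v, delays = pulse_schedule(m_p, 1000.0, 1e-3, 4)   # 1 keV ion, 1 mm spacing
print(f"v = {v:.3e} m/s, per-wire step = {delays[1] * 1e9:.1f} ns")
```

Ions with a different speed arrive at later wires out of phase with the pulses and are deflected by the biased plates.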

  13. Super-resolution for asymmetric resolution of FIB-SEM 3D imaging using AI with deep learning.

    PubMed

    Hagita, Katsumi; Higuchi, Takeshi; Jinnai, Hiroshi

    2018-04-12

    Scanning electron microscopy equipped with a focused ion beam (FIB-SEM) is a promising three-dimensional (3D) imaging technique for nano- and meso-scale morphologies. In FIB-SEM, the specimen surface is stripped by an ion beam and imaged by an SEM installed orthogonally to the FIB. The lateral resolution is governed by the SEM, while the depth resolution, i.e., the FIB milling direction, is determined by the thickness of the stripped thin layer. In most cases, the lateral resolution is superior to the depth resolution; hence, asymmetric resolution is generated in the 3D image. Here, we propose a new approach based on an image-processing or deep-learning-based method for super-resolution of 3D images with such asymmetric resolution, so as to restore the depth resolution to achieve symmetric resolution. The deep-learning-based method learns from high-resolution sub-images obtained via SEM and recovers low-resolution sub-images parallel to the FIB milling direction. The 3D morphologies of polymeric nano-composites are used as test images, which are subjected to the deep-learning-based method as well as conventional methods. We find that the former yields superior restoration, particularly as the asymmetric resolution is increased. Our super-resolution approach for images having asymmetric resolution enables observation time reduction.
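    One common way to build training data for such a network is to degrade an isotropic volume along the milling axis; a hedged NumPy sketch (the paper's actual pipeline may differ):

```python
import numpy as np

def make_training_pair(volume, z_factor):
    """From an isotropic 3D volume, simulate FIB-SEM-style anisotropy by
    block-averaging along z (the milling axis). The (low-res, high-res)
    pair can then train a super-resolution model (simplified sketch)."""
    z, y, x = volume.shape
    z_crop = (z // z_factor) * z_factor        # drop any trailing slices
    hi = volume[:z_crop]
    lo = hi.reshape(z_crop // z_factor, z_factor, y, x).mean(axis=1)
    return lo, hi

vol = np.random.default_rng(0).random((16, 8, 8))
lo, hi = make_training_pair(vol, 4)
print(lo.shape, hi.shape)   # (4, 8, 8) (16, 8, 8)
```

The network learns the inverse mapping, restoring the depth axis so the 3D image has symmetric resolution.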

  14. Aberration-free superresolution imaging via binary speckle pattern encoding and processing

    NASA Astrophysics Data System (ADS)

    Ben-Eliezer, Eyal; Marom, Emanuel

    2007-04-01

    We present an approach that provides superresolution beyond the classical limit as well as image restoration in the presence of aberrations; in particular, the ability to obtain superresolution while extending the depth of field (DOF) simultaneously is tested experimentally. It is based on an approach, recently proposed, shown to increase the resolution significantly for in-focus images by speckle encoding and decoding. In our approach, an object multiplied by a fine binary speckle pattern may be located anywhere along an extended DOF region. Since the exact magnification is not known in the presence of defocus aberration, the acquired low-resolution image is electronically processed via a parallel-branch decoding scheme, where in each branch the image is multiplied by the same high-resolution synchronized time-varying binary speckle but with different magnification. Finally, a hard-decision algorithm chooses the branch that provides the highest-resolution output image, thus achieving insensitivity to aberrations as well as DOF variations. Simulation as well as experimental results are presented, exhibiting significant resolution improvement factors.
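    The hard-decision stage can be approximated with any sharpness metric; below, gradient energy selects among candidate branches (a simplified stand-in for the paper's decision rule):

```python
import numpy as np

def pick_sharpest(branches):
    """Hard-decision stage: among candidate decoded images (one per assumed
    magnification), keep the one with the most high-frequency energy --
    a simple sharpness proxy, not the paper's exact decision rule."""
    def grad_energy(im):
        gy, gx = np.gradient(im)
        return float((gx**2 + gy**2).sum())
    scores = [grad_energy(im) for im in branches]
    return int(np.argmax(scores)), scores

# Toy branches: a sharp checkerboard vs. smoothed (mismatched) copies
sharp = np.indices((16, 16)).sum(axis=0) % 2.0
blur1 = (sharp + np.roll(sharp, 1, axis=1)) / 2.0   # simple 1D smoothing
blur2 = (blur1 + np.roll(blur1, 1, axis=0)) / 2.0
best, _ = pick_sharpest([blur2, sharp, blur1])
print(best)   # the sharp branch wins
```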

  15. Synergies Between Grace and Regional Atmospheric Modeling Efforts

    NASA Astrophysics Data System (ADS)

    Kusche, J.; Springer, A.; Ohlwein, C.; Hartung, K.; Longuevergne, L.; Kollet, S. J.; Keune, J.; Dobslaw, H.; Forootan, E.; Eicker, A.

    2014-12-01

    In the meteorological community, efforts converge towards implementation of high-resolution (< 12 km) data-assimilating regional climate modelling/monitoring systems based on numerical weather prediction (NWP) cores. This is driven by requirements of improving process understanding, better representation of land surface interactions, atmospheric convection, orographic effects, and better forecasting on shorter timescales. This is relevant for the GRACE community since (1) these models may provide improved atmospheric mass separation / de-aliasing and smaller topography-induced errors, compared to global (ECMWF-Op, ERA-Interim) data, (2) they inherit high temporal resolution from NWP models, (3) parallel efforts towards improving the land surface component and coupling groundwater models may provide realistic hydrological mass estimates with sub-diurnal resolution, and (4) parallel efforts towards re-analyses aim at providing consistent time series. (5) On the other hand, GRACE can help validate models and aids in the identification of processes needing improvement. A coupled atmosphere - land surface - groundwater modelling system is currently being implemented for the European CORDEX region at 12.5 km resolution, based on the TerrSysMP platform (COSMO-EU NWP, CLM land surface and ParFlow groundwater models). We report results from Springer et al. (J. Hydromet., accepted) on validating the water cycle in COSMO-EU using GRACE and precipitation, evapotranspiration and runoff data, confirming that the model performs favorably in representing observations. We show that after GRACE-derived bias correction, basin-average hydrological conditions prior to 2002 can be reconstructed better than before. Next, comparing GRACE with CLM forced by EURO-CORDEX simulations allows identifying processes needing improvement in the model.
    Finally, we compare COSMO-EU atmospheric pressure, a proxy for mass corrections in satellite gravimetry, with ERA-Interim over Europe at timescales shorter and longer than 1 month, and at spatial scales below and above the ERA resolution. We find that differences between the regional and global models are more pronounced at high frequencies, with magnitudes at sub-grid and larger scales corresponding to 1-3 hPa (1-3 cm EWH); this is relevant for the assessment of post-GRACE concepts.

  16. Room temperature X- and gamma-ray detectors using thallium bromide crystals

    NASA Astrophysics Data System (ADS)

    Hitomi, K.; Muroi, O.; Shoji, T.; Suehiro, T.; Hiratate, Y.

    1999-10-01

    Thallium bromide (TlBr) is a compound semiconductor with a wide band gap (2.68 eV) and high X- and γ-ray stopping power. The TlBr crystals were grown by the horizontal travelling molten zone (TMZ) method using purified material. Two types of room-temperature X- and γ-ray detectors were fabricated from the TlBr crystals: TlBr detectors with high detection efficiency for positron annihilation γ-ray (511 keV) detection, and TlBr detectors with high energy resolution for low-energy X-ray detection. The detector of the former type demonstrated an energy resolution of 56 keV FWHM (11%) for 511 keV γ-rays. An energy resolution of 1.81 keV FWHM at 5.9 keV was obtained from the detector of the latter type. In order to analyze the noise characteristics of the detector-preamplifier assembly, the equivalent noise charge (ENC) was measured as a function of the amplifier shaping time for the high-resolution detector. This analysis shows that parallel white noise and 1/f noise were the dominant noise sources in the detector system. Current-voltage characteristics of the TlBr detector with a small Peltier cooler were also measured. A significant reduction of the detector leakage current was observed for the cooled detectors.
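    The shaping-time analysis mentioned at the end rests on the standard ENC² decomposition; a small fit on synthetic data illustrates how the three noise terms are separated (illustrative numbers only, not the paper's measurements):

```python
import numpy as np

# Standard ENC^2 model vs. shaping time tau: parallel white noise grows
# proportionally to tau, series white noise as 1/tau, and 1/f noise is
# independent of tau. Fitting the three terms identifies the dominant one.
tau = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # shaping times (arb. units)
enc2 = 3.0 * tau + 1.5 / tau + 2.0            # synthetic ENC^2 data

A = np.column_stack([tau, 1.0 / tau, np.ones_like(tau)])
(a_par, b_ser, c_flicker), *_ = np.linalg.lstsq(A, enc2, rcond=None)
print(a_par, b_ser, c_flicker)   # recovers ~3.0, ~1.5, ~2.0
```

A large fitted `a_par` relative to the other terms is the signature of dominant parallel white noise, as reported for this detector system.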

  17. A synchrotron radiation microtomography system for the analysis of trabecular bone samples.

    PubMed

    Salomé, M; Peyrin, F; Cloetens, P; Odet, C; Laval-Jeantet, A M; Baruchel, J; Spanne, P

    1999-10-01

    X-ray computed microtomography is particularly well suited for studying trabecular bone architecture, which requires three-dimensional (3-D) images with high spatial resolution. For this purpose, we describe a three-dimensional computed microtomography (microCT) system using synchrotron radiation, developed at ESRF. Since synchrotron radiation provides a monochromatic, high-photon-flux X-ray beam, it allows high-resolution, high signal-to-noise ratio imaging. The principle of the system is based on truly three-dimensional parallel tomographic acquisition. It uses a two-dimensional (2-D) CCD-based detector to record 2-D radiographs of the transmitted beam through the sample under different angles of view. The 3-D tomographic reconstruction, performed by an exact 3-D filtered backprojection algorithm, yields 3-D images with cubic voxels. The spatial resolution of the detector was experimentally measured. For the application to bone investigation, the voxel size was set to 6.65 microm, and the experimental spatial resolution was found to be 11 microm. The reconstructed linear attenuation coefficient was calibrated from hydroxyapatite phantoms. Image processing tools are being developed to extract structural parameters quantifying trabecular bone architecture from the 3-D microCT images. First results on human trabecular bone samples are presented.

  18. Geochemistry of Dissolved Organic Matter in a Spatially Highly Resolved Groundwater Petroleum Hydrocarbon Plume Cross-Section.

    PubMed

    Dvorski, Sabine E-M; Gonsior, Michael; Hertkorn, Norbert; Uhl, Jenny; Müller, Hubert; Griebler, Christian; Schmitt-Kopplin, Philippe

    2016-06-07

    At numerous groundwater sites worldwide, natural dissolved organic matter (DOM) is quantitatively complemented with petroleum hydrocarbons. To date, research has been focused almost exclusively on the contaminants, but detailed insights of the interaction of contaminant biodegradation, dominant redox processes, and interactions with natural DOM are missing. This study linked on-site high resolution spatial sampling of groundwater with high resolution molecular characterization of DOM and its relation to groundwater geochemistry across a petroleum hydrocarbon plume cross-section. Electrospray- and atmospheric pressure photoionization (ESI, APPI) ultrahigh resolution mass spectrometry (FT-ICR-MS) revealed a strong interaction between DOM and reactive sulfur species linked to microbial sulfate reduction, i.e., the key redox process involved in contaminant biodegradation. Excitation emission matrix (EEM) fluorescence spectroscopy in combination with Parallel Factor Analysis (PARAFAC) modeling attributed DOM samples to specific contamination traits. Nuclear magnetic resonance (NMR) spectroscopy evaluated the aromatic compounds and their degradation products in samples influenced by the petroleum contamination and its biodegradation. Our orthogonal high resolution analytical approach enabled a comprehensive molecular level understanding of the DOM with respect to in situ petroleum hydrocarbon biodegradation and microbial sulfate reduction. The role of natural DOM as potential cosubstrate and detoxification reactant may improve future bioremediation strategies.

  19. Image Stability Requirements For a Geostationary Imaging Fourier Transform Spectrometer (GIFTS)

    NASA Technical Reports Server (NTRS)

    Bingham, G. E.; Cantwell, G.; Robinson, R. C.; Revercomb, H. E.; Smith, W. L.

    2001-01-01

    A Geostationary Imaging Fourier Transform Spectrometer (GIFTS) has been selected for the NASA New Millennium Program (NMP) Earth Observing-3 (EO-3) mission. Our paper will discuss one of the key GIFTS measurement requirements, Field of View (FOV) stability, and its impact on required system performance. The GIFTS NMP mission is designed to demonstrate new and emerging sensor and data processing technologies with the goal of making revolutionary improvements in meteorological observational capability and forecasting accuracy. The GIFTS payload is a versatile imaging FTS with programmable spectral resolution and spatial scene selection that allows radiometric accuracy and atmospheric sounding precision to be traded in near real time for area coverage. The GIFTS sensor combines high sensitivity with a massively parallel spatial data collection scheme to allow high spatial resolution measurement of the Earth's atmosphere and rapid broad area coverage. An objective of the GIFTS mission is to demonstrate the advantages of high spatial resolution (4 km ground sample distance - gsd) on temperature and water vapor retrieval by allowing sampling in broken cloud regions. This small gsd, combined with the relatively long scan time required (approximately 10 s) to collect high resolution spectra from geostationary (GEO) orbit, may require extremely good pointing control. This paper discusses the analysis of this requirement.
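    The scale of the pointing problem follows from simple geometry; the 0.1-pixel budget below is an illustrative assumption, not a stated GIFTS requirement:

```python
GEO_ALT_KM = 35786.0    # geostationary altitude above the equator
GSD_KM = 4.0            # ground sample distance cited in the abstract

# Small-angle approximation: angle (rad) = ground distance / range
pixel_angle = GSD_KM / GEO_ALT_KM
print(f"one 4 km pixel subtends {pixel_angle * 1e6:.0f} microradians")

# Illustrative budget (an assumption, not from the paper): holding the line
# of sight to a tenth of a pixel over the ~10 s interferogram scan implies
print(f"0.1-pixel stability: {pixel_angle * 1e6 / 10:.1f} urad over 10 s")
```

Sub-pixel stability over a 10 s scan therefore means holding the line of sight to roughly tens of microradians, which is why FOV stability drives the system design.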

  20. A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Owen, Jeffrey E.

    1988-01-01

    A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.
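    The "inherently parallel" mapping of a continuous simulation can be mimicked in software: each state derivative is evaluated independently per step, as each processing element would evaluate its own integrator. A toy sketch (Python, not ACSL syntax):

```python
import numpy as np

def step_parallel(state, derivs, dt):
    """One explicit-Euler step in which every state derivative is evaluated
    'simultaneously', mimicking how an analog computer (or the paper's
    parallel processing elements) assigns each integrator its own unit."""
    rates = np.array([f(state) for f in derivs])   # conceptually concurrent
    return state + dt * rates

# Toy continuous simulation: harmonic oscillator x' = v, v' = -x
derivs = [lambda s: s[1], lambda s: -s[0]]
state = np.array([1.0, 0.0])
for _ in range(1000):                  # integrate to t = 1.0 with dt = 0.001
    state = step_parallel(state, derivs, 0.001)
print(state)   # close to [cos(1), -sin(1)]
```

On the proposed hardware each element of `rates` would come from a separate processing element within the same time step, rather than from a Python loop.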

  1. Scaling Optimization of the SIESTA MHD Code

    NASA Astrophysics Data System (ADS)

    Seal, Sudip; Hirshman, Steven; Perumalla, Kalyan

    2013-10-01

    SIESTA is a parallel three-dimensional plasma equilibrium code capable of resolving magnetic islands at high spatial resolutions for toroidal plasmas. Originally designed to exploit small-scale parallelism, SIESTA has now been scaled to execute efficiently across several thousand processors P. This scaling improvement was accomplished with minimal intrusion to the execution flow of the original version. First, the efficiency of the iterative solutions was improved by integrating the parallel tridiagonal block solver code BCYCLIC. Krylov-space generation in GMRES was then accelerated using a customized parallel matrix-vector multiplication algorithm. Novel parallel Hessian generation algorithms were integrated and memory access latencies were dramatically reduced through loop nest optimizations and data layout rearrangement. These optimizations sped up equilibria calculations by factors of 30-50. It is possible to compute solutions with granularity N/P near unity on extremely fine radial meshes (N > 1024 points). Grid separation in SIESTA, which manifests itself primarily in the resonant components of the pressure far from rational surfaces, is strongly suppressed by finer meshes. Large problem sizes of up to 300 K simultaneous non-linear coupled equations have been solved on the NERSC supercomputers. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.
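    BCYCLIC parallelizes block tridiagonal systems by cyclic reduction; for reference, the serial scalar recurrence it replaces is the Thomas algorithm, shown here as a baseline sketch (not SIESTA code):

```python
import numpy as np

def thomas(a, b, c, d):
    """Serial Thomas algorithm for a tridiagonal system (sub-diagonal a,
    diagonal b, super-diagonal c, right-hand side d). BCYCLIC solves the
    block analogue of this recurrence in parallel via cyclic reduction."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 4x4 test system: diagonal 2, off-diagonals -1 (a[0] and c[-1] unused)
a = np.array([0.0, -1.0, -1.0, -1.0])
b = np.array([2.0, 2.0, 2.0, 2.0])
c = np.array([-1.0, -1.0, -1.0, 0.0])
x = thomas(a, b, c, np.array([1.0, 0.0, 0.0, 1.0]))
print(x)   # solution is all ones
```

The forward/backward sweeps are inherently sequential, which is exactly why a cyclic-reduction reformulation is needed to use many processors.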

  2. A SPECT Scanner for Rodent Imaging Based on Small-Area Gamma Cameras

    NASA Astrophysics Data System (ADS)

    Lage, Eduardo; Villena, José L.; Tapias, Gustavo; Martinez, Naira P.; Soto-Montenegro, Maria L.; Abella, Mónica; Sisniega, Alejandro; Pino, Francisco; Ros, Domènec; Pavia, Javier; Desco, Manuel; Vaquero, Juan J.

    2010-10-01

    We developed a cost-effective SPECT scanner prototype (rSPECT) for in vivo imaging of rodents based on small-area gamma cameras. Each detector consists of a position-sensitive photomultiplier tube (PS-PMT) coupled to a 30 x 30 NaI(Tl) scintillator array and electronics attached to the PS-PMT sockets for adapting the detector signals to an in-house developed data acquisition system. The detector components are enclosed in a lead-shielded case with a receptacle to insert the collimators. System performance was assessed using 99mTc for a high-resolution parallel-hole collimator, and for a 0.75-mm pinhole collimator with a 60° aperture angle and a 42-mm collimator length. The energy resolution is about 10.7% of the photopeak energy. The overall system sensitivity is about 3 cps/μCi/detector and planar spatial resolution ranges from 2.4 mm at 1 cm source-to-collimator distance to 4.1 mm at 4.5 cm with parallel-hole collimators. With pinhole collimators planar spatial resolution ranges from 1.2 mm at 1 cm source-to-collimator distance to 2.4 mm at 4.5 cm; sensitivity at these distances ranges from 2.8 to 0.5 cps/μCi/detector. Tomographic hot-rod phantom images are presented together with images of bone, myocardium and brain of living rodents to demonstrate the feasibility of preclinical small-animal studies with the rSPECT.
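    The quoted pinhole numbers are consistent with the textbook geometric-resolution formula, which ignores the detector's intrinsic blur (so the values below fall under the measured 1.2 mm and 2.4 mm):

```python
def pinhole_resolution_mm(d_mm, src_to_pinhole_mm, pinhole_to_det_mm):
    """Geometric resolution of an ideal pinhole aperture at the object
    plane: R_g = d * (a + b) / b, with a = source-to-pinhole distance and
    b = pinhole-to-detector distance. Textbook formula; the measured system
    resolution also includes the detector's intrinsic resolution."""
    a, b = src_to_pinhole_mm, pinhole_to_det_mm
    return d_mm * (a + b) / b

# The abstract's 0.75 mm pinhole with a 42 mm collimator length, evaluated
# at its two quoted source distances (1 cm and 4.5 cm)
for a in (10.0, 45.0):
    print(f"a = {a:4.1f} mm -> R_g = {pinhole_resolution_mm(0.75, a, 42.0):.2f} mm")
```

Adding the intrinsic resolution in quadrature (scaled by the inverse magnification) brings these geometric values up toward the measured 1.2 mm and 2.4 mm.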

  3. Modular time division multiplexer: Efficient simultaneous characterization of fast and slow transients in multiple samples

    NASA Astrophysics Data System (ADS)

    Kim, Stephan D.; Luo, Jiajun; Buchholz, D. Bruce; Chang, R. P. H.; Grayson, M.

    2016-09-01

    A modular time division multiplexer (MTDM) device is introduced to enable parallel measurement of multiple samples with both fast and slow decay transients spanning from millisecond to month-long time scales. This is achieved by dedicating a single high-speed measurement instrument for rapid data collection at the start of a transient, and by multiplexing a second low-speed measurement instrument for slow data collection of several samples in parallel for the later transients. The MTDM is a high-level design concept that can in principle measure an arbitrary number of samples, and the low cost implementation here allows up to 16 samples to be measured in parallel over several months, reducing the total ensemble measurement duration and equipment usage by as much as an order of magnitude without sacrificing fidelity. The MTDM was successfully demonstrated by simultaneously measuring the photoconductivity of three amorphous indium-gallium-zinc-oxide thin films with 20 ms data resolution for fast transients and an uninterrupted parallel run time of over 20 days. The MTDM has potential applications in many areas of research that manifest response times spanning many orders of magnitude, such as photovoltaics, rechargeable batteries, amorphous semiconductors such as silicon and amorphous indium-gallium-zinc-oxide.
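    The scheduling idea, dense early sampling from the fast instrument followed by round-robin sharing of the slow one, can be sketched as follows (hypothetical timing parameters, not the paper's configuration):

```python
def mtdm_schedule(n_samples, fast_window_s, fast_dt_s, slow_dt_s, total_s):
    """Sketch of the MTDM idea: every sample gets dense early coverage from
    one fast instrument, then shares a single slow instrument round-robin.
    Returns one list of measurement times (s) per sample."""
    fast = [i * fast_dt_s for i in range(int(fast_window_s / fast_dt_s))]
    schedules = []
    for k in range(n_samples):
        t = fast_window_s + k * slow_dt_s   # sample k's first slow slot
        slow = []
        while t < total_s:
            slow.append(t)
            t += n_samples * slow_dt_s      # one slot per sample, in turn
        schedules.append(fast + slow)
    return schedules

# 3 samples: 20 ms fast sampling for the first second, then one shared
# slow instrument with a 10 s slot, over a 2-minute run
sched = mtdm_schedule(3, fast_window_s=1.0, fast_dt_s=0.02,
                      slow_dt_s=10.0, total_s=120.0)
print(len(sched[0]), sched[0][48:53])
```

In practice each fast window would be anchored to that sample's transient start; here all samples share one window purely for brevity.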

  4. Modular time division multiplexer: Efficient simultaneous characterization of fast and slow transients in multiple samples.

    PubMed

    Kim, Stephan D; Luo, Jiajun; Buchholz, D Bruce; Chang, R P H; Grayson, M

    2016-09-01

    A modular time division multiplexer (MTDM) device is introduced to enable parallel measurement of multiple samples with both fast and slow decay transients spanning from millisecond to month-long time scales. This is achieved by dedicating a single high-speed measurement instrument for rapid data collection at the start of a transient, and by multiplexing a second low-speed measurement instrument for slow data collection of several samples in parallel for the later transients. The MTDM is a high-level design concept that can in principle measure an arbitrary number of samples, and the low cost implementation here allows up to 16 samples to be measured in parallel over several months, reducing the total ensemble measurement duration and equipment usage by as much as an order of magnitude without sacrificing fidelity. The MTDM was successfully demonstrated by simultaneously measuring the photoconductivity of three amorphous indium-gallium-zinc-oxide thin films with 20 ms data resolution for fast transients and an uninterrupted parallel run time of over 20 days. The MTDM has potential applications in many areas of research that manifest response times spanning many orders of magnitude, such as photovoltaics, rechargeable batteries, amorphous semiconductors such as silicon and amorphous indium-gallium-zinc-oxide.

  5. MUSIC - Multifunctional stereo imaging camera system for wide angle and high resolution stereo and color observations on the Mars-94 mission

    NASA Astrophysics Data System (ADS)

    Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.

    1990-10-01

Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, technology aspects, and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal planes with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.

  6. Multiplexed high resolution soft x-ray RIXS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chuang, Y.-D.; Voronov, D.; Warwick, T.

    2016-07-27

High-resolution Resonant Inelastic X-ray Scattering (RIXS) is a technique that allows us to probe the electronic excitations of complex materials with unprecedented precision. However, the RIXS process has a low cross section, compounded by the fact that the optical spectrometers used to analyze the scattered photons can only collect a small solid angle and overall have a small efficiency. Here we present a method to significantly increase the throughput of RIXS systems by energy multiplexing, so that a complete RIXS map of scattered intensity versus photon energy in and photon energy out can be recorded simultaneously. This parallel acquisition scheme should provide a gain in throughput of over 100. A system based on this principle, QERLIN, is under construction at the Advanced Light Source (ALS).

  7. A Multi-Functional Microelectrode Array Featuring 59760 Electrodes, 2048 Electrophysiology Channels, Stimulation, Impedance Measurement and Neurotransmitter Detection Channels.

    PubMed

    Dragas, Jelena; Viswam, Vijay; Shadmani, Amir; Chen, Yihui; Bounik, Raziyeh; Stettler, Alexander; Radivojevic, Milos; Geissler, Sydney; Obien, Marie; Müller, Jan; Hierlemann, Andreas

    2017-06-01

Biological cells are characterized by highly complex phenomena and processes that are, to a great extent, interdependent. To gain detailed insights, devices designed to study cellular phenomena need to enable tracking and manipulation of multiple cell parameters in parallel; they have to provide high signal quality and high spatiotemporal resolution. To this end, we have developed a CMOS-based microelectrode array system that integrates six measurement and stimulation functions, the largest number to date. Moreover, the system features the largest active electrode array area to date (4.48 × 2.43 mm²) to accommodate 59,760 electrodes, while its power consumption, noise characteristics, and spatial resolution (13.5 μm electrode pitch) are comparable to the best state-of-the-art devices. The system includes: 2,048 action-potential (AP, bandwidth: 300 Hz to 10 kHz) recording units, 32 local-field-potential (LFP, bandwidth: 1 Hz to 300 Hz) recording units, 32 current recording units, 32 impedance measurement units, and 28 neurotransmitter detection units, in addition to the 16 dual-mode voltage-only or current/voltage-controlled stimulation units. The electrode array architecture is based on a switch matrix, which allows for connecting any measurement/stimulation unit to any electrode in the array and for performing different measurement/stimulation functions in parallel.
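The switch-matrix idea, connecting any measurement/stimulation unit to any electrode subject to the per-type channel counts quoted in the abstract, can be sketched as follows; this is a minimal bookkeeping illustration, not the CMOS implementation:

```python
# Sketch of the switch-matrix routing: any unit can be connected to any
# electrode, limited only by the per-type unit counts from the abstract.
CAPACITY = {"AP": 2048, "LFP": 32, "current": 32, "impedance": 32,
            "neurotransmitter": 28, "stimulation": 16}

class SwitchMatrix:
    def __init__(self, n_electrodes=59760):
        self.n_electrodes = n_electrodes
        self.used = {k: 0 for k in CAPACITY}   # units of each type in use
        self.routing = {}                      # electrode -> (unit_type, unit_index)

    def connect(self, electrode, unit_type):
        if not (0 <= electrode < self.n_electrodes):
            raise ValueError("no such electrode")
        if electrode in self.routing:
            raise ValueError("electrode already routed")
        if self.used[unit_type] >= CAPACITY[unit_type]:
            raise ValueError(f"all {unit_type} units in use")
        idx = self.used[unit_type]
        self.used[unit_type] += 1
        self.routing[electrode] = (unit_type, idx)
        return idx

sm = SwitchMatrix()
sm.connect(12345, "AP")         # record action potentials on one electrode
sm.connect(12346, "impedance")  # impedance measurement on a neighbour, in parallel
```

Routing different unit types to neighbouring electrodes at once is what the abstract means by performing different functions in parallel.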

  8. High-performance computing in image registration

    NASA Astrophysics Data System (ADS)

    Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro

    2012-10-01

Thanks to recent technological advances, a large variety of image data is at our disposal with variable geometric, radiometric and temporal resolution. In many applications the processing of such images needs high performance computing techniques in order to deliver timely responses, e.g. for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging task of processing large amounts of geo-data. The article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and monitoring of the environment. For the image alignment procedure, sets of corresponding feature points need to be automatically extracted in order to subsequently compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a high degree of automation and speed is mandatory. The details of the implemented operations (named LARES) exploiting parallel architectures and GPUs are presented. The innovative aspects of the implementation are (i) its effectiveness on a large variety of unorganized and complex datasets, (ii) its capability to work with high-resolution images and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and commented.
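The matching step described above is, at its core, a batched nearest-neighbour search over descriptors, which is exactly the kind of operation GPUs accelerate. A minimal vectorized sketch with a Lowe-style ratio test; the descriptor size and threshold are illustrative, and LARES itself is not reproduced here:

```python
import numpy as np

# Brute-force nearest-neighbour descriptor matching with a ratio test,
# vectorized the way a GPU kernel would batch it.
def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Return (i, j) index pairs of best matches passing the ratio test."""
    # Pairwise squared Euclidean distances, computed in one shot.
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(axis=2)
    matches = []
    for i, row in enumerate(d2):
        j1, j2 = np.argsort(row)[:2]          # best and second-best candidate
        if row[j1] < (ratio ** 2) * row[j2]:  # distinctiveness (ratio) test
            matches.append((i, j1))
    return matches

rng = np.random.default_rng(0)
desc_b = rng.normal(size=(50, 32))            # 50 reference descriptors
desc_a = desc_b[[3, 17, 41]] + 0.01 * rng.normal(size=(3, 32))  # noisy copies
pairs = match_descriptors(desc_a, desc_b)
```

The ratio test rejects ambiguous correspondences, which matters for the unorganized datasets the article targets.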

  9. High resolution human diffusion tensor imaging using 2-D navigated multi-shot SENSE EPI at 7 Tesla

    PubMed Central

    Jeong, Ha-Kyu; Gore, John C.; Anderson, Adam W.

    2012-01-01

    The combination of parallel imaging with partial Fourier acquisition has greatly improved the performance of diffusion-weighted single-shot EPI and is the preferred method for acquisitions at low to medium magnetic field strength such as 1.5 or 3 Tesla. Increased off-resonance effects and reduced transverse relaxation times at 7 Tesla, however, generate more significant artifacts than at lower magnetic field strength and limit data acquisition. Additional acceleration of k-space traversal using a multi-shot approach, which acquires a subset of k-space data after each excitation, reduces these artifacts relative to conventional single-shot acquisitions. However, corrections for motion-induced phase errors are not straightforward in accelerated, diffusion-weighted multi-shot EPI because of phase aliasing. In this study, we introduce a simple acquisition and corresponding reconstruction method for diffusion-weighted multi-shot EPI with parallel imaging suitable for use at high field. The reconstruction uses a simple modification of the standard SENSE algorithm to account for shot-to-shot phase errors; the method is called Image Reconstruction using Image-space Sampling functions (IRIS). Using this approach, reconstruction from highly aliased in vivo image data using 2-D navigator phase information is demonstrated for human diffusion-weighted imaging studies at 7 Tesla. The final reconstructed images show submillimeter in-plane resolution with no ghosts and much reduced blurring and off-resonance artifacts. PMID:22592941
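IRIS builds on standard SENSE unfolding. A minimal 1-D sketch of that baseline with synthetic coil sensitivities and acceleration R = 2; the shot-to-shot phase correction that distinguishes IRIS is not shown:

```python
import numpy as np

# Minimal 1-D SENSE unfolding sketch for acceleration R = 2 (the standard
# algorithm that IRIS extends; coil sensitivities here are synthetic).
N, R = 64, 2
x = np.zeros(N)
x[20:30] = 1.0                                    # "true" object
x[40] = 2.0
n = np.arange(N)
sens = np.stack([np.exp(-((n - 16) / 40.0) ** 2),  # two smooth coil
                 np.exp(-((n - 48) / 40.0) ** 2)])  # sensitivity profiles

# R-fold k-space undersampling folds pixel p onto pixel p + N/R in each coil:
folded = sens[:, :N // R] * x[:N // R] + sens[:, N // R:] * x[N // R:]

# Unfold: solve a 2x2 least-squares system per aliased pixel.
recon = np.zeros(N)
for p in range(N // R):
    E = sens[:, [p, p + N // R]]                   # 2 coils x 2 aliased pixels
    recon[[p, p + N // R]] = np.linalg.lstsq(E, folded[:, p], rcond=None)[0]
```

In the multi-shot case each shot contributes its own phase pattern, so the encoding matrix per pixel grows accordingly; that is the modification the paper introduces.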

  10. Fluid/Structure Interaction Studies of Aircraft Using High Fidelity Equations on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru; VanDalsem, William (Technical Monitor)

    1994-01-01

    Abstract Aeroelasticity which involves strong coupling of fluids, structures and controls is an important element in designing an aircraft. Computational aeroelasticity using low fidelity methods such as the linear aerodynamic flow equations coupled with the modal structural equations are well advanced. Though these low fidelity approaches are computationally less intensive, they are not adequate for the analysis of modern aircraft such as High Speed Civil Transport (HSCT) and Advanced Subsonic Transport (AST) which can experience complex flow/structure interactions. HSCT can experience vortex induced aeroelastic oscillations whereas AST can experience transonic buffet associated structural oscillations. Both aircraft may experience a dip in the flutter speed at the transonic regime. For accurate aeroelastic computations at these complex fluid/structure interaction situations, high fidelity equations such as the Navier-Stokes for fluids and the finite-elements for structures are needed. Computations using these high fidelity equations require large computational resources both in memory and speed. Current conventional super computers have reached their limitations both in memory and speed. As a result, parallel computers have evolved to overcome the limitations of conventional computers. This paper will address the transition that is taking place in computational aeroelasticity from conventional computers to parallel computers. The paper will address special techniques needed to take advantage of the architecture of new parallel computers. Results will be illustrated from computations made on iPSC/860 and IBM SP2 computer by using ENSAERO code that directly couples the Euler/Navier-Stokes flow equations with high resolution finite-element structural equations.

  11. Development of Parallel Architectures for Sensor Array Processing. Volume 1

    DTIC Science & Technology

    1993-08-01

required for the DOA estimation [1-7]. The Multiple Signal Classification (MUSIC) [1] and the Estimation of Signal Parameters by Rotational...manifold and the estimated subspace. Although MUSIC is a high resolution algorithm, it has several drawbacks, including the fact that complete knowledge of...thoroughly, the MUSIC algorithm was selected to develop special purpose hardware for real time computation. A summary of the MUSIC algorithm is as follows
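The MUSIC algorithm named in the report can be sketched in a few lines: form the sample covariance, split off the noise subspace, and scan a steering-vector grid. Array geometry, source angles and SNR below are illustrative:

```python
import numpy as np

# Minimal MUSIC direction-of-arrival sketch: uniform linear array with
# half-wavelength spacing, two uncorrelated narrowband sources.
rng = np.random.default_rng(1)
M, d, snapshots = 8, 2, 400
true_deg = np.array([-20.0, 30.0])

def steering(deg):
    rad = np.deg2rad(deg)
    return np.exp(-1j * np.pi * np.outer(np.arange(M), np.sin(rad)))

A = steering(true_deg)                                   # M x d steering matrix
S = rng.normal(size=(d, snapshots)) + 1j * rng.normal(size=(d, snapshots))
noise = rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots))
X = A @ S + 0.1 * noise
Rxx = X @ X.conj().T / snapshots                         # sample covariance

# Noise subspace: eigenvectors of the M - d smallest eigenvalues.
eigvals, eigvecs = np.linalg.eigh(Rxx)
En = eigvecs[:, : M - d]

grid = np.arange(-90.0, 90.0, 0.5)
pseudo = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2

# The two largest local maxima of the pseudospectrum sit at the sources.
idx = [i for i in range(1, len(grid) - 1)
       if pseudo[i] > pseudo[i - 1] and pseudo[i] > pseudo[i + 1]]
idx.sort(key=lambda i: -pseudo[i])
top2 = sorted(grid[idx[:2]])
```

The eigendecomposition and the grid scan are the compute-heavy stages that motivate the report's special purpose hardware.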

  12. IceT users' guide and reference.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.

    2011-01-01

    The Image Composition Engine for Tiles (IceT) is a high-performance sort-last parallel rendering library. In addition to providing accelerated rendering for a standard display, IceT provides the unique ability to generate images for tiled displays. The overall resolution of the display may be several times larger than any viewport that may be rendered by a single machine. This document is an overview of the user interface to IceT.
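The core of sort-last compositing is a per-pixel depth comparison across the partial images rendered by different nodes. A minimal sketch of that operation (not IceT's actual API):

```python
import numpy as np

# Sort-last compositing sketch: each node renders its share of the geometry
# over the full viewport; images are merged by per-pixel depth comparison,
# the core operation a sort-last library accelerates.
def depth_composite(colors, depths):
    """colors, depths: lists of HxW arrays -> composited (color, depth)."""
    colors, depths = np.stack(colors), np.stack(depths)
    nearest = depths.argmin(axis=0)            # winning node per pixel
    h, w = nearest.shape
    ii, jj = np.meshgrid(range(h), range(w), indexing="ij")
    return colors[nearest, ii, jj], depths[nearest, ii, jj]

# Two nodes, 2x2 viewport: node 0 is nearer on the left, node 1 on the right.
c0 = np.full((2, 2), 10.0)
d0 = np.array([[0.2, 0.9], [0.2, 0.9]])
c1 = np.full((2, 2), 20.0)
d1 = np.array([[0.8, 0.3], [0.8, 0.3]])
color, depth = depth_composite([c0, c1], [d0, d1])
```

For tiled displays the same comparison runs per tile, with each tile's composited result routed to the machine driving that tile.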

  13. Multiple double cross-section transmission electron microscope sample preparation of specific sub-10 nm diameter Si nanowire devices.

    PubMed

    Gignac, Lynne M; Mittal, Surbhi; Bangsaruntip, Sarunya; Cohen, Guy M; Sleight, Jeffrey W

    2011-12-01

    The ability to prepare multiple cross-section transmission electron microscope (XTEM) samples from one XTEM sample of specific sub-10 nm features was demonstrated. Sub-10 nm diameter Si nanowire (NW) devices were initially cross-sectioned using a dual-beam focused ion beam system in a direction running parallel to the device channel. From this XTEM sample, both low- and high-resolution transmission electron microscope (TEM) images were obtained from six separate, specific site Si NW devices. The XTEM sample was then re-sectioned in four separate locations in a direction perpendicular to the device channel: 90° from the original XTEM sample direction. Three of the four XTEM samples were successfully sectioned in the gate region of the device. From these three samples, low- and high-resolution TEM images of the Si NW were taken and measurements of the NW diameters were obtained. This technique demonstrated the ability to obtain high-resolution TEM images in directions 90° from one another of multiple, specific sub-10 nm features that were spaced 1.1 μm apart.

  14. Constraints on Circumstellar Dust Grain Sizes from High Spatial Resolution Observations in the Thermal Infrared

    NASA Technical Reports Server (NTRS)

    Bloemhof, E. E.; Danen, R. M.; Gwinn, C. R.

    1996-01-01

    We describe how high spatial resolution imaging of circumstellar dust at a wavelength of about 10 micron, combined with knowledge of the source spectral energy distribution, can yield useful information about the sizes of the individual dust grains responsible for the infrared emission. Much can be learned even when only upper limits to source size are available. In parallel with high-resolution single-telescope imaging that may resolve the more extended mid-infrared sources, we plan to apply these less direct techniques to interpretation of future observations from two-element optical interferometers, where quite general arguments may be made despite only crude imaging capability. Results to date indicate a tendency for circumstellar grain sizes to be rather large compared to the Mathis-Rumpl-Nordsieck size distribution traditionally thought to characterize dust in the general interstellar medium. This may mean that processing of grains after their initial formation and ejection from circumstellar atmospheres adjusts their size distribution to the ISM curve; further mid-infrared observations of grains in various environments would help to confirm this conjecture.

  15. New functionalities of potassium tantalate niobate deflectors enabled by the coexistence of pre-injected space charge and composition gradient

    NASA Astrophysics Data System (ADS)

    Zhu, Wenbin; Chao, Ju-Hung; Chen, Chang-Jiang; Campbell, Adrian L.; Henry, Michael G.; Yin, Stuart Shizhuo; Hoffman, Robert C.

    2017-10-01

    In most beam steering applications such as 3D printing and in vivo imaging, one of the essential challenges has been high-resolution high-speed multi-dimensional optical beam scanning. Although the pre-injected space charge controlled potassium tantalate niobate (KTN) deflectors can achieve speeds in the nanosecond regime, they deflect in only one dimension. In order to develop a high-resolution high-speed multi-dimensional KTN deflector, we studied the deflection behavior of KTN deflectors in the case of coexisting pre-injected space charge and composition gradient. We find that such coexistence can enable new functionalities of KTN crystal based electro-optic deflectors. When the direction of the composition gradient is parallel to the direction of the external electric field, the zero-deflection position can be shifted, which can reduce the internal electric field induced beam distortion, and thus enhance the resolution. When the direction of the composition gradient is perpendicular to the direction of the external electric field, two-dimensional beam scanning can be achieved by harnessing only one single piece of KTN crystal, which can result in a compact, high-speed two-dimensional deflector. Both theoretical analyses and experiments are conducted, which are consistent with each other. These new functionalities can expedite the usage of KTN deflection in many applications such as high-speed 3D printing, high-speed, high-resolution imaging, and free space broadband optical communication.

  16. Massively parallel sensing of trace molecules and their isotopologues with broadband subharmonic mid-infrared frequency combs

    NASA Astrophysics Data System (ADS)

    Muraviev, A. V.; Smolski, V. O.; Loparo, Z. E.; Vodopyanov, K. L.

    2018-04-01

    Mid-infrared spectroscopy offers supreme sensitivity for the detection of trace gases, solids and liquids based on tell-tale vibrational bands specific to this spectral region. Here, we present a new platform for mid-infrared dual-comb Fourier-transform spectroscopy based on a pair of ultra-broadband subharmonic optical parametric oscillators pumped by two phase-locked thulium-fibre combs. Our system provides fast (7 ms for a single interferogram), moving-parts-free, simultaneous acquisition of 350,000 spectral data points, spaced by a 115 MHz intermodal interval over the 3.1-5.5 µm spectral range. Parallel detection of 22 trace molecular species in a gas mixture, including isotopologues containing isotopes such as 13C, 18O, 17O, 15N, 34S, 33S and deuterium, with part-per-billion sensitivity and sub-Doppler resolution is demonstrated. The technique also features absolute optical frequency referencing to an atomic clock, a high degree of mutual coherence between the two mid-infrared combs with a relative comb-tooth linewidth of 25 mHz, coherent averaging and feasibility for kilohertz-scale spectral resolution.
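The quoted figures are mutually consistent: dividing the optical frequency span of the 3.1-5.5 µm band by the 115 MHz comb spacing gives roughly the stated 350,000 spectral elements.

```python
# Consistency check of the abstract's numbers: comb teeth across 3.1-5.5 µm
# at a 115 MHz intermodal spacing.
C = 299_792_458.0                  # speed of light, m/s
f_hi = C / 3.1e-6                  # ~96.7 THz
f_lo = C / 5.5e-6                  # ~54.5 THz
n_teeth = (f_hi - f_lo) / 115e6    # on the order of 3.5e5 spectral points
```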

  17. WaveJava: Wavelet-based network computing

    NASA Astrophysics Data System (ADS)

    Ma, Kun; Jiao, Licheng; Shi, Zhuoer

    1997-04-01

Wavelet theory is powerful, but its successful application still needs suitable programming tools. Java is a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multi-threaded, dynamic language. This paper addresses the design and development of a cross-platform software environment for experimenting with and applying wavelet theory. WaveJava, a wavelet class library designed with object-oriented programming, is developed to take advantage of wavelet features such as multi-resolution analysis and parallel processing in network computing. A new application architecture is designed for the net-wide distributed client-server environment. The data are transmitted as multi-resolution packets. At distributed sites around the net, these data packets undergo matching or recognition processing in parallel, and the results are fed back to determine the next operation, so more robust results can be obtained quickly. WaveJava is easy to use and to extend for special applications. This paper gives a solution for a distributed fingerprint information processing system. It also fits other net-based multimedia information processing, such as network libraries, remote teaching and filmless picture archiving and communications.
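The multi-resolution packet idea rests on wavelet analysis such as the Haar transform: coarse averages can be transmitted first, with detail coefficients streamed afterwards. A minimal sketch (in Python rather than Java, and not WaveJava's actual class library):

```python
import math

# Minimal Haar multiresolution sketch: repeatedly split a signal into coarse
# averages and detail coefficients, then invert losslessly.
def haar_decompose(signal, levels):
    coarse, details = list(signal), []
    for _ in range(levels):
        a = [(coarse[2 * i] + coarse[2 * i + 1]) / math.sqrt(2)
             for i in range(len(coarse) // 2)]
        d = [(coarse[2 * i] - coarse[2 * i + 1]) / math.sqrt(2)
             for i in range(len(coarse) // 2)]
        details.append(d)
        coarse = a
    return coarse, details

def haar_reconstruct(coarse, details):
    for d in reversed(details):
        nxt = []
        for a_i, d_i in zip(coarse, d):
            nxt += [(a_i + d_i) / math.sqrt(2), (a_i - d_i) / math.sqrt(2)]
        coarse = nxt
    return coarse

sig = [4.0, 2.0, 5.0, 7.0, 1.0, 1.0, 3.0, 9.0]
coarse, details = haar_decompose(sig, levels=3)
rec = haar_reconstruct(coarse, details)    # perfect reconstruction
```

A client can render a usable preview from `coarse` alone and refine it as each level of `details` arrives, which is the progressive-transmission behaviour the paper describes.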

  18. High-Frequency Subband Compressed Sensing MRI Using Quadruplet Sampling

    PubMed Central

    Sung, Kyunghyun; Hargreaves, Brian A

    2013-01-01

Purpose To present and validate a new method that formalizes a direct link between k-space and wavelet domains to apply separate undersampling and reconstruction for high- and low-spatial-frequency k-space data. Theory and Methods High- and low-spatial-frequency regions are defined in k-space based on the separation of wavelet subbands, and the conventional compressed sensing (CS) problem is transformed into one of localized k-space estimation. To better exploit wavelet-domain sparsity, CS can be used for high-spatial-frequency regions while parallel imaging can be used for low-spatial-frequency regions. Fourier undersampling is also customized to better accommodate each reconstruction method: random undersampling for CS and regular undersampling for parallel imaging. Results Examples using the proposed method demonstrate successful reconstruction of both low-spatial-frequency content and fine structures in high-resolution 3D breast imaging with a net acceleration of 11 to 12. Conclusion The proposed method improves the reconstruction accuracy of high-spatial-frequency signal content and avoids incoherent artifacts in low-spatial-frequency regions. This new formulation also reduces the reconstruction time due to the smaller problem size. PMID:23280540
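The customized undersampling can be sketched as two mask regions: regular skipping in a central low-frequency band (for parallel imaging) and random selection outside it (for CS). Band width and sampling rates below are illustrative, not the paper's protocol:

```python
import numpy as np

# Sketch of the two-region Fourier undersampling along one phase-encode axis:
# regular R=2 skipping in the low-frequency band, random sampling elsewhere.
rng = np.random.default_rng(0)
n_ky = 256
ky = np.arange(n_ky)
low = (ky >= 96) & (ky < 160)              # central low-spatial-frequency band

mask = np.zeros(n_ky, dtype=bool)
mask[low] = (ky[low] % 2 == 0)             # regular R=2 for parallel imaging
high_idx = ky[~low]
picked = rng.choice(high_idx, size=len(high_idx) // 8, replace=False)
mask[picked] = True                        # random ~12.5% for CS
```

Regular skipping keeps the aliasing in the low-frequency band coherent (and thus invertible by SENSE-type methods), while random sampling makes the high-frequency aliasing incoherent, the regime CS needs.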

  19. Grid of Supergiant B[e] Models from HDUST Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Domiciano de Souza, A.; Carciofi, A. C.

    2012-12-01

By using the Monte Carlo radiative transfer code HDUST (developed by A. C. Carciofi and J. E. Bjorkman) we have built a grid of models for stars presenting the B[e] phenomenon and a bimodal outflowing envelope. The models are particularly adapted to the study of B[e] supergiants and FS CMa type stars. The adopted physical parameters make the grid well suited to interpreting high angular resolution and high spectral resolution observations, in particular spectro-interferometric data from the ESO-VLTI instruments AMBER (near-IR at low and medium spectral resolution) and MIDI (mid-IR at low spectral resolution). The grid models include, for example, a central B star with different effective temperatures and a circumstellar envelope of gas (hydrogen) and silicate dust with a bimodal mass loss, the dust residing in the denser equatorial regions. The HDUST grid models were pre-calculated using the high performance parallel computing facility Mésocentre SIGAMM, located at OCA, France.

  20. Increasing horizontal resolution in numerical weather prediction and climate simulations: illusion or panacea?

    PubMed

    Wedi, Nils P

    2014-06-28

The steady path of doubling the global horizontal resolution approximately every 8 years in numerical weather prediction (NWP) at the European Centre for Medium-Range Weather Forecasts may be substantially altered with emerging novel computing architectures. It coincides with the need to appropriately address and determine forecast uncertainty with increasing resolution, in particular when convective-scale motions start to be resolved. Blunt increases in the model resolution will quickly become unaffordable and may not lead to improved NWP forecasts. Consequently, there is a need to adjust proven numerical techniques accordingly. An informed decision on the modelling strategy for harnessing exascale, massively parallel computing power thus also requires a deeper understanding of the sensitivity to uncertainty, for each part of the model, and ultimately a deeper understanding of multi-scale interactions in the atmosphere and their numerical realization in ultra-high-resolution NWP and climate simulations. This paper explores opportunities for substantial increases in forecast efficiency by judicious adjustment of the formal accuracy or relative resolution in spectral and physical space. One path is to reduce the formal accuracy by which the spectral transforms are computed. The other explores the importance of the ratio used for the horizontal resolution in gridpoint space versus wavenumbers in spectral space. This is relevant both for high-resolution simulations and for ensemble-based uncertainty estimation. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
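The gridpoint-versus-wavenumber ratio mentioned above follows textbook rules of thumb for spectral models: for triangular truncation T_N, a "linear" grid uses about 2(N+1) longitudes and a "cubic" grid about 4(N+1). A tiny sketch of that relation (operational reduced grids differ in detail):

```python
# Rule-of-thumb relation between spectral truncation and gridpoint count
# along a latitude circle: "linear" grid ~2(N+1), "quadratic" ~3(N+1),
# "cubic" ~4(N+1) longitudes for triangular truncation T_N.
def n_longitudes(truncation, ratio):
    return ratio * (truncation + 1)

# E.g. at T1279, a linear grid implies ~2560 longitudes and a cubic grid
# ~5120, i.e. the same spectral resolution can be paired with very
# different gridpoint resolutions.
lin = n_longitudes(1279, 2)
cub = n_longitudes(1279, 4)
```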

  1. Beyond the resolution limit: subpixel resolution in animals and now in silicon

    NASA Astrophysics Data System (ADS)

    Wilcox, M. J.

    2007-09-01

Automatic acquisition of aerial threats at thousands of kilometers distance requires high sensitivity to small differences in contrast and high optical quality for subpixel resolution, since targets occupy much less surface area than a single pixel. Targets travel at high speed and break up in the re-entry phase. Target/decoy discrimination at the earliest possible time is imperative. Real-time performance requires a multifaceted approach, with hyperspectral imaging and analog processing allowing feature extraction in real time. Hyperacuity Systems has developed a prototype chip capable of a nonlinear increase in resolution, i.e. subpixel resolution far beyond either pixel size or spacing. The performance increase is due to a biomimetic implementation of animal retinas. Photosensitivity is not homogeneous across the sensor surface, allowing pixel parsing. It is remarkably simple to provide this profile to detectors, and we showed at least three ways to do so. Individual photoreceptors have a Gaussian sensitivity profile, and this nonlinear profile can be exploited to extract high-resolution position information. Adaptive analog circuitry provides contrast enhancement and dynamic range setting with offset and gain control. Pixels are processed in parallel within modular elements called cartridges, like photoreceptor inputs in fly eyes. These modular elements are connected by a novel function for a cell matrix known as L4. The system is exquisitely sensitive to small target motion and operates with a robust signal under degraded viewing conditions, allowing detection of targets smaller than a single pixel or at greater distance. Therefore, not only instantaneous feature extraction but also subpixel resolution is possible. Analog circuitry increases processing speed, with more accurate motion specification for target tracking and identification.
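How a Gaussian sensitivity profile yields subpixel position can be seen in a two-detector toy model: the log of the response ratio is linear in source position, so the position is recovered analytically. The profile width below is an assumed value, and this is an illustration of the principle, not the chip's circuitry:

```python
import math

# Two neighbouring detectors with Gaussian sensitivity profiles centred at
# 0 and 1 (in pixel-pitch units). For a point source at position x, the log
# response ratio is linear in x, so x is recovered in closed form.
SIGMA = 0.5                                 # assumed sensitivity width

def responses(x):
    r0 = math.exp(-x ** 2 / (2 * SIGMA ** 2))
    r1 = math.exp(-(x - 1.0) ** 2 / (2 * SIGMA ** 2))
    return r0, r1

def locate(r0, r1):
    # ln(r1/r0) = (2x - 1) / (2 sigma^2)  =>  solve for x
    return 0.5 + SIGMA ** 2 * math.log(r1 / r0)

r0, r1 = responses(0.3)
x_hat = locate(r0, r1)                      # recovers the source at x = 0.3
```

The estimate is continuous in x, which is why resolution is no longer limited by pixel size or spacing.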

  2. Hybrid parallelization of the XTOR-2F code for the simulation of two-fluid MHD instabilities in tokamaks

    NASA Astrophysics Data System (ADS)

    Marx, Alain; Lütjens, Hinrich

    2017-03-01

A hybrid MPI/OpenMP parallel version of the XTOR-2F code [Lütjens and Luciani, J. Comput. Phys. 229 (2010) 8130], solving the two-fluid MHD equations in full tokamak geometry by means of an iterative Newton-Krylov matrix-free method, has been developed. The present work shows that the code has been parallelized significantly despite the numerical profile of the problem solved by XTOR-2F, i.e. a discretization with pseudo-spectral representations in all angular directions, the stiffness of the two-fluid stability problem in tokamaks, and the use of a direct LU decomposition to invert the physical pre-conditioner at every Krylov iteration of the solver. The execution time of the parallelized version is an order of magnitude smaller than that of the sequential one for low resolution cases, with an increasing speedup as the discretization mesh is refined. Moreover, it makes it possible to perform simulations at higher resolutions that were previously out of reach because of memory limitations.

  3. Full range line-field parallel swept source imaging utilizing digital refocusing

    NASA Astrophysics Data System (ADS)

    Fechtig, Daniel J.; Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A.

    2015-12-01

    We present geometric optics-based refocusing applied to a novel off-axis line-field parallel swept source imaging (LPSI) system. LPSI is an imaging modality based on line-field swept source optical coherence tomography, which permits 3-D imaging at acquisition speeds of up to 1 MHz. The digital refocusing algorithm applies a defocus-correcting phase term to the Fourier representation of complex-valued interferometric image data, which is based on the geometrical optics information of the LPSI system. We introduce the off-axis LPSI system configuration, the digital refocusing algorithm and demonstrate the effectiveness of our method for refocusing volumetric images of technical and biological samples. An increase of effective in-focus depth range from 255 μm to 4.7 mm is achieved. The recovery of the full in-focus depth range might be especially valuable for future high-speed and high-resolution diagnostic applications of LPSI in ophthalmology.
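The refocusing principle, a defocus-correcting phase applied to the Fourier representation of the complex field, can be sketched as follows. The quadratic phase coefficient here is arbitrary, whereas LPSI derives it from the system's geometrical optics:

```python
import numpy as np

# Digital refocusing sketch: defocus acts (to a good approximation) as a
# quadratic phase on the field's Fourier transform, so multiplying by the
# conjugate phase restores focus.
n = 256
x = np.linspace(-1, 1, n)
field = np.exp(-x ** 2 / 0.01)               # in-focus complex field
k = np.fft.fftfreq(n)
defocus = np.exp(1j * 500.0 * k ** 2)        # quadratic defocus phase

blurred = np.fft.ifft(np.fft.fft(field) * defocus)
refocused = np.fft.ifft(np.fft.fft(blurred) * np.conj(defocus))
```

This only works on complex-valued (interferometric) data, which is why full-range phase-stable acquisition is a prerequisite for the method.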

  4. Integrated electronics for time-resolved array of single-photon avalanche diodes

    NASA Astrophysics Data System (ADS)

    Acconcia, G.; Crotti, M.; Rech, I.; Ghioni, M.

    2013-12-01

The Time-Correlated Single Photon Counting (TCSPC) technique has reached a prominent position among the analytical methods employed in a great variety of fields, from medicine and biology (fluorescence spectroscopy) to telemetry (laser ranging) and communication (quantum cryptography). Nevertheless, the development of TCSPC acquisition systems featuring both a high number of parallel channels and very high performance is still an open challenge: to satisfy the tight requirements set by the applications, a fully parallel acquisition system requires not only high-efficiency single photon detectors but also read-out electronics specifically designed to obtain the highest performance in conjunction with these sensors. To this aim, three main blocks have been designed: a gigahertz-bandwidth front-end stage to directly read the avalanche current of the custom-technology SPAD array, a reconfigurable logic to route the detector output signals to the acquisition chain, and an array of time measurement circuits capable of recording photon arrival times with picosecond time resolution and very high linearity. An innovative architecture based on these three blocks combines a very high number of detectors, for truly parallel spatial or spectral analysis, with a smaller number of high-performance time-to-amplitude converters offering a very high conversion frequency while limiting area occupation and power dissipation. The routing logic makes the dynamic connection between the two arrays possible, in order to guarantee that no information gets lost.
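The TCSPC measurement itself is conceptually simple: histogram photon arrival times over many excitation cycles, then fit the decay. An idealized sketch, ignoring instrument response, dead time and timing jitter:

```python
import numpy as np

# Idealized TCSPC sketch: photon timestamps from an exponential decay are
# histogrammed; the lifetime is recovered from a log-linear fit.
rng = np.random.default_rng(42)
tau = 3.2                                        # "true" lifetime, ns
arrivals = rng.exponential(tau, size=200_000)    # one timestamp per photon

bins = np.arange(0.0, 20.0, 0.1)                 # 100 ps bin width
counts, edges = np.histogram(arrivals, bins=bins)
centers = 0.5 * (edges[:-1] + edges[1:])

keep = counts > 100                              # avoid log of sparse bins
slope = np.polyfit(centers[keep], np.log(counts[keep]), 1)[0]
tau_hat = -1.0 / slope                           # estimated lifetime
```

The hardware challenge the paper addresses is producing those timestamps at picosecond resolution for many detectors at once; the histogramming itself is the easy part.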

  5. ESiWACE: A Center of Excellence for HPC applications to support cloud resolving earth system modelling

    NASA Astrophysics Data System (ADS)

    Biercamp, Joachim; Adamidis, Panagiotis; Neumann, Philipp

    2017-04-01

With the exascale era approaching, the length and time scales used for climate research on one hand and numerical weather prediction on the other blend into each other. The Centre of Excellence in Simulation of Weather and Climate in Europe (ESiWACE) represents a European consortium comprising partners from climate, weather and HPC in their effort to address key scientific challenges that both communities have in common. A particular challenge is to reach global models with spatial resolutions that allow simulating convective clouds and small-scale ocean eddies. These simulations would produce better predictions of trends and provide much more fidelity in the representation of high-impact regional events. However, running such models in operational mode, i.e., with sufficient throughput in ensemble mode, will clearly require exascale computing and data handling capability. We will discuss the ESiWACE initiative and relate it to work in progress on high-resolution simulations in Europe. We present recent strong scalability measurements from ESiWACE to demonstrate current computability in weather and climate simulation. A special focus of this talk is on the Icosahedral Nonhydrostatic (ICON) model, used for a comparison of high resolution regional and global simulations with high quality observation data. We demonstrate that close-to-optimal parallel efficiency can be achieved in strong scaling global resolution experiments on Mistral/DKRZ, e.g. 94% for 5 km resolution simulations using 36k cores. Based on our scalability and high-resolution experiments, we deduce and extrapolate future capabilities of ICON that are expected for weather and climate research at exascale.
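Strong-scaling parallel efficiency, the figure of merit quoted above, relates runtime at increased core counts to a reference run. The timings in this sketch are made-up numbers, not ESiWACE measurements:

```python
# Strong-scaling efficiency sketch: with a reference run on p_ref cores
# taking t_ref seconds, efficiency at p cores is (t_ref * p_ref) / (t_p * p).
def strong_scaling_efficiency(p_ref, t_ref, p, t_p):
    return (t_ref * p_ref) / (t_p * p)

# Illustrative numbers: doubling cores from 18k to 36k while the runtime
# drops from 100 s to 53.2 s corresponds to ~94% parallel efficiency.
eff = strong_scaling_efficiency(18_000, 100.0, 36_000, 53.2)
```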

  6. Rotating single-shot acquisition (RoSA) with composite reconstruction for fast high-resolution diffusion imaging.

    PubMed

    Wen, Qiuting; Kodiweera, Chandana; Dale, Brian M; Shivraman, Giri; Wu, Yu-Chien

    2018-01-01

    To accelerate high-resolution diffusion imaging, rotating single-shot acquisition (RoSA) with composite reconstruction is proposed. Acceleration was achieved by acquiring only one rotating single-shot blade per diffusion direction, and high-resolution diffusion-weighted (DW) images were reconstructed by exploiting similarities between neighboring DW images. A parallel imaging technique was implemented in RoSA to further improve image quality and acquisition speed. RoSA performance was evaluated by simulation and human experiments. A brain tensor phantom was developed to determine an optimal blade size and rotation angle by considering similarity in DW images, off-resonance effects, and k-space coverage. With the optimal parameters, a RoSA MR pulse sequence and reconstruction algorithm were developed to acquire human brain data. For comparison, multishot echo planar imaging (EPI) and conventional single-shot EPI sequences were performed with matched scan time, resolution, field of view, and diffusion directions. The simulation indicated an optimal blade size of 48 × 256 and a 30° rotation angle. For 1 × 1 mm² in-plane resolution, RoSA was 12 times faster than the multishot acquisition with comparable image quality. With the same acquisition time as SS-EPI, RoSA provided superior image quality and minimal geometric distortion. RoSA offers fast, high-quality, high-resolution diffusion images. The composite image reconstruction is model-free and compatible with various diffusion computation approaches, including parametric and nonparametric analyses. Magn Reson Med 79:264-275, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  7. Development of a parallel FE simulator for modeling the whole trans-scale failure process of rock from meso- to engineering-scale

    NASA Astrophysics Data System (ADS)

    Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao

    2017-01-01

    Multi-scale, high-resolution modeling of the rock failure process is a powerful means in modern rock mechanics to reveal complex failure mechanisms and to evaluate engineering risks. However, continuous multi-scale modeling of rock, from deformation and damage to failure, places high demands on the design, implementation scheme and computational capacity of the numerical software system. This study aims to develop a parallel finite element procedure, a parallel rock failure process analysis (RFPA) simulator, capable of modeling the whole trans-scale failure process of rock. Based on a statistical meso-damage mechanical method, the RFPA simulator can construct heterogeneous rock models with multiple mechanical properties and represent the trans-scale propagation of cracks, with the stress and strain fields solved for the damage evolution analysis of each representative volume element by a parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Windows-Linux interactive platform. A numerical model is built to test the parallel performance of the FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, a field-scale net fracture spacing example and an engineering-scale rock slope example, respectively. The simulation results indicate that relatively high speedup and computational efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In the laboratory-scale simulation, well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, are reproduced. In the field-scale simulation, the formation of net fracture spacing, from initiation and propagation to saturation, is revealed completely. In the engineering-scale simulation, the whole progressive failure process of the rock slope is well modeled. It is shown that the parallel FE simulator developed in this study is an efficient tool for modeling the whole trans-scale failure process of rock from meso- to engineering-scale.

  8. High-Throughput Effect-Directed Analysis Using Downscaled in Vitro Reporter Gene Assays To Identify Endocrine Disruptors in Surface Water

    PubMed Central

    2018-01-01

    Effect-directed analysis (EDA) is a commonly used approach for effect-based identification of endocrine disruptive chemicals in complex (environmental) mixtures. However, for routine toxicity assessment of, for example, water samples, current EDA approaches are considered time-consuming and laborious. We achieved faster EDA and identification by downscaling of sensitive cell-based hormone reporter gene assays and increasing fractionation resolution to allow testing of smaller fractions with reduced complexity. The high-resolution EDA approach is demonstrated by analysis of four environmental passive sampler extracts. Downscaling of the assays to a 384-well format allowed analysis of 64 fractions in triplicate (or 192 fractions without technical replicates) without affecting sensitivity compared to the standard 96-well format. Through a parallel exposure method, agonistic and antagonistic androgen and estrogen receptor activity could be measured in a single experiment following a single fractionation. From 16 selected candidate compounds, identified through nontargeted analysis, 13 could be confirmed chemically and 10 were found to be biologically active, of which the most potent nonsteroidal estrogens were identified as oxybenzone and piperine. The increased fractionation resolution and the higher throughput that downscaling provides allow for future application in routine high-resolution screening of large numbers of samples in order to accelerate identification of (emerging) endocrine disruptors. PMID:29547277

  9. Development of ATHENA mirror modules

    NASA Astrophysics Data System (ADS)

    Collon, Maximilien J.; Vacanti, Giuseppe; Barrière, Nicolas M.; Landgraf, Boris; Günther, Ramses; Vervest, Mark; van der Hoeven, Roy; Dekker, Danielle; Chatbi, Abdel; Girou, David; Sforzini, Jessica; Beijersbergen, Marco W.; Bavdaz, Marcos; Wille, Eric; Fransen, Sebastiaan; Shortt, Brian; Haneveld, Jeroen; Koelewijn, Arenda; Booysen, Karin; Wijnperle, Maurice; van Baren, Coen; Eigenraam, Alexander; Müller, Peter; Krumrey, Michael; Burwitz, Vadim; Pareschi, Giovanni; Massahi, Sonny; Christensen, Finn E.; Della Monica Ferreira, Desirée.; Valsecchi, Giuseppe; Oliver, Paul; Checquer, Ian; Ball, Kevin; Zuknik, Karl-Heinz

    2017-08-01

    Silicon Pore Optics (SPO), developed at cosine with the European Space Agency (ESA) and several academic and industrial partners, provides lightweight, yet stiff, high-resolution x-ray optics. This technology enables ATHENA to reach an unprecedentedly large effective area in the 0.2 - 12 keV band with an angular resolution better than 5''. After developing the technology for 50 m and 20 m focal length, this year has witnessed the first 12 m focal length mirror modules being produced. The technology development is also gaining momentum with three different radii under study: mirror modules for the inner radii (Rmin = 250 mm), outer radii (Rmax = 1500 mm) and middle radii (Rmid = 737 mm) are being developed in parallel.

  10. GPU acceleration for digitally reconstructed radiographs using bindless texture objects and CUDA/OpenGL interoperability.

    PubMed

    Abdellah, Marwan; Eldeib, Ayman; Owis, Mohamed I

    2015-01-01

    This paper features an advanced implementation of the X-ray rendering algorithm that harnesses the computing power of current commodity graphics processors to accelerate the generation of high-resolution digitally reconstructed radiographs (DRRs). The presented pipeline exploits the latest features of NVIDIA Graphics Processing Unit (GPU) architectures, mainly bindless texture objects and dynamic parallelism. The rendering throughput is substantially improved by exploiting the interoperability mechanisms between CUDA and OpenGL. Benchmarks of our optimized rendering pipeline reflect its capability of generating DRRs at resolutions of 2048² and 4096² at interactive and semi-interactive frame rates using an NVIDIA GeForce GTX 970 device.
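    A DRR is formed by integrating attenuation along X-ray paths through a CT volume. The following CPU sketch shows the core ray summation under an orthographic (parallel-ray) geometry; it is an illustrative simplification, not the paper's CUDA/OpenGL pipeline:

```python
def drr_parallel(volume):
    """Sum voxel attenuation along z for each (y, x) ray: a parallel-beam DRR."""
    ny, nx = len(volume[0]), len(volume[0][0])
    image = [[0.0] * nx for _ in range(ny)]
    for z_slice in volume:           # integrate slice by slice along z
        for y in range(ny):
            for x in range(nx):
                image[y][x] += z_slice[y][x]
    return image

# tiny 2x2x2 attenuation volume, indexed [z][y][x]
vol = [[[1.0, 2.0], [3.0, 4.0]],
       [[10.0, 20.0], [30.0, 40.0]]]
print(drr_parallel(vol))  # → [[11.0, 22.0], [33.0, 44.0]]
```

    On a GPU this per-ray accumulation is what gets parallelized, typically one thread per output pixel, which is why texture-fetch features such as bindless texture objects matter for throughput.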

  11. Scanning tunneling microscope with two-dimensional translator.

    PubMed

    Nichols, J; Ng, K-W

    2011-01-01

    Since its invention, the scanning tunneling microscope (STM) has been a powerful tool for probing the electronic properties of materials. STM designs capable of atomic-scale resolution are typically limited to a small probeable area. We have built an STM capable of coarse motion in two dimensions, the z- and x-directions, which are, respectively, parallel and perpendicular to the tip. This allows us to image samples with very high resolution at sites separated by macroscopic distances. The device is a single unit with a compact design, making it very stable. It can operate in either a horizontal or vertical configuration and at cryogenic temperatures.

  12. High-speed technique based on a parallel projection correlation procedure for digital image correlation

    NASA Astrophysics Data System (ADS)

    Zaripov, D. I.; Renfu, Li

    2018-05-01

    The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements with a high-resolution digital camera involves big-data processing and is often time consuming. To speed up the ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The technique correlates interrogation-window projections instead of the full two-dimensional field of luminous intensity. This simplification accelerates the ZNCC computation by up to a factor of 28.8 compared with directly calculated ZNCC, depending on the size of the interrogation window and the region of interest. The accuracy of the results is discussed for three synthetic test cases: a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of these results, the proposed technique is recommended for initial velocity field calculation, with subsequent correction using more accurate techniques.
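    The speed-up comes from correlating one-dimensional projections (row and column sums) of each interrogation window instead of the full two-dimensional intensity field. A minimal sketch of the idea (the function names are illustrative; the paper's exact formulation may differ):

```python
def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def projections(window):
    """Replace a 2D window by its row-sum and column-sum profiles."""
    rows = [sum(r) for r in window]
    cols = [sum(c) for c in zip(*window)]
    return rows + cols

def zncc_projection(win_a, win_b):
    """Correlate projections: O(N) data per window instead of O(N^2)."""
    return zncc(projections(win_a), projections(win_b))

# identical windows correlate to 1.0
w = [[1.0, 2.0], [3.0, 5.0]]
print(round(zncc_projection(w, w), 6))  # → 1.0
```

    The projection step is where the savings arise: each window pair is reduced from an N×N correlation to two length-N correlations, at the cost of some spatial discrimination, which is why the abstract recommends a more accurate correction pass afterwards.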

  13. Rarefied gas flow simulations using high-order gas-kinetic unified algorithms for Boltzmann model equations

    NASA Astrophysics Data System (ADS)

    Li, Zhi-Hui; Peng, Ao-Ping; Zhang, Han-Xin; Yang, Jaw-Yen

    2015-04-01

    This article reviews rarefied gas flow computations based on nonlinear model Boltzmann equations using deterministic high-order gas-kinetic unified algorithms (GKUA) in phase space. The nonlinear Boltzmann model equations considered include the BGK model, the Shakhov model, the Ellipsoidal Statistical model and the Morse model. Several high-order gas-kinetic unified algorithms, which combine the discrete velocity ordinate method in velocity space with compact high-order finite-difference schemes in physical space, are developed. The parallel strategies implemented with the accompanying algorithms are of equal importance. Accurate computations of rarefied gas flow problems using various kinetic models over wide ranges of Mach numbers (1.2-20) and Knudsen numbers (0.0001-5) are reported. The effects of different high-resolution schemes on flow resolution under the same discrete velocity ordinate method are studied. A conservative discrete velocity ordinate method that ensures the kinetic compatibility condition is also implemented. The algorithms are tested on one-dimensional unsteady shock-tube problems at various Knudsen numbers, steady normal shock wave structures at different Mach numbers, and two-dimensional flows past a circular cylinder and a NACA 0012 airfoil, to verify the methodology and to simulate gas transport phenomena covering various flow regimes. Large-scale parallel computations of three-dimensional hypersonic rarefied flows over a reusable sphere-cone satellite and a re-entry spacecraft, using some of the largest computer systems available in China, are also illustrated. The computed results agree well with theoretical predictions from gas dynamics, related DSMC results, slip N-S solutions and experimental data. The numerical experience indicates that, although a direct model Boltzmann equation solver in phase space can be computationally expensive, the present GKUAs for kinetic model Boltzmann equations, in conjunction with currently available high-performance parallel computing power, provide a vital engineering tool for analyzing rarefied gas flows covering the whole range of flow regimes in aerospace engineering applications.

  14. Sub-arcsecond observations of the solar X-ray corona

    NASA Technical Reports Server (NTRS)

    Golub, L.; Nystrom, G.; Herant, M.; Kalata, K.; Lovas, I.

    1990-01-01

    Results from a high-resolution, multilayer-coated X-ray imaging telescope, part of the Normal Incidence X-ray Telescope sounding rocket payload, are presented. Images of the peak of a two-ribbon flare showed detailed structure within each ribbon, as well as the expected bright arches of emission connecting the ribbons. The number of X-ray bright points is small, consistent with predictions based on the previous solar cycle. The topology of the magnetic structure is complex and highly tangled, implying that the magnetic complexity of the photosphere is paralleled in the corona.

  15. Overlapping MALDI-Mass Spectrometry Imaging for In-Parallel MS and MS/MS Data Acquisition without Sacrificing Spatial Resolution

    NASA Astrophysics Data System (ADS)

    Hansen, Rebecca L.; Lee, Young Jin

    2017-09-01

    Metabolomics experiments require chemical identification, often through MS/MS analysis. In mass spectrometry imaging (MSI), this necessitates running several serial tissue sections or using a multiplex data acquisition method. We previously developed a multiplex MSI method to obtain MS and MS/MS data in a single experiment, acquiring more chemical information in less data acquisition time. In this method, each raster step is composed of several spiral steps, and each spiral step is used for a separate scan event (e.g., MS or MS/MS). One main limitation of this method is the loss of spatial resolution as the number of spiral steps increases, limiting its applicability for high-spatial-resolution MSI. In this work, we demonstrate that multiplex MS imaging is possible without sacrificing spatial resolution by using overlapping spiral steps instead of the spatially separated spiral steps of the previous work. Significant amounts of matrix and analytes remain after multiple spectral acquisitions, especially with nanoparticle matrices, so that high-quality MS and MS/MS data can be obtained on virtually the same tissue spot. The method was applied to visualize metabolites and acquire their MS/MS spectra in maize leaf cross-sections at 10 μm spatial resolution.

  16. Rapid anatomical brain imaging using spiral acquisition and an expanded signal model.

    PubMed

    Kasper, Lars; Engel, Maria; Barmet, Christoph; Haeberlin, Maximilian; Wilm, Bertram J; Dietrich, Benjamin E; Schmid, Thomas; Gross, Simon; Brunner, David O; Stephan, Klaas E; Pruessmann, Klaas P

    2018-03-01

    We report the deployment of spiral acquisition for high-resolution structural imaging at 7T. Long spiral readouts are rendered manageable by an expanded signal model including static off-resonance and B0 dynamics along with k-space trajectories and coil sensitivity maps. Image reconstruction is accomplished by inversion of the signal model using an extension of the iterative non-Cartesian SENSE algorithm. Spiral readouts up to 25 ms are shown to permit whole-brain 2D imaging at 0.5 mm in-plane resolution in less than a minute. A range of options is explored, including proton-density and T2* contrast, acceleration by parallel imaging, different readout orientations, and the extraction of phase images. Results are shown to exhibit competitive image quality along with high geometric consistency. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Applying LED in full-field optical coherence tomography for gastrointestinal endoscopy

    NASA Astrophysics Data System (ADS)

    Yang, Bor-Wen; Wang, Yu-Yen; Juan, Yu-Shan; Hsu, Sheng-Jie

    2015-08-01

    Optical coherence tomography (OCT) has become an important medical imaging technology due to its non-invasiveness and high resolution. Full-field optical coherence tomography (FF-OCT) is a scanning scheme especially suitable for en face imaging, as it employs a CMOS/CCD device for parallel pixel processing. FF-OCT can also be applied to high-speed endoscopic imaging. Applying cylindrical scanning and a right-angle prism, we successfully obtained a 360° tomogram of the inner wall of an intestinal cavity through an FF-OCT system with an LED source. The 10 μm-scale resolution enables the early detection of gastrointestinal lesions, which can increase detection rates for esophageal, stomach, or vaginal cancer. All devices used in this system can be integrated using MOEMS technology, contributing to studies of gastrointestinal medicine and advanced endoscopy technology.

  18. Adaptive-optics optical coherence tomography processing using a graphics processing unit.

    PubMed

    Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T

    2014-01-01

    Graphics processing units are increasingly being used for scientific computing because of their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we use a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving this super-high-resolution technology closer to clinical viability.

  19. Computational imaging through a fiber-optic bundle

    NASA Astrophysics Data System (ADS)

    Lodhi, Muhammad A.; Dumas, John Paul; Pierce, Mark C.; Bajwa, Waheed U.

    2017-05-01

    Compressive sensing (CS) has proven to be a viable method for reconstructing high-resolution signals using low-resolution measurements. Integrating CS principles into an optical system allows for higher-resolution imaging using lower-resolution sensor arrays. In contrast to prior work on CS-based imaging, our focus in this paper is on imaging through fiber-optic bundles, in which manufacturing constraints limit individual fiber spacing to around 2 μm. This limitation essentially renders fiber-optic bundles low-resolution sensors with relatively few resolvable points per unit area. These fiber bundles are often used in minimally invasive medical instruments for viewing tissue at macro and microscopic levels. While the compact nature and flexibility of fiber bundles allow for excellent tissue access in vivo, imaging through fiber bundles does not provide the fine details of tissue features that are demanded in some medical situations. Our hypothesis is that adapting existing CS principles to fiber bundle-based optical systems will overcome the resolution limitation inherent in fiber-bundle imaging. In a previous paper we examined the practical challenges involved in implementing a highly parallel version of the single-pixel camera while focusing on synthetic objects. This paper extends the same architecture to fiber-bundle imaging under incoherent illumination and addresses practical issues associated with imaging physical objects. Additionally, we model the optical non-idealities in the system to reduce modeling errors.

  20. New prototype of acousto-optical radio-wave spectrometer with parallel frequency processing for astrophysical applications

    NASA Astrophysics Data System (ADS)

    Shcherbakov, Alexandre S.; Chavez Dagostino, Miguel; Arellanes, Adan O.; Aguirre Lopez, Arturo

    2016-09-01

    We have developed a multi-band spectrometer with several spatially parallel optical arms for the combined processing of their data flows. Such multi-band capability has various applications in astrophysical scenarios at different scales, from objects in the distant universe to planetary atmospheres in the Solar system. Each optical arm has its own performance characteristics, providing parallel multi-band observations on different scales simultaneously. This capability is achieved by designing each optical arm individually, exploiting different materials for acousto-optical cells operating in various regimes, frequency ranges and light wavelengths from independent light sources. Individual beam shapers provide both the needed incident light polarization and the required apodization to increase the dynamic range of the system. After parallel acousto-optical processing, the data flows are united on a joint CCD matrix at the stage of combined electronic data processing. At present, the prototype combines three bands, i.e., it includes three spatial optical arms. The first, low-frequency arm operates at central frequencies of 60-80 MHz with a 40 MHz frequency bandwidth. The second arm covers middle frequencies of 350-500 MHz with a frequency bandwidth of 200-300 MHz. The third arm is intended for ultra-high-frequency radio-wave signals of about 1.0-1.5 GHz with a frequency bandwidth <300 MHz. Currently, the spectrometer has the following preliminary performance: the first arm exhibits a frequency resolution of 20 kHz, while the second and third arms give a resolution of 150-200 kHz. The numbers of resolvable spots are 1500-2000, depending on the regime of operation. A fourth optical arm for the frequency range around 3.5 GHz is currently under construction.

  1. Accelerated high-resolution photoacoustic tomography via compressed sensing

    NASA Astrophysics Data System (ADS)

    Arridge, Simon; Beard, Paul; Betcke, Marta; Cox, Ben; Huynh, Nam; Lucka, Felix; Ogunlade, Olumide; Zhang, Edward

    2016-12-01

    Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point by point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion are often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining good spatial resolution. First, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissue structures with suitable sparsity constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning, as well as to reduce the channel count of parallelized schemes that use detector arrays.

  2. Study of multi-channel optical system based on the compound eye

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Fu, Yuegang; Liu, Zhiying; Dong, Zhengchao

    2014-09-01

    As an important part of machine vision, compound-eye optical systems are characterized by high resolution and a large field of view (FOV). By applying compound-eye optical systems to target detection and recognition, the conflict between large FOV and high resolution in traditional single-aperture optical systems can be resolved effectively, while also exploiting the parallel processing ability of such systems. In this paper, the imaging features of compound-eye optical systems are analyzed. After discussing the relationship between the FOV of each subsystem and the overlap (contact ratio) of the FOV of the whole system, a method to define the FOV of the subsystems is presented, and a compound-eye optical system based on a large FOV synthesized from multiple channels is designed. The system consists of a central optical system and an array of subsystems, in which the array subsystems are used to capture the target and a high-resolution image of the target is obtained by the central optical system. With the advantages of small volume, light weight and rapid response, the optical system can detect objects at 3 km within a 60° FOV without any scanning device. Objects in the central field 2ω = 5.1° can be imaged with high resolution so that they can be recognized.

  3. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter

    PubMed Central

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-01-01

    In order to improve the accuracy of the ultrasonic phased array focusing time delay, we analyzed the original interpolation Cascade-Integrator-Comb (CIC) filter and proposed a parallel algorithm for an 8× interpolation CIC filter, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derived the general formula for the parallel algorithm for an arbitrary-factor interpolation CIC filter and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. To address the known shortcomings of the CIC filter, we added compensation: the compensated CIC filter's pass band is flatter, its transition band steeper, and its stop-band attenuation larger. Finally, we verified the feasibility of the algorithm on a Field Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection. PMID:29023385

  4. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter.

    PubMed

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-10-12

    In order to improve the accuracy of the ultrasonic phased array focusing time delay, we analyzed the original interpolation Cascade-Integrator-Comb (CIC) filter and proposed a parallel algorithm for an 8× interpolation CIC filter, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derived the general formula for the parallel algorithm for an arbitrary-factor interpolation CIC filter and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. To address the known shortcomings of the CIC filter, we added compensation: the compensated CIC filter's pass band is flatter, its transition band steeper, and its stop-band attenuation larger. Finally, we verified the feasibility of the algorithm on a Field Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
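    A CIC interpolator runs its comb stage(s) at the low input rate, zero-stuffs the result by the interpolation factor R, and runs its integrator stage(s) at the high output rate. A minimal single-stage, single-channel sketch of that structure (illustrative only; the paper's parallel, multichannel-decomposed 8× design is more elaborate):

```python
def cic_interpolate(x, R=8, stages=1):
    """Single-channel CIC interpolation: comb -> zero-stuff by R -> integrate."""
    y = list(x)
    for _ in range(stages):          # comb stages at the input rate
        prev, combed = 0.0, []
        for v in y:
            combed.append(v - prev)  # first difference
            prev = v
        y = combed
    up = []
    for v in y:                      # zero-stuffing: insert R-1 zeros per sample
        up.append(v)
        up.extend([0.0] * (R - 1))
    for _ in range(stages):          # integrator stages at the output rate
        acc, integ = 0.0, []
        for v in up:
            acc += v                 # running sum
            integ.append(acc)
        up = integ
    return up

# a constant input is reproduced at 8x the rate (single stage, unity DC gain)
print(cic_interpolate([1.0] * 4, R=8))  # → 32 samples, all 1.0
```

    The higher delay resolution comes from the 8× oversampled output grid: with a 125 MHz clock, the effective sample spacing drops from 8 ns to 1 ns, matching the delay accuracy reported above.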

  5. Lens-based wavefront sensorless adaptive optics swept source OCT

    NASA Astrophysics Data System (ADS)

    Jian, Yifan; Lee, Sujin; Ju, Myeong Jin; Heisler, Morgan; Ding, Weiguang; Zawadzki, Robert J.; Bonora, Stefano; Sarunic, Marinko V.

    2016-06-01

    Optical coherence tomography (OCT) has revolutionized modern ophthalmology, providing depth-resolved images of the retinal layers in a system suited to a clinical environment. Although the axial resolution of an OCT system, which is a function of the light source bandwidth, is sufficient to resolve retinal features on a micrometer scale, the lateral resolution depends on the delivery optics and is limited by ocular aberrations. Through the combination of wavefront sensorless adaptive optics and dual deformable transmissive optical elements, we present a compact lens-based OCT system at an imaging wavelength of 1060 nm for high-resolution retinal imaging. We utilized a commercially available variable-focal-length lens to correct the wide range of defocus commonly found in patients' eyes, and a novel multi-actuator adaptive lens for aberration correction, achieving near diffraction-limited imaging performance at the retina. With a parallel-processing computational platform, high-resolution cross-sectional and en face retinal image acquisition and display were performed in real time. To demonstrate the system's functionality and clinical utility, we present images of the photoreceptor cone mosaic and other retinal layers acquired in vivo from research subjects.
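    Wavefront sensorless adaptive optics dispenses with a wavefront sensor and instead adjusts the corrector to maximize an image-quality metric directly. A minimal coordinate-search sketch of that idea (the quadratic toy metric stands in for image sharpness; the paper's actual optimization scheme may differ):

```python
def sensorless_ao(metric, n_modes, span=1.0, steps=5, sweeps=2):
    """Coordinate search over corrector modes, maximizing an image metric."""
    coeffs = [0.0] * n_modes
    for _ in range(sweeps):
        for m in range(n_modes):
            best_c, best_v = coeffs[m], metric(coeffs)
            for i in range(steps):               # scan candidate coefficients
                c = -span + 2 * span * i / (steps - 1)
                trial = coeffs[:]
                trial[m] = c
                v = metric(trial)
                if v > best_v:                   # keep the sharpest setting
                    best_c, best_v = c, v
            coeffs[m] = best_c
    return coeffs

# toy "sharpness" peaking at corrector coefficients (0.5, -0.5)
sharpness = lambda c: -((c[0] - 0.5) ** 2 + (c[1] + 0.5) ** 2)
print(sensorless_ao(sharpness, 2))  # → [0.5, -0.5]
```

    In practice the metric would be computed from the live OCT en face image, and each mode would drive one degree of freedom of the adaptive lens.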

  6. Supramolecular organization and chiral resolution of p-terphenyl-m-dicarbonitrile on the Ag(111) surface.

    PubMed

    Marschall, Matthias; Reichert, Joachim; Seufert, Knud; Auwärter, Willi; Klappenberger, Florian; Weber-Bargioni, Alexander; Klyatskaya, Svetlana; Zoppellaro, Giorgio; Nefedov, Alexei; Strunskus, Thomas; Wöll, Christof; Ruben, Mario; Barth, Johannes V

    2010-05-17

    The supramolecular organization and layer formation of the non-linear, prochiral molecule [1,1';4',1'']-terphenyl-3,3''-dicarbonitrile adsorbed on the Ag(111) surface is investigated by scanning tunneling microscopy (STM) and near-edge X-ray absorption fine-structure spectroscopy (NEXAFS). Upon two-dimensional confinement, the molecules are deconvoluted into three stereoisomers, that is, two mirror-symmetric trans-species and one cis-species. STM measurements reveal large and regular islands following room-temperature deposition, whereby NEXAFS confirms a flat adsorption geometry with the electronic pi-system parallel to the surface plane. The ordering within the supramolecular arrays reflects a substrate templating effect, steric constraints and the operation of weak lateral interactions mainly originating from the carbonitrile endgroups. High-resolution data at room temperature reveal enantiomorphic characteristics of the molecular packing schemes in different domains of the arrays, indicative of chiral resolution during the 2D molecular self-assembly process. At submonolayer coverage, supramolecular islands coexist with a disordered fluid phase of highly mobile molecules. Following thermal quenching (down to 6 K), we find extended supramolecular ribbons stabilized by attractive and directional noncovalent interactions, the formation of which again reflects a chiral resolution of trans-species.

  7. Dynamic inundation mapping of Hurricane Harvey flooding in the Houston metro area using hyper-resolution modeling and quantitative image reanalysis

    NASA Astrophysics Data System (ADS)

    Noh, S. J.; Lee, J. H.; Lee, S.; Zhang, Y.; Seo, D. J.

    2017-12-01

    Hurricane Harvey was one of the most extreme weather events in Texas history and left significant damage in the Houston and adjoining coastal areas. To better understand the relative impact on urban flooding of the extreme amount and spatial extent of rainfall, the unique geography, land use, and storm surge, high-resolution water modeling is necessary so that natural and man-made components are fully resolved. In this presentation, we reconstruct the spatiotemporal evolution of inundation during Hurricane Harvey using hyper-resolution modeling and quantitative image reanalysis. The two-dimensional urban flood model used is based on the dynamic wave approximation and 10 m-resolution terrain data, and is forced by radar-based multisensor quantitative precipitation estimates. The model domain includes Buffalo, Brays, Greens and White Oak Bayous in Houston. The model is run using hybrid parallel computing. To evaluate the dynamic inundation mapping, we combine various qualitative crowdsourced images and video footage with LiDAR-based terrain data.

  8. Hybrid Multiscale Finite Volume method for multiresolution simulations of flow and reactive transport in porous media

    NASA Astrophysics Data System (ADS)

    Barajas-Solano, D. A.; Tartakovsky, A. M.

    2017-12-01

    We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine scale) flow and transport models with lower resolution (coarse) models to locally refine both spatial resolution and transport models. The fine scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques, and the capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.

  9. PENTACLE: Parallelized particle-particle particle-tree code for planet formation

    NASA Astrophysics Data System (ADS)

    Iwasawa, Masaki; Oshino, Shoichi; Fujii, Michiko S.; Hori, Yasunori

    2017-10-01

    We have newly developed a parallelized particle-particle particle-tree code for planet formation, PENTACLE, which is a parallelized hybrid N-body integrator executed on a CPU-based (super)computer. PENTACLE uses a fourth-order Hermite algorithm to calculate gravitational interactions between particles within a cut-off radius and a Barnes-Hut tree method for gravity from particles beyond. It also implements an open-source library designed for full automatic parallelization of particle simulations, FDPS (Framework for Developing Particle Simulator), to parallelize a Barnes-Hut tree algorithm for a memory-distributed supercomputer. These allow us to handle 1-10 million particles in a high-resolution N-body simulation on CPU clusters for collisional dynamics, including physical collisions in a planetesimal disc. In this paper, we show the performance and the accuracy of PENTACLE in terms of the cut-off radius R_cut and the time step Δt. It turns out that the accuracy of a hybrid N-body simulation is controlled through Δt/R_cut, and Δt/R_cut ≈ 0.1 is necessary to simulate accurately the accretion process of a planet for ≥10^6 yr. For all those interested in large-scale particle simulations, PENTACLE, customized for planet formation, will be freely available from https://github.com/PENTACLE-Team/PENTACLE under the MIT licence.
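
    The particle-particle particle-tree force split described above can be illustrated in a few lines: gravity from neighbours inside the cut-off radius is summed directly (PENTACLE integrates this part with a fourth-order Hermite scheme), while more distant particles are approximated. The sketch below is illustrative Python/NumPy only, and collapses the entire far field into a single centre-of-mass monopole rather than a real Barnes-Hut octree; `hybrid_accel` and its arguments are hypothetical names, not PENTACLE's API.

```python
import numpy as np

def hybrid_accel(pos, mass, r_cut, G=1.0):
    """Toy P3T force split (ours, not PENTACLE's code): direct summation
    over neighbours inside r_cut, and a single centre-of-mass monopole
    for everything beyond (a real tree code walks an octree instead)."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]
        r = np.linalg.norm(d, axis=1)
        near = (r > 0) & (r < r_cut)
        far = r >= r_cut
        # near field: exact pairwise gravity (Hermite-integrated in PENTACLE)
        acc[i] += np.sum(G * mass[near, None] * d[near] / r[near, None]**3, axis=0)
        # far field: lump all distant particles into one monopole
        if far.any():
            m_far = mass[far].sum()
            com = (mass[far, None] * pos[far]).sum(axis=0) / m_far
            dv = com - pos[i]
            acc[i] += G * m_far * dv / np.linalg.norm(dv)**3
    return acc
```

    In the real code the far-field monopole would be replaced by a tree walk with an opening-angle criterion, and the near field advanced on individual, shorter time steps.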

  10. Improvement of resolution in full-view linear-array photoacoustic computed tomography using a novel adaptive weighting method

    NASA Astrophysics Data System (ADS)

    Omidi, Parsa; Diop, Mamadou; Carson, Jeffrey; Nasiriavanaki, Mohammadreza

    2017-03-01

    Linear-array-based photoacoustic computed tomography is a popular methodology for deep and high resolution imaging. However, issues such as phase aberration, side-lobe effects, and propagation limitations deteriorate the resolution. The effect of phase aberration due to acoustic attenuation and the assumption of a constant speed of sound (SoS) can be reduced by applying an adaptive weighting method such as the coherence factor (CF). Utilizing an adaptive beamforming algorithm such as minimum variance (MV) can improve the resolution at the focal point by eliminating the side-lobes. Moreover, the invisibility of directional objects emitting parallel to the detection plane, such as vessels and other absorbing structures stretched in the direction perpendicular to the detection plane, can degrade resolution. In this study, we propose a full-view array-level weighting algorithm in which different weights are assigned to different positions of the linear array based on an orientation algorithm that uses the histogram of oriented gradients (HOG). Simulation results obtained from a synthetic phantom show the superior performance of the proposed method over existing reconstruction methods.
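
    The coherence factor mentioned above has a compact standard definition: the coherently summed channel energy divided by N times the incoherently summed energy, which down-weights pixels whose per-element delayed signals disagree in phase. A minimal NumPy sketch (the function name is ours, not the paper's):

```python
import numpy as np

def coherence_factor(channel_data):
    """Coherence factor for one image pixel.
    channel_data: per-element delayed samples for this pixel, shape (N,).
    CF = |sum_i s_i|^2 / (N * sum_i |s_i|^2)."""
    num = np.abs(np.sum(channel_data))**2
    den = len(channel_data) * np.sum(np.abs(channel_data)**2)
    return float(num / den) if den > 0 else 0.0
```

    CF equals 1 for perfectly coherent channel data and falls toward zero as the channels decorrelate; multiplying each beamformed pixel by its CF suppresses side-lobe and aberration artifacts at the cost of some speckle texture.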

  11. Experimental study on microsphere assisted nanoscope in non-contact mode

    NASA Astrophysics Data System (ADS)

    Ling, Jinzhong; Li, Dancui; Liu, Xin; Wang, Xiaorui

    2018-07-01

    A microsphere-assisted nanoscope has been proposed in the existing literature to capture super-resolution images of nano-structures beneath a microsphere attached to the sample surface. In this paper, a microsphere-assisted nanoscope working in non-contact mode is designed and demonstrated, in which the microsphere is held with a gap separating it from the sample surface. With a gap, the microsphere can be moved parallel to the sample surface non-invasively, so as to observe all areas of interest. Furthermore, the influence of gap size on image resolution is studied experimentally. Only when the microsphere is close enough to the sample surface can a super-resolution image be obtained. Generally, the resolution decreases as the gap increases, because the contribution of the evanescent wave disappears. To keep an appropriate gap size, a quantitative method is implemented to estimate the gap variation by observing Newton's rings around the microsphere, serving as real-time feedback for tuning the gap size. With a constant gap, a large-area image with high resolution can be obtained during microsphere scanning. Our study of the non-contact mode makes the microsphere-assisted nanoscope more practicable and easier to implement.
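
    The Newton's-rings feedback described above can be made quantitative. For a sphere of radius R above a flat surface, the air gap at radial distance r from the contact axis is approximately t0 + r²/(2R), and dark reflection fringes occur where twice the gap is an integer number of wavelengths, so the measured radius of a given dark ring fixes the minimum gap t0. The helper below is an illustrative sketch under those textbook assumptions (air gap, normal incidence), not the calibration procedure used in the paper:

```python
def gap_from_ring(r_m, m, wavelength, R):
    """Minimum sphere-to-surface gap t0 inferred from the measured radius
    r_m of the m-th dark Newton's ring (reflection geometry).
    Assumed dark-fringe condition: 2 * (t0 + r_m**2 / (2*R)) = m * wavelength.
    All lengths in the same unit (e.g. metres)."""
    return m * wavelength / 2.0 - r_m**2 / (2.0 * R)
```

    Tracking r_m frame by frame turns the ring pattern into the real-time gap readout the abstract describes: a shrinking ring radius signals a growing gap, and vice versa.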

  12. Design of k-Space Channel Combination Kernels and Integration with Parallel Imaging

    PubMed Central

    Beatty, Philip J.; Chang, Shaorong; Holmes, James H.; Wang, Kang; Brau, Anja C. S.; Reeder, Scott B.; Brittain, Jean H.

    2014-01-01

    Purpose In this work, a new method is described for producing local k-space channel combination kernels using a small amount of low-resolution multichannel calibration data. Additionally, this work describes how these channel combination kernels can be combined with local k-space unaliasing kernels produced by the calibration phase of parallel imaging methods such as GRAPPA, PARS and ARC. Methods Experiments were conducted to evaluate both the image quality and computational efficiency of the proposed method compared to a channel-by-channel parallel imaging approach with image-space sum-of-squares channel combination. Results Results indicate comparable image quality overall, with some very minor differences seen in reduced field-of-view imaging. It was demonstrated that this method enables a speed up in computation time on the order of 3–16X for 32-channel data sets. Conclusion The proposed method enables high quality channel combination to occur earlier in the reconstruction pipeline, reducing computational and memory requirements for image reconstruction. PMID:23943602
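
    For reference, the image-space sum-of-squares combination that the proposed k-space kernels are benchmarked against is a one-liner per pixel: the square root of the summed squared channel magnitudes. A NumPy sketch (the array layout and function name are our assumptions):

```python
import numpy as np

def sos_combine(channel_images):
    """Image-space sum-of-squares channel combination, the conventional
    baseline: per-pixel root of summed squared channel magnitudes.
    channel_images: complex array of shape (channels, height, width)."""
    return np.sqrt(np.sum(np.abs(channel_images)**2, axis=0))
```

    The paper's contribution is to move an equivalent combination earlier in the pipeline, as small local kernels applied in k-space, so that far fewer per-channel images need to be transformed and stored, which is where the reported 3-16X speed-up for 32-channel data comes from.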

  13. Thermal Programmed Desorption of C32H66

    NASA Astrophysics Data System (ADS)

    Cisternas, M.; Del Campo, V.; Cabrera, A. L.; Volkmann, U. G.; Hansen, F. Y.; Taub, H.

    2011-03-01

    Alkanes are of interest as prototypes for more complex molecules and membranes. In this work we study the desorption kinetics of dotriacontane (C32) adsorbed on a SiO2/Si substrate. We combine in our instrument High Resolution Ellipsometry (HRE) and Thermal Programmed Desorption (TPD). C32 monolayers were deposited in high vacuum from a Knudsen cell onto the substrate, monitoring sample thickness in situ with HRE. Film thickness was in the range of up to 100 Å, forming a parallel bilayer and a perpendicular C32 layer. The mass spectrometer (RGA) of the TPD section detected the shift of the desorption peaks at the different heating rates applied to the sample. The mass registered with the RGA was AMU 57 for both parallel and perpendicular layers, due to the abundance of this mass value in the fragmentation of C32 in the mass spectrometer's ionizer. Moreover, the AMU 57 signal does not interfere with other signals coming from residual gases in the vacuum chamber. The desorption energies obtained were ΔEdes = 11.9 kJ/mol for the perpendicular layer and ΔEdes = 23.5 kJ/mol for the parallel bilayer.
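
    The peak-shift-with-heating-rate measurement described above is commonly reduced to a desorption energy via a Kissinger-type analysis: for heating rates β and the corresponding desorption-peak temperatures Tp, ln(β/Tp²) plotted against 1/Tp is a straight line of slope -E_des/R. A sketch of that fit (this is the standard textbook reduction, not necessarily the exact analysis used by the authors):

```python
import numpy as np

R_GAS = 8.314  # gas constant, J / (mol K)

def desorption_energy(betas, peak_temps):
    """Kissinger-style analysis of TPD peak shifts.
    betas: heating rates (K/s); peak_temps: desorption-peak temperatures (K).
    Fits ln(beta / Tp^2) vs 1/Tp; the slope is -E_des / R."""
    x = 1.0 / np.asarray(peak_temps, float)
    y = np.log(np.asarray(betas, float) / np.asarray(peak_temps, float)**2)
    slope, _ = np.polyfit(x, y, 1)
    return -slope * R_GAS  # desorption energy in J/mol
```

    Running several TPD ramps at different β and feeding the observed peak temperatures to this fit yields E_des directly, without assuming a pre-exponential factor.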

  14. Globalized Newton-Krylov-Schwarz Algorithms and Software for Parallel Implicit CFD

    NASA Technical Reports Server (NTRS)

    Gropp, W. D.; Keyes, D. E.; McInnes, L. C.; Tidriri, M. D.

    1998-01-01

    Implicit solution methods are important in applications modeled by PDEs with disparate temporal and spatial scales. Because such applications require high resolution with reasonable turnaround, "routine" parallelization is essential. The pseudo-transient matrix-free Newton-Krylov-Schwarz (Psi-NKS) algorithmic framework is presented as an answer. We show that, for the classical problem of three-dimensional transonic Euler flow about an M6 wing, Psi-NKS can simultaneously deliver: globalized, asymptotically rapid convergence through adaptive pseudo-transient continuation and Newton's method; reasonable parallelizability for an implicit method through deferred synchronization and favorable communication-to-computation scaling in the Krylov linear solver; and high per-processor performance through attention to distributed memory and cache locality, especially through the Schwarz preconditioner. Two discouraging features of Psi-NKS methods are their sensitivity to the coding of the underlying PDE discretization and the large number of parameters that must be selected to govern convergence. We therefore distill several recommendations from our experience and from our reading of the literature on various algorithmic components of Psi-NKS, and we describe a freely available, MPI-based portable parallel software implementation of the solver employed here.
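
    The pseudo-transient continuation at the heart of Psi-NKS can be sketched compactly: instead of a raw Newton iteration on F(u) = 0, each step solves (I/Δτ + J(u)) δu = -F(u) and then grows Δτ, so the method behaves like damped time stepping far from the solution and like full Newton near it. The toy version below uses a dense direct solve where the real framework uses a Schwarz-preconditioned, matrix-free Krylov iteration, and a fixed growth factor where the real method adapts Δτ to the residual; all names are ours:

```python
import numpy as np

def psi_tc_newton(F, J, u0, dt0=1e-2, grow=2.0, tol=1e-10, max_iter=200):
    """Pseudo-transient continuation (the 'Psi' in Psi-NKS), sketched:
    solve (I/dt + J(u)) du = -F(u), then enlarge dt so the iteration
    morphs from damped time stepping into full Newton near the root."""
    u, dt = np.asarray(u0, float), dt0
    for _ in range(max_iter):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        A = np.eye(len(u)) / dt + J(u)   # Krylov-Schwarz solve in the real code
        u = u + np.linalg.solve(A, -r)
        dt *= grow                        # dt -> infinity recovers pure Newton
    return u
```

    The globalization benefit is that early, heavily damped steps keep the iterate physical even from a poor initial guess, which is exactly what raw Newton lacks on transonic flow problems.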

  15. A high-resolution physically-based global flood hazard map

    NASA Astrophysics Data System (ADS)

    Kaheil, Y.; Begnudelli, L.; McCollum, J.

    2016-12-01

    We present the results from a physically-based global flood hazard model. The model uses a physically-based hydrologic model to simulate river discharges, and a 2D hydrodynamic model to simulate inundation. The model is set up such that it allows the application of large-scale flood hazard modeling through efficient use of parallel computing. For hydrology, we use the Hillslope River Routing (HRR) model. HRR accounts for surface hydrology using Green-Ampt parameterization. The model is calibrated against observed discharge data from the Global Runoff Data Centre (GRDC) network, among other publicly-available datasets. The parallel-computing framework takes advantage of the river network structure to minimize cross-processor messages, and thus significantly increases computational efficiency. For inundation, we implemented a computationally-efficient 2D finite-volume model with wetting/drying. The approach consists of simulating flood along the river network by forcing the hydraulic model with the streamflow hydrographs simulated by HRR, scaled up to certain return levels, e.g. 100 years. The model is distributed such that each available processor takes the next simulation. Given an approximate cost criterion, the simulations are ordered from most-demanding to least-demanding to ensure that all processors finish almost simultaneously. Upon completing all simulations, the maximum envelope of flood depth is taken to generate the final map. The model is applied globally, with selected results shown from different continents and regions. The maps shown depict flood depth and extent at different return periods. These maps, which are currently available at 3 arc-sec resolution (~90 m), can be made available at higher resolutions where high-resolution DEMs are available. The maps can be utilized by flood risk managers at the national, regional, and even local levels to further understand their flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs.
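
    The "most-demanding to least-demanding" dispatch described above is essentially longest-processing-time-first list scheduling: sort the simulations by estimated cost and always hand the next one to the currently least-loaded processor, which keeps finish times nearly equal. A small self-contained sketch (the cost estimates and function name are hypothetical, not from the paper):

```python
import heapq

def lpt_schedule(costs, n_procs):
    """Longest-processing-time-first assignment of jobs to processors.
    costs: estimated run time per simulation; returns {proc: [job ids]}.
    Greedy rule: next-largest job goes to the least-loaded processor."""
    heap = [(0.0, p) for p in range(n_procs)]  # (current load, processor id)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(n_procs)}
    for job, cost in sorted(enumerate(costs), key=lambda jc: -jc[1]):
        load, p = heapq.heappop(heap)
        assignment[p].append(job)
        heapq.heappush(heap, (load + cost, p))
    return assignment
```

    LPT is a classic heuristic with a worst-case makespan within 4/3 of optimal, which is more than adequate for balancing thousands of independent reach-scale inundation runs.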

  16. Towards a large-scale scalable adaptive heart model using shallow tree meshes

    NASA Astrophysics Data System (ADS)

    Krause, Dorian; Dickopf, Thomas; Potse, Mark; Krause, Rolf

    2015-10-01

    Electrophysiological heart models are sophisticated computational tools that place high demands on the computing hardware due to the high spatial resolution required to capture the steep depolarization front. To address this challenge, we present a novel adaptive scheme for resolving the depolarization front accurately using adaptivity in space. Our adaptive scheme is based on locally structured meshes. These tensor meshes in space are organized in a parallel forest of trees, which allows us to resolve complicated geometries and to realize high variations in the local mesh sizes with a minimal memory footprint in the adaptive scheme. We discuss both a non-conforming mortar element approximation and a conforming finite element space and present an efficient technique for the assembly of the respective stiffness matrices using matrix representations of the inclusion operators into the product space on the so-called shallow tree meshes. We analyzed the parallel performance and scalability for a two-dimensional ventricle slice as well as for a full large-scale heart model. Our results demonstrate that the method has good performance and high accuracy.

  17. Magnetosphere simulations with a high-performance 3D AMR MHD Code

    NASA Astrophysics Data System (ADS)

    Gombosi, Tamas; Dezeeuw, Darren; Groth, Clinton; Powell, Kenneth; Song, Paul

    1998-11-01

    BATS-R-US is a high-performance 3D AMR MHD code for space physics applications running on massively parallel supercomputers. In BATS-R-US the electromagnetic and fluid equations are solved with a high-resolution upwind numerical scheme in a tightly coupled manner. The code is very robust and it is capable of spanning a wide range of plasma parameters (such as β, acoustic and Alfvénic Mach numbers). Our code is highly scalable: it achieved a sustained performance of 233 GFLOPS on a Cray T3E-1200 supercomputer with 1024 PEs. This talk reports results from the BATS-R-US code for the GGCM (Geospace General Circulation Model) Phase 1 Standard Model Suite. This model suite contains 10 different steady-state configurations: 5 IMF clock angles (north, south, and three equally spaced angles in between) with 2 IMF field strengths for each angle (5 nT and 10 nT). The other parameters are: solar wind speed = 400 km/s; solar wind number density = 5 protons/cc; Hall conductance = 0; Pedersen conductance = 5 S; parallel conductivity = ∞.

  18. Dual Super-Systolic Core for Real-Time Reconstructive Algorithms of High-Resolution Radar/SAR Imaging Systems

    PubMed Central

    Atoche, Alejandro Castillo; Castillo, Javier Vázquez

    2012-01-01

    A high-speed dual super-systolic core for reconstructive signal processing (SP) operations consists of a double parallel systolic array (SA) machine in which each processing element of the array is also conceptualized as another SA in a bit-level fashion. In this study, we addressed the design of a high-speed dual super-systolic array (SSA) core for the enhancement/reconstruction of remote sensing (RS) imaging of radar/synthetic aperture radar (SAR) sensor systems. The selected reconstructive SP algorithms are efficiently transformed in their parallel representation and then, they are mapped into an efficient high performance embedded computing (HPEC) architecture in reconfigurable Xilinx field programmable gate array (FPGA) platforms. As an implementation test case, the proposed approach was aggregated in a HW/SW co-design scheme in order to solve the nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) from a remotely sensed scene. We show how such dual SSA core, drastically reduces the computational load of complex RS regularization techniques achieving the required real-time operational mode. PMID:22736964

  19. MR-based source localization for MR-guided HDR brachytherapy

    NASA Astrophysics Data System (ADS)

    Beld, E.; Moerland, M. A.; Zijlstra, F.; Viergever, M. A.; Lagendijk, J. J. W.; Seevinck, P. R.

    2018-04-01

    For the purpose of MR-guided high-dose-rate (HDR) brachytherapy, a method for real-time localization of an HDR brachytherapy source was developed, which requires high spatial and temporal resolutions. MR-based localization of an HDR source serves two main aims. First, it enables real-time treatment verification by determination of the HDR source positions during treatment. Second, when using a dummy source, MR-based source localization provides an automatic detection of the source dwell positions after catheter insertion, allowing elimination of the catheter reconstruction procedure. Localization of the HDR source was conducted by simulation of the MR artifacts, followed by a phase correlation localization algorithm applied to the MR images and the simulated images, to determine the position of the HDR source in the MR images. To increase the temporal resolution of the MR acquisition, the spatial resolution was decreased, and a subpixel localization operation was introduced. Furthermore, parallel imaging (sensitivity encoding) was applied to further decrease the MR scan time. The localization method was validated by a comparison with CT, and the accuracy and precision were investigated. The results demonstrated that the described method could be used to determine the HDR source position with a high accuracy (0.4–0.6 mm) and a high precision (⩽0.1 mm), at high temporal resolutions (0.15–1.2 s per slice). This would enable real-time treatment verification as well as an automatic detection of the source dwell positions.
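
    The phase-correlation step described above has a standard FFT formulation: normalize the cross-power spectrum of the acquired image and the simulated artifact template to unit magnitude, inverse-transform, and read the translation off the correlation peak. The paper additionally refines this peak to subpixel precision, which the sketch below omits; function and variable names are ours:

```python
import numpy as np

def phase_correlation_shift(image, template):
    """Integer-pixel phase correlation: the inverse FFT of the
    unit-magnitude cross-power spectrum peaks at the circular shift
    between the two inputs (e.g. image vs. simulated source artifact)."""
    cross = np.fft.fft2(image) * np.conj(np.fft.fft2(template))
    cross /= np.abs(cross) + 1e-12       # whiten: keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap large positive indices around to signed shifts
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))
```

    Because the whitened spectrum ignores amplitude, the match is robust to the intensity differences between the measured artifact and its simulation; fitting a small neighbourhood around the peak would then give the subpixel position the paper's low-resolution, high-frame-rate acquisition relies on.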

  20. Dynamic grid refinement for partial differential equations on parallel computers

    NASA Technical Reports Server (NTRS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.

  1. Time-frequency model for echo-delay resolution in wideband biosonar.

    PubMed

    Neretti, Nicola; Sanderson, Mark I; Intrator, Nathan; Simmons, James A

    2003-04-01

    A time/frequency model of the bat's auditory system was developed to examine the basis for the fine (approximately 2 μs) echo-delay resolution of big brown bats (Eptesicus fuscus), and its performance at resolving closely spaced FM sonar echoes in the bat's 20-100-kHz band at different signal-to-noise ratios was computed. The model uses parallel bandpass filters spaced over this band to generate envelopes that individually can have much lower bandwidth than the bat's ultrasonic sonar sounds and still achieve fine delay resolution. Because fine delay separations are inside the integration time of the model's filters (approximately 250-300 μs), resolving them means using interference patterns along the frequency dimension (spectral peaks and notches). The low bandwidth content of the filter outputs is suitable for relay of information to higher auditory areas that have intrinsically poor temporal response properties. If implemented in fully parallel analog-digital hardware, the model is computationally extremely efficient and would improve resolution in military and industrial sonar receivers.
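
    The spectral-interference mechanism invoked above is easy to state: two overlapping echoes separated by a delay τ sum to a signal whose spectrum carries notches spaced 1/τ apart, at frequencies (k + 1/2)/τ, so delay information survives even when τ is far shorter than the filters' integration time. An illustrative helper (ours, not the paper's model) that lists the in-band notch frequencies:

```python
import numpy as np

def notch_frequencies(tau, f_lo, f_hi):
    """Notch frequencies produced by two overlapping echoes with delay
    separation tau (seconds): the spectrum of s(t) + s(t - tau) vanishes
    at f = (k + 1/2) / tau. Returns the notches inside [f_lo, f_hi] (Hz)."""
    k = np.arange(int(f_lo * tau), int(f_hi * tau) + 1)
    f = (k + 0.5) / tau
    return f[(f >= f_lo) & (f <= f_hi)]
```

    For example, a 50 μs two-glint separation places notches every 20 kHz across the bat's 20-100 kHz band; a filterbank that merely registers which channels sit in a notch can therefore read off the delay without any fast temporal processing.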

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dritz, K.W.; Boyle, J.M.

    This paper addresses the problem of measuring and analyzing the performance of fine-grained parallel programs running on shared-memory multiprocessors. Such processors use locking (either directly in the application program, or indirectly in a subroutine library or the operating system) to serialize accesses to global variables. Given sufficiently high rates of locking, the chief factor preventing linear speedup (besides lack of adequate inherent parallelism in the application) is lock contention - the blocking of processes that are trying to acquire a lock currently held by another process. We show how a high-resolution, low-overhead clock may be used to measure both lockmore » contention and lack of parallel work. Several ways of presenting the results are covered, culminating in a method for calculating, in a single multiprocessing run, both the speedup actually achieved and the speedup lost to contention for each lock and to lack of parallel work. The speedup losses are reported in the same units, ''processor-equivalents,'' as the speedup achieved. Both are obtained without having to perform the usual one-process comparison run. We chronicle also a variety of experiments motivated by actual results obtained with our measurement method. The insights into program performance that we gained from these experiments helped us to refine the parts of our programs concerned with communication and synchronization. Ultimately these improvements reduced lock contention to a negligible amount and yielded nearly linear speedup in applications not limited by lack of parallel work. We describe two generally applicable strategies (''code motion out of critical regions'' and ''critical-region fissioning'') for reducing lock contention and one (''lock/variable fusion'') applicable only on certain architectures.« less
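
    The measurement idea, timing each lock acquisition with a high-resolution clock and charging the blocked time to that lock, translates directly to modern shared-memory code. A Python sketch using `time.perf_counter` (the class and its interface are our invention; the original work instrumented locks on a shared-memory multiprocessor of its era):

```python
import threading
import time

class InstrumentedLock:
    """Context-manager lock that accumulates, with a high-resolution
    clock, the total time threads spend blocked waiting to acquire it --
    the quantity the paper charges speedup losses to."""

    def __init__(self):
        self._lock = threading.Lock()
        self._wait = 0.0
        self._wait_guard = threading.Lock()  # protects the counter itself

    def __enter__(self):
        t0 = time.perf_counter()
        self._lock.acquire()                 # may block: that time is contention
        waited = time.perf_counter() - t0
        with self._wait_guard:
            self._wait += waited
        return self

    def __exit__(self, *exc):
        self._lock.release()

    @property
    def total_wait(self):
        with self._wait_guard:
            return self._wait
```

    Dividing each lock's accumulated wait by the wall-clock run time gives the "processor-equivalents" lost to that lock, the same normalization the paper reports.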

  3. Luminosity variations in several parallel auroral arcs before auroral breakup

    NASA Astrophysics Data System (ADS)

    Safargaleev, V.; Lyatsky, W.; Tagirov, V.

    1997-08-01

    The variation of luminosity in two parallel auroral arcs before auroral breakup has been studied using digitised TV data with high temporal and spatial resolution. Intervals when a new arc appears near an already existing one were chosen for analysis. It is shown, for all cases, that the appearance of a new arc is accompanied by fading or disappearance of the other arc. We have named these events out-of-phase events (OP). Another type of luminosity variation is characterised by almost simultaneous enhancement of intensity in both arcs (in-phase events, IP). The characteristic time of IP events is 10-20 s, whereas OP events last about one minute. Sometimes out-of-phase events begin as IP events. Possible mechanisms for OP and IP events are discussed.

  4. Applications of massively parallel computers in telemetry processing

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek A.; Pritchard, Jim; Knoble, Gordon

    1994-01-01

    Telemetry processing refers to the reconstruction of full-resolution raw instrumentation data with the artifacts of space and ground recording and transmission removed. Being the first processing phase of satellite data, this process is also referred to as level-zero processing. This study is aimed at investigating the use of massively parallel computing technology in providing level-zero processing to spaceflights that adhere to the recommendations of the Consultative Committee on Space Data Systems (CCSDS). The workload characteristics of level-zero processing are used to identify processing requirements in high-performance computing systems. An example of level-zero functions on a SIMD MPP, such as the MasPar, is discussed. The requirements in this paper are based in part on the Earth Observing System (EOS) Data and Operation System (EDOS).

  5. Increasing phylogenetic resolution at low taxonomic levels using massively parallel sequencing of chloroplast genomes

    Treesearch

    Matthew Parks; Richard Cronn; Aaron Liston

    2009-01-01

    We reconstruct the infrageneric phylogeny of Pinus from 37 nearly-complete chloroplast genomes (average 109 kilobases each of an approximately 120 kilobase genome) generated using multiplexed massively parallel sequencing. We found that 30/33 ingroup nodes resolved with > 95-percent bootstrap support; this is a substantial improvement relative...

  6. An automated workflow for parallel processing of large multiview SPIM recordings

    PubMed Central

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-01-01

    Summary: Selective Plane Illumination Microscopy (SPIM) allows imaging of developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. Availability and implementation: The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26628585

  7. An automated workflow for parallel processing of large multiview SPIM recordings.

    PubMed

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-04-01

    Selective Plane Illumination Microscopy (SPIM) allows imaging of developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. The code is distributed free and open source under the MIT license (http://opensource.org/licenses/MIT). The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  8. a New Approach for Subway Tunnel Deformation Monitoring: High-Resolution Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Li, J.; Wan, Y.; Gao, X.

    2012-07-01

    With the improvement of the accuracy and efficiency of laser scanning technology, high-resolution terrestrial laser scanning (TLS) can obtain high-precision, densely distributed point clouds and can be applied to high-precision deformation monitoring of subway tunnels, high-speed railway bridges, and other structures. In this paper, a new approach using a point-cloud segmentation method based on vectors of neighboring points and a surface fitting method based on moving least squares was proposed and applied to subway tunnel deformation monitoring in Tianjin, combined with a new high-resolution terrestrial laser scanner (Riegl VZ-400). There were three main procedures. Firstly, a point cloud consisting of several scans was registered by a linearized iterative least squares approach to improve the accuracy of registration, and several control points were acquired by total stations (TS) and then adjusted. Secondly, the registered point cloud was resampled and segmented based on vectors of neighboring points to select suitable points. Thirdly, the selected points were used to fit the subway tunnel surface with a moving least squares algorithm. A series of parallel sections obtained from the temporal series of fitted tunnel surfaces were then compared to analyze the deformation. Finally, the results of the approach in the z direction were compared with a fiber-optic displacement sensor and the results in the x, y directions were compared with TS; the comparison showed accuracy errors in the x, y, z directions of about 1.5 mm, 2 mm, and 1 mm respectively. Therefore the new approach using high-resolution TLS can meet the demands of subway tunnel deformation monitoring.
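
    The moving-least-squares surface fit in the third step can be sketched as follows: for each query location, fit a local plane to nearby cloud points with weights that decay with distance, and evaluate that plane at the query. This toy version (Gaussian weights, plane basis, all names ours) omits the vector-based segmentation and registration steps:

```python
import numpy as np

def mls_height(points, query_xy, radius):
    """Moving-least-squares sketch: weighted local plane fit z = a + b*x + c*y
    around query_xy, evaluated at the query.
    points: (N, 3) array of surface samples; radius sets the Gaussian falloff."""
    d2 = np.sum((points[:, :2] - query_xy)**2, axis=1)
    w = np.exp(-d2 / radius**2)          # distance-decaying weights
    mask = w > 1e-6                      # ignore effectively zero-weight points
    A = np.c_[np.ones(mask.sum()), points[mask, :2]]
    W = w[mask]
    # weighted normal equations: (A^T W A) coef = A^T W z
    coef = np.linalg.solve((A.T * W) @ A, A.T @ (W * points[mask, 2]))
    return coef[0] + coef[1] * query_xy[0] + coef[2] * query_xy[1]
```

    Evaluating this fit along a dense grid of cross-sections, once per epoch, and differencing the resulting surfaces is what turns the raw scans into the millimetre-level deformation estimates reported above.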

  9. The utility of micro-CT and MRI in the assessment of longitudinal growth of liver metastases in a preclinical model of colon carcinoma.

    PubMed

    Pandit, Prachi; Johnston, Samuel M; Qi, Yi; Story, Jennifer; Nelson, Rendon; Johnson, G Allan

    2013-04-01

    Liver is a common site for distal metastases in colon and rectal cancer. Numerous clinical studies have analyzed the relative merits of different imaging modalities for detection of liver metastases. Several exciting new therapies are being investigated in preclinical models. But technical challenges in preclinical imaging make it difficult to translate conclusions from clinical studies to the preclinical environment. This study addresses the technical challenges of preclinical magnetic resonance imaging (MRI) and micro-computed tomography (CT) to enable comparison of state-of-the-art methods for following metastatic liver disease. We optimized two promising preclinical protocols to enable a parallel longitudinal study tracking metastatic human colon carcinoma growth in a mouse model: T2-weighted MRI using two-shot PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) and contrast-enhanced micro-CT using a liposomal contrast agent. Both methods were tailored for high throughput with attention to animal support and anesthesia to limit biological stress. Each modality has its strengths. Micro-CT permitted more rapid acquisition (<10 minutes) with the highest spatial resolution (88-micron isotropic resolution). But detection of metastatic lesions requires the use of a blood pool contrast agent, which could introduce a confound in the evaluation of new therapies. MRI was slower (30 minutes) and had lower anisotropic spatial resolution. But MRI eliminates the need for a contrast agent and the contrast-to-noise between tumor and normal parenchyma was higher, making earlier detection of small lesions possible. Both methods supported a relatively high-throughput, longitudinal study of the development of metastatic lesions. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.

  10. High-resolution, high-throughput imaging with a multibeam scanning electron microscope

    PubMed Central

Eberle, A. L.; Mikula, S.; Schalek, R.; Lichtman, J.; Knothe Tate, M. L.; Zeidler, D.

    2015-01-01

Electron–electron interactions and detector bandwidth limit the maximal imaging speed of single-beam scanning electron microscopes. We use multiple electron beams in a single column and detect secondary electrons in parallel to increase the imaging speed by close to two orders of magnitude and demonstrate imaging for a variety of samples ranging from biological brain tissue to semiconductor wafers. Lay Description: The composition of our world and our bodies on the very small scale has always fascinated people, making them search for ways to make this visible to the human eye. Where light microscopes reach their resolution limit at a certain magnification, electron microscopes can go beyond. But their capability of visualizing extremely small features comes at the cost of a very small field of view. Some of the questions researchers seek to answer today deal with the ultrafine structure of brains, bones or computer chips. Capturing these objects with electron microscopes takes a lot of time – maybe even exceeding the time span of a human being – or new tools that do the job much faster. A new type of scanning electron microscope scans with 61 electron beams in parallel, acquiring 61 adjacent images of the sample in the time a conventional scanning electron microscope captures one of these images. In principle, the multibeam scanning electron microscope’s field of view is 61 times larger and therefore coverage of the sample surface can be accomplished in less time. This enables researchers to think about large-scale projects, for example in the rather new field of connectomics. A very good introduction to imaging a brain at nanometre resolution can be found within course material from Harvard University on http://www.mcb80x.org/# as featured media entitled ‘connectomics’. PMID:25627873

  11. Spatial variability of the Black Sea surface temperature from high resolution modeling and satellite measurements

    NASA Astrophysics Data System (ADS)

    Mizyuk, Artem; Senderov, Maxim; Korotaev, Gennady

    2016-04-01

A large number of numerical ocean models have been implemented for the Black Sea basin during the last two decades, and they reproduce a rather similar structure of synoptic variability of the circulation. Since the 2000s, numerical studies of the mesoscale structure have been carried out using high performance computing (HPC). With the growing capacity of computing resources, it is now possible to reconstruct the Black Sea currents with a spatial resolution of several hundred meters. But how realistic are these results? In the proposed study an attempt is made to understand which spatial scales are actually reproduced by an ocean model of the Black Sea. Simulations are made using a parallel version of NEMO (Nucleus for European Modelling of the Ocean). Two regional configurations with spatial resolutions of 5 km and 2.5 km are described. Comparison of the SST from the two simulations shows a rather qualitative difference in spatial structure. Results of the high resolution simulation are also compared with satellite observations and observation-based products from Copernicus using spatial correlation and spectral analysis. The spatial scales of the correlation functions for simulated and observed SST are rather close to each other and differ markedly from those of the satellite SST reanalysis. The evolution of spectral density for the modelled SST and the reanalysis shows agreement in the time periods of small-scale intensification. Applying spectral analysis directly to satellite measurements is complicated by data gaps. The research leading to these results has received funding from the Russian Science Foundation (project № 15-17-20020).
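The spatial-correlation comparison mentioned in the abstract can be illustrated with a short sketch. This is not the authors' code; it is a generic computation of the 2-D (circular) spatial autocorrelation of an SST anomaly field via the FFT (Wiener–Khinchin theorem), applied here to a synthetic field:

```python
import numpy as np

def spatial_autocorrelation(field):
    """2-D circular spatial autocorrelation of an anomaly field via FFT
    (Wiener-Khinchin theorem); lag (0, 0) is normalized to 1."""
    anomaly = field - field.mean()
    spectrum = np.abs(np.fft.fft2(anomaly)) ** 2   # power spectrum
    corr = np.fft.ifft2(spectrum).real             # circular autocovariance
    return np.fft.fftshift(corr / corr.flat[0])    # normalize, center lag 0

# Synthetic "SST" field: a smooth large-scale pattern plus small-scale noise
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 2 * np.pi, 128), np.linspace(0, np.pi, 128))
sst = np.sin(x) * np.cos(2 * y) + 0.1 * rng.standard_normal((128, 128))
corr = spatial_autocorrelation(sst)
print(corr[64, 64])  # 1.0 at zero lag after fftshift
```

The width of the central peak of `corr` gives a correlation length scale of the kind that can be compared between model output, satellite SST, and reanalysis products.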

  12. CZT drift strip detectors for high energy astrophysics

    NASA Astrophysics Data System (ADS)

    Kuvvetli, I.; Budtz-Jørgensen, C.; Caroli, E.; Auricchio, N.

    2010-12-01

Requirements for X- and gamma-ray detectors for future high energy astrophysics missions include high detection efficiency and good energy resolution as well as fine position sensitivity, even in three dimensions. We report on experimental investigations of the CZT drift strip detector developed at DTU Space. It is operated in the planar transverse field (PTF) mode, with the purpose of demonstrating that the good energy resolution of the CZT drift detector can be combined with the high efficiency of the PTF configuration. Furthermore, we demonstrated and characterized the 3D sensing capabilities of this detector configuration. The CZT drift strip detector (10 mm × 10 mm × 2.5 mm) was characterized both in the standard illumination geometry, the photon parallel field (PPF) configuration, and in the PTF configuration. The detection efficiency and energy resolution were compared for both configurations. The PTF configuration provided a higher efficiency, in agreement with calculations. The detector energy resolution was found to be the same (3 keV FWHM at 122 keV) in both PPF and PTF. The depth sensing capabilities offered by drift strip detectors were investigated by illuminating the detector with a collimated photon beam of 57Co radiation in the PTF configuration. The width (300 μm FWHM at 122 keV) of the measured depth distributions was almost equal to the finite beam size. However, the data indicate that the best achievable depth resolution for the CZT drift detector is 90 μm FWHM at 122 keV and that it is determined by the electronic noise of the setup.

  13. A Coarse-to-Fine Geometric Scale-Invariant Feature Transform for Large Size High Resolution Satellite Image Registration

    PubMed Central

    Chang, Xueli; Du, Siliang; Li, Yingying; Fang, Shenghui

    2018-01-01

Large size high resolution (HR) satellite image matching is a challenging task due to local distortion, repetitive structures, intensity changes and low efficiency. In this paper, a novel matching approach is proposed for large size HR satellite image registration, based on a coarse-to-fine strategy and a geometric scale-invariant feature transform (SIFT). In the coarse matching step, a robust matching method, scale restrict (SR) SIFT, is applied at a low resolution level. The matching results provide geometric constraints which are then used to guide block division and geometric SIFT in the fine matching step. The block matching method overcomes the memory problem, and in geometric SIFT the area constraints help validate the candidate matches and decrease search complexity. To further improve matching efficiency, the proposed matching method is parallelized using OpenMP. Finally, the sensing image is rectified to the coordinate system of the reference image via a Triangulated Irregular Network (TIN) transformation. Experiments are designed to test the performance of the proposed matching method. The experimental results show that the proposed method decreases the matching time and increases the number of matching points while maintaining high registration accuracy. PMID:29702589

  14. Lg Attenuation Anisotropy Across the Western US

    NASA Astrophysics Data System (ADS)

    Phillips, W. S.; Rowe, C. A.; Stead, R. J.; Begnaud, M. L.

    2017-12-01

    The USArray has allowed us to map seismic attenuation of local and regional phases to unprecedented spatial extent and resolution. Following standard mantle Pn velocity anisotropy methods, we have incorporated azimuthal anisotropy into our tomographic inversion of high-frequency Lg amplitudes. The Lg is a crustal shear phase made up of many trapped modes, thus results can be considered to be crustal averages. Azimuthal anisotropy reduces residual variance by just over 10% for 1.5-3 Hz Lg. We observe a median anisotropic variation of 12%, and a high of 50% in the Salton Trough. Low attenuation (high-Q) directions run parallel to topographic fabric and major strike slip faults in tectonically active areas, and often run parallel to mantle shear wave splitting directions in stable regions. Tradeoffs are of concern, and synthetic tests show that elongated attenuation anomalies will produce anisotropy artifacts, but of factors 2-3 times lower than observations. In particular, the strength of a long, narrow high-Q anomaly will trade off with high-Q directions parallel to the long axis, while an elongated low-Q anomaly will trade off with high-Q directions perpendicular to the long axis. We observe an elongated low-Q anomaly associated with the Walker Lane; however, observed high-Q directions run parallel to the long axis of this anomaly, opposite to the tradeoff effect, supporting the anisotropic observation, and implying that the effect may be underestimated. Further, we observe an elongated high-Q anomaly associated with the Great Valley and Sierra Nevada that runs across the long axis, again opposite to the tradeoff effect. This study was performed using waveforms, event locations and phase picks made available by IRIS, NEIC and ANF, and processing was done using semi-automated means, thus this is a technique that can be applied quickly to study crustal anisotropy over large areas when appropriate station density is available.
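The "standard mantle Pn velocity anisotropy methods" cited above conventionally model azimuthal variation with a 2θ cosine term. As a hedged illustration (not the authors' inversion), attenuation observations can be fit with 1/Q(θ) = a0 + a1·cos 2θ + a2·sin 2θ by linear least squares; here on synthetic data with a built-in 20% peak variation:

```python
import numpy as np

# Standard 2-theta azimuthal parametrization, fit to synthetic observations.
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, np.pi, 200)                     # propagation azimuths
true_attn = 0.01 + 0.002 * np.cos(2.0 * (theta - 0.5))   # 20% peak variation
obs = true_attn + 1e-4 * rng.standard_normal(theta.size) # add measurement noise

# Design matrix for 1/Q(theta) = a0 + a1*cos(2t) + a2*sin(2t)
G = np.column_stack([np.ones_like(theta),
                     np.cos(2 * theta), np.sin(2 * theta)])
a0, a1, a2 = np.linalg.lstsq(G, obs, rcond=None)[0]

amplitude = np.hypot(a1, a2)          # strength of the 2-theta variation
peak_az = 0.5 * np.arctan2(a2, a1)    # azimuth of maximum attenuation (low Q)
print(amplitude / a0)                 # fractional anisotropic variation, ~0.2
```

The recovered fractional variation (`amplitude / a0`) corresponds to the "median anisotropic variation of 12%, and a high of 50%" quoted in the abstract; the tomographic case adds path integrals and damping on top of this same parametrization.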

  15. Analysis of very-high-resolution Galileo images of Europa: Implications for small-scale structure and surface evolution

    NASA Astrophysics Data System (ADS)

    Leonard, E. J.; Pappalardo, R. T.; Yin, A.; Prockter, L. M.; Patthoff, D. A.

    2014-12-01

The Galileo Solid State Imager (SSI) recorded nine very high-resolution frames (8 at 12 m/pixel and 1 at 6 m/pixel) during the E12 flyby of Europa in Dec. 1997. To understand the implications for the small-scale structure and evolution of Europa, we mosaicked these frames (observations 12ESMOTTLE01 and 02, incidence ≈18°, emission ≈77°) into their regional context (part of observation 11ESREGMAP01, 220 m/pixel, incidence ≈74°, emission ≈23°), despite their very different viewing and lighting conditions. We created a map of geological units based on morphology, structure, and albedo, along with stereoscopic images where the frames overlapped. The highly diverse units range from high-albedo sub-parallel ridge and grooved terrain, to variegated-albedo hummocky terrain, to low-albedo and relatively smooth terrain. We classified and analyzed the diverse units solely on the basis of the high-resolution image mosaic, prior to comparison with the context image, to obtain an in-depth look at possible surface evolution and underlying formational processes. We infer that some of these units represent different stages and forms of resurfacing, including cryovolcanic and tectonic resurfacing. However, significant morphological variation among units in the region indicates that there are different degrees of resurfacing at work. We have created candidate morphological sequences that provide insight into the conversion of ridged plains to chaotic terrain: generally, a process of subduing formerly sharp features through tectonic modification and/or cryovolcanism. When the map of the high-resolution area is compared to the regional context, features that appear to be a single unit at regional resolution comprise several distinct units at high resolution, and features that appear smooth in the context image show distinct textures. Moreover, in the context image, transitions from ridged units to disrupted units appear gradual; however, the high-resolution images reveal them to be abrupt, suggesting tectonic control of these boundaries. These discrepancies could have important implications for future landed exploration.

  16. A fast mass spring model solver for high-resolution elastic objects

    NASA Astrophysics Data System (ADS)

    Zheng, Mianlun; Yuan, Zhiyong; Zhu, Weixu; Zhang, Guian

    2017-03-01

Real-time simulation of elastic objects is of great importance for computer graphics and virtual reality applications. The fast mass spring model solver can achieve visually realistic simulation in an efficient way. Unfortunately, this method suffers from resolution limitations and a lack of mechanical realism for a surface geometry model, which greatly restricts its application. To tackle these problems, in this paper we propose a fast mass spring model solver for high-resolution elastic objects. First, we project the complex surface geometry model into a set of uniform grid cells serving as cages, using the mean value coordinates method, to reflect its internal structure and mechanical properties. Then, we replace the original Cholesky decomposition method in the fast mass spring model solver with a conjugate gradient method, which makes the solver more efficient for detailed surface geometry models. Finally, we propose a graphics processing unit accelerated parallel algorithm for the conjugate gradient method. Experimental results show that our method can realize efficient deformation simulation of 3D elastic objects with visual realism and physical fidelity, and has great potential for applications in computer animation.
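The solver substitution described above (conjugate gradient in place of a Cholesky factorization) can be sketched generically. This is an illustrative, CPU-only CG for a symmetric positive-definite system, not the paper's GPU implementation; the warm-start comment reflects the usual reason CG suits per-frame simulation of detailed models:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-8, max_iter=200):
    """Plain conjugate gradient for an SPD system A x = b.
    Warm-starting from the previous frame's solution (x0) is what makes
    per-frame CG competitive with a prefactored direct solve."""
    x = x0.copy()
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate direction update
        rs = rs_new
    return x

# Small dense SPD test system (stand-in for the mass-spring system matrix)
rng = np.random.default_rng(2)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)           # SPD and well conditioned
b = rng.standard_normal(50)
x = conjugate_gradient(A, b, np.zeros(50))
print(np.linalg.norm(A @ x - b))        # small residual, well below 1e-6
```

Unlike Cholesky, CG needs only matrix-vector products, which is also why it maps naturally onto a GPU for the high-resolution case.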

  17. Validating MODIS and Sentinel-2 NDVI Products at a Temperate Deciduous Forest Site Using Two Independent Ground-Based Sensors.

    PubMed

    Lange, Maximilian; Dechant, Benjamin; Rebmann, Corinna; Vohland, Michael; Cuntz, Matthias; Doktor, Daniel

    2017-08-11

    Quantifying the accuracy of remote sensing products is a timely endeavor given the rapid increase in Earth observation missions. A validation site for Sentinel-2 products was hence established in central Germany. Automatic multispectral and hyperspectral sensor systems were installed in parallel with an existing eddy covariance flux tower, providing spectral information of the vegetation present at high temporal resolution. Normalized Difference Vegetation Index (NDVI) values from ground-based hyperspectral and multispectral sensors were compared with NDVI products derived from Sentinel-2A and Moderate-resolution Imaging Spectroradiometer (MODIS). The influence of different spatial and temporal resolutions was assessed. High correlations and similar phenological patterns between in situ and satellite-based NDVI time series demonstrated the reliability of satellite-based phenological metrics. Sentinel-2-derived metrics showed better agreement with in situ measurements than MODIS-derived metrics. Dynamic filtering with the best index slope extraction algorithm was nevertheless beneficial for Sentinel-2 NDVI time series despite the availability of quality information from the atmospheric correction procedure.
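For reference, the index being compared across sensors is defined from red and near-infrared reflectance. A minimal sketch follows; the reflectance values are illustrative only (not from the study), and the conventional band pairings (Sentinel-2 B8/B4, MODIS bands 2/1) are background knowledge rather than something stated in the abstract:

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    eps guards against division by zero over dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Illustrative surface reflectance values
print(ndvi(0.45, 0.05))  # ~0.8, dense healthy canopy
print(ndvi(0.25, 0.20))  # ~0.11, sparse cover / soil influence
```

The same formula applies per pixel to satellite bands and per measurement to the ground-based spectrometer channels, which is what makes the cross-sensor comparison possible.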

  18. Validating MODIS and Sentinel-2 NDVI Products at a Temperate Deciduous Forest Site Using Two Independent Ground-Based Sensors

    PubMed Central

    Lange, Maximilian; Rebmann, Corinna; Cuntz, Matthias; Doktor, Daniel

    2017-01-01

    Quantifying the accuracy of remote sensing products is a timely endeavor given the rapid increase in Earth observation missions. A validation site for Sentinel-2 products was hence established in central Germany. Automatic multispectral and hyperspectral sensor systems were installed in parallel with an existing eddy covariance flux tower, providing spectral information of the vegetation present at high temporal resolution. Normalized Difference Vegetation Index (NDVI) values from ground-based hyperspectral and multispectral sensors were compared with NDVI products derived from Sentinel-2A and Moderate-resolution Imaging Spectroradiometer (MODIS). The influence of different spatial and temporal resolutions was assessed. High correlations and similar phenological patterns between in situ and satellite-based NDVI time series demonstrated the reliability of satellite-based phenological metrics. Sentinel-2-derived metrics showed better agreement with in situ measurements than MODIS-derived metrics. Dynamic filtering with the best index slope extraction algorithm was nevertheless beneficial for Sentinel-2 NDVI time series despite the availability of quality information from the atmospheric correction procedure. PMID:28800065

  19. Renal magnetic resonance angiography at 3.0 Tesla using a 32-element phased-array coil system and parallel imaging in 2 directions.

    PubMed

    Fenchel, Michael; Nael, Kambiz; Deshpande, Vibhas S; Finn, J Paul; Kramer, Ulrich; Miller, Stephan; Ruehm, Stefan; Laub, Gerhard

    2006-09-01

The aim of the present study was to assess the feasibility of renal magnetic resonance angiography at 3.0 T using a phased-array coil system with 32 coil elements. Specifically, high parallel imaging factors were used for increased spatial resolution and anatomic coverage of the whole abdomen. Signal-to-noise values and the g-factor distribution of the 32-element coil were examined in phantom studies for the magnetic resonance angiography (MRA) sequence. Eleven volunteers (6 men, median age 30.0 years) were examined on a 3.0-T MR scanner (Magnetom Trio, Siemens Medical Solutions, Malvern, PA) using a 32-element phased-array coil (prototype from In vivo Corp.). Contrast-enhanced 3D-MRA (TR 2.95 milliseconds, TE 1.12 milliseconds, flip angle 25-30 degrees, bandwidth 650 Hz/pixel) was acquired with integrated generalized autocalibrating partially parallel acquisition (GRAPPA) in both the phase- and slice-encoding directions. Images were assessed by 2 independent observers with regard to image quality, noise, and presence of artifacts. Signal-to-noise levels of 22.2 +/- 22.0 and 57.9 +/- 49.0 were measured with (GRAPPA x6) and without parallel imaging, respectively. The mean g-factor of the 32-element coil for GRAPPA with accelerations of 3 and 2 in the phase-encoding and slice-encoding directions, respectively, was 1.61. High image quality was found in 9 of 11 volunteers (2.6 +/- 0.8), with good overall interobserver agreement (k = 0.87). Relatively low image quality with higher noise levels was encountered in 2 volunteers. MRA at 3.0 T using a 32-element phased-array coil is feasible in healthy volunteers. High diagnostic image quality and extended anatomic coverage could be achieved with application of high parallel imaging factors.
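The reported quantities are linked by the textbook parallel-imaging relation SNR_acc = SNR_full / (g · √R). A quick check with the abstract's numbers, illustrative only:

```python
import math

def accelerated_snr(snr_full, g, r_total):
    """Textbook parallel-imaging relation: SNR_acc = SNR_full / (g * sqrt(R))."""
    return snr_full / (g * math.sqrt(r_total))

# Numbers from the abstract: unaccelerated SNR 57.9, mean g-factor 1.61,
# total acceleration R = 3 (phase) x 2 (slice) = 6
print(round(accelerated_snr(57.9, 1.61, 6), 1))  # 14.7
```

The measured accelerated SNR of 22.2 differs from this simple prediction; discrepancies of this kind are expected when a single mean g-factor is applied to a local region-of-interest measurement, since g varies strongly across the field of view.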

  20. A high-throughput, multi-channel photon-counting detector with picosecond timing

    NASA Astrophysics Data System (ADS)

    Lapington, J. S.; Fraser, G. W.; Miller, G. M.; Ashton, T. J. R.; Jarron, P.; Despeisse, M.; Powolny, F.; Howorth, J.; Milnes, J.

    2009-06-01

    High-throughput photon counting with high time resolution is a niche application area where vacuum tubes can still outperform solid-state devices. Applications in the life sciences utilizing time-resolved spectroscopies, particularly in the growing field of proteomics, will benefit greatly from performance enhancements in event timing and detector throughput. The HiContent project is a collaboration between the University of Leicester Space Research Centre, the Microelectronics Group at CERN, Photek Ltd., and end-users at the Gray Cancer Institute and the University of Manchester. The goal is to develop a detector system specifically designed for optical proteomics, capable of high content (multi-parametric) analysis at high throughput. The HiContent detector system is being developed to exploit this niche market. It combines multi-channel, high time resolution photon counting in a single miniaturized detector system with integrated electronics. The combination of enabling technologies; small pore microchannel plate devices with very high time resolution, and high-speed multi-channel ASIC electronics developed for the LHC at CERN, provides the necessary building blocks for a high-throughput detector system with up to 1024 parallel counting channels and 20 ps time resolution. We describe the detector and electronic design, discuss the current status of the HiContent project and present the results from a 64-channel prototype system. In the absence of an operational detector, we present measurements of the electronics performance using a pulse generator to simulate detector events. Event timing results from the NINO high-speed front-end ASIC captured using a fast digital oscilloscope are compared with data taken with the proposed electronic configuration which uses the multi-channel HPTDC timing ASIC.

  1. Optimized 14 + 1 receive coil array and position system for 3D high-resolution MRI of dental and maxillomandibular structures.

    PubMed

    Sedlacik, Jan; Kutzner, Daniel; Khokale, Arun; Schulze, Dirk; Fiehler, Jens; Celik, Turgay; Gareis, Daniel; Smeets, Ralf; Friedrich, Reinhard E; Heiland, Max; Assaf, Alexandre T

    2016-01-01

The purpose of this study was to design, build and test a multielement receive coil array and positioning system optimized for three-dimensional (3D) high-resolution dental and maxillomandibular MRI with high patient comfort. A 14 + 1 coil array and positioning system, allowing easy handling by the technologists, reproducible positioning of the patients and high patient comfort, was tested with three healthy volunteers using a 3.0-T MRI machine (Siemens Skyra; Siemens Medical Solutions, Erlangen, Germany). High-resolution 3D T1 weighted, water-excitation T1 weighted and fat-saturated T2 weighted imaging sequences were scanned, and the 3D image data were reformatted in different orientations and curvatures to aid diagnosis. The high number of receiving coils and the comfortable positioning of the coil array close to the patient's face provided a high signal-to-noise ratio and allowed high-quality, high-resolution 3D image data to be acquired within reasonable scan times, owing to the possibility of parallel image acquisition acceleration. Reformatting the isotropic 3D image data in different views, e.g. panoramic reconstruction, is helpful for diagnosis. The visibility of soft tissues such as the mandibular canal, nutritive canals and periodontal ligaments was exquisite. The optimized MRI receive coil array and positioning system for dental and oral-maxillofacial imaging provides a valuable tool for detecting and diagnosing pathologies in dental and oral-maxillofacial structures while avoiding radiation dose. The high patient comfort achieved by our design is crucial, since image artefacts due to movement, or failure to complete the examination, jeopardize the diagnostic value of MRI examinations.

  2. The optical frequency comb fibre spectrometer

    PubMed Central

    Coluccelli, Nicola; Cassinerio, Marco; Redding, Brandon; Cao, Hui; Laporta, Paolo; Galzerano, Gianluca

    2016-01-01

Optical frequency comb sources provide thousands of precise and accurate optical lines in a single device, enabling the broadband and high-speed detection required in many applications. A main challenge is to parallelize the detection over the widest possible band while bringing the resolution to the single comb-line level. Here we propose a solution based on the combination of a frequency comb source and a fibre spectrometer, exploiting all-fibre technology. Our system allows for simultaneous measurement of 500 isolated comb lines over a span of 0.12 THz in a single acquisition; arbitrarily larger spans are demonstrated (3,500 comb lines over 0.85 THz) by performing sequential acquisitions. The potential for precision measurements is proved by spectroscopy of acetylene at 1.53 μm. Being based on all-fibre technology, our system is inherently low-cost and lightweight, and may lead to the development of a new class of broadband high-resolution spectrometers. PMID:27694981

  3. Low-temperature THz time domain waveguide spectrometer with butt-coupled emitter and detector crystal.

    PubMed

    Qiao, W; Stephan, D; Hasselbeck, M; Liang, Q; Dekorsy, T

    2012-08-27

    A compact high-resolution THz time-domain waveguide spectrometer that is operated inside a cryostat is demonstrated. A THz photo-Dember emitter and a ZnTe electro-optic detection crystal are directly attached to a parallel copper-plate waveguide. This allows the THz beam to be excited and detected entirely inside the cryostat, obviating the need for THz-transparent windows or external THz mirrors. Since no external bias for the emitter is required, no electric feed-through into the cryostat is necessary. Using asynchronous optical sampling, high resolution THz spectra are obtained in the frequency range from 0.2 to 2.0 THz. The THz emission from the photo-Dember emitter and the absorption spectrum of 1,2-dicyanobenzene film are measured as a function of temperature. An absorption peak around 750 GHz of 1,2-dicyanobenzene displays a blue shift with increasing temperature.

  4. Single-cell imaging tools for brain energy metabolism: a review

    PubMed Central

    San Martín, Alejandro; Sotelo-Hitschfeld, Tamara; Lerchundi, Rodrigo; Fernández-Moncada, Ignacio; Ceballo, Sebastian; Valdebenito, Rocío; Baeza-Lehnert, Felipe; Alegría, Karin; Contreras-Baeza, Yasna; Garrido-Gerter, Pamela; Romero-Gómez, Ignacio; Barros, L. Felipe

    2014-01-01

    Abstract. Neurophotonics comes to light at a time in which advances in microscopy and improved calcium reporters are paving the way toward high-resolution functional mapping of the brain. This review relates to a parallel revolution in metabolism. We argue that metabolism needs to be approached both in vitro and in vivo, and that it does not just exist as a low-level platform but is also a relevant player in information processing. In recent years, genetically encoded fluorescent nanosensors have been introduced to measure glucose, glutamate, ATP, NADH, lactate, and pyruvate in mammalian cells. Reporting relative metabolite levels, absolute concentrations, and metabolic fluxes, these sensors are instrumental for the discovery of new molecular mechanisms. Sensors continue to be developed, which together with a continued improvement in protein expression strategies and new imaging technologies, herald an exciting era of high-resolution characterization of metabolism in the brain and other organs. PMID:26157964

  5. SOLAR WIND TURBULENCE FROM MHD TO SUB-ION SCALES: HIGH-RESOLUTION HYBRID SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franci, Luca; Verdini, Andrea; Landi, Simone

    2015-05-10

We present results from a high-resolution and large-scale hybrid (fluid electrons and particle-in-cell protons) two-dimensional numerical simulation of decaying turbulence. Two distinct spectral regions (separated by a smooth break at proton scales) develop with clear power-law scaling, each one occupying about a decade in wavenumbers. The simulation results simultaneously exhibit several properties of the observed solar wind fluctuations: spectral indices of the magnetic, kinetic, and residual energy spectra in the magnetohydrodynamic (MHD) inertial range along with a flattening of the electric field spectrum, an increase in magnetic compressibility, and a strong coupling of the cascade with the density and the parallel component of the magnetic fluctuations at sub-proton scales. Our findings support the interpretation that in the solar wind, large-scale MHD fluctuations naturally evolve beyond proton scales into a turbulent regime that is governed by the generalized Ohm’s law.

  6. Solar Wind Turbulence from MHD to Sub-ion Scales: High-resolution Hybrid Simulations

    NASA Astrophysics Data System (ADS)

    Franci, Luca; Verdini, Andrea; Matteini, Lorenzo; Landi, Simone; Hellinger, Petr

    2015-05-01

    We present results from a high-resolution and large-scale hybrid (fluid electrons and particle-in-cell protons) two-dimensional numerical simulation of decaying turbulence. Two distinct spectral regions (separated by a smooth break at proton scales) develop with clear power-law scaling, each one occupying about a decade in wavenumbers. The simulation results simultaneously exhibit several properties of the observed solar wind fluctuations: spectral indices of the magnetic, kinetic, and residual energy spectra in the magnetohydrodynamic (MHD) inertial range along with a flattening of the electric field spectrum, an increase in magnetic compressibility, and a strong coupling of the cascade with the density and the parallel component of the magnetic fluctuations at sub-proton scales. Our findings support the interpretation that in the solar wind, large-scale MHD fluctuations naturally evolve beyond proton scales into a turbulent regime that is governed by the generalized Ohm’s law.

  7. Development of a stereo analysis algorithm for generating topographic maps using interactive techniques of the MPP

    NASA Technical Reports Server (NTRS)

    Strong, James P.

    1987-01-01

    A local area matching algorithm was developed on the Massively Parallel Processor (MPP). It is an iterative technique that first matches coarse or low resolution areas and at each iteration performs matches of higher resolution. Results so far show that when good matches are possible in the two images, the MPP algorithm matches corresponding areas as well as a human observer. To aid in developing this algorithm, a control or shell program was developed for the MPP that allows interactive experimentation with various parameters and procedures to be used in the matching process. (This would not be possible without the high speed of the MPP). With the system, optimal techniques can be developed for different types of matching problems.
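The coarse-to-fine idea behind the MPP algorithm (match at low resolution, then refine the match at higher resolution over a small search window) can be sketched serially. This is an illustrative sum-of-squared-differences block matcher on a synthetic image pair, not the MPP implementation:

```python
import numpy as np

def downsample(img):
    """One pyramid level: 2x2 block averaging."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def best_match(img, tmpl, center=None, radius=None):
    """Exhaustive SSD search; optionally restricted to a window around a
    coarse estimate -- the refinement step of coarse-to-fine matching."""
    H, W = img.shape
    h, w = tmpl.shape
    rows, cols = range(H - h + 1), range(W - w + 1)
    if center is not None:
        rows = range(max(0, center[0] - radius), min(H - h + 1, center[0] + radius + 1))
        cols = range(max(0, center[1] - radius), min(W - w + 1, center[1] + radius + 1))
    best, pos = np.inf, (0, 0)
    for i in rows:
        for j in cols:
            ssd = ((img[i:i + h, j:j + w] - tmpl) ** 2).sum()
            if ssd < best:
                best, pos = ssd, (i, j)
    return pos

rng = np.random.default_rng(3)
scene = rng.random((64, 64))
tmpl = scene[20:36, 28:44]                                # true offset (20, 28)
coarse = best_match(downsample(scene), downsample(tmpl))  # low-resolution match
fine = best_match(scene, tmpl,
                  center=(coarse[0] * 2, coarse[1] * 2), radius=2)
print(fine)  # (20, 28)
```

The coarse pass shrinks the full-resolution search from the whole image to a few candidate offsets, which is the property that made the iterative scheme practical; on the MPP, each candidate offset would additionally be evaluated in parallel across the processor array.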

  8. Algorithmic trends in computational fluid dynamics; The Institute for Computer Applications in Science and Engineering (ICASE)/LaRC Workshop, NASA Langley Research Center, Hampton, VA, US, Sep. 15-17, 1991

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y. (Editor); Kumar, A. (Editor); Salas, M. D. (Editor)

    1993-01-01

The purpose here is to assess the state of the art in the areas of numerical analysis that are particularly relevant to computational fluid dynamics (CFD), to identify promising new developments in various areas of numerical analysis that will impact CFD, and to establish a long-term perspective focusing on opportunities and needs. Overviews are given of discretization schemes, computational fluid dynamics, algorithmic trends in CFD for aerospace flow field calculations, simulation of compressible viscous flow, and massively parallel computation. Also discussed are acceleration methods, spectral and high-order methods, multi-resolution and subcell resolution schemes, and inherently multidimensional schemes.

  9. Highly multiplexed subcellular RNA sequencing in situ

    PubMed Central

    Lee, Je Hyuk; Daugharthy, Evan R.; Scheiman, Jonathan; Kalhor, Reza; Ferrante, Thomas C.; Yang, Joyce L.; Terry, Richard; Jeanty, Sauveur S. F.; Li, Chao; Amamoto, Ryoji; Peters, Derek T.; Turczyk, Brian M.; Marblestone, Adam H.; Inverso, Samuel A.; Bernard, Amy; Mali, Prashant; Rios, Xavier; Aach, John; Church, George M.

    2014-01-01

    Understanding the spatial organization of gene expression with single nucleotide resolution requires localizing the sequences of expressed RNA transcripts within a cell in situ. Here we describe fluorescent in situ RNA sequencing (FISSEQ), in which stably cross-linked cDNA amplicons are sequenced within a biological sample. Using 30-base reads from 8,742 genes in situ, we examined RNA expression and localization in human primary fibroblasts using a simulated wound healing assay. FISSEQ is compatible with tissue sections and whole mount embryos, and reduces the limitations of optical resolution and noisy signals on single molecule detection. Our platform enables massively parallel detection of genetic elements, including gene transcripts and molecular barcodes, and can be used to investigate cellular phenotype, gene regulation, and environment in situ. PMID:24578530

  10. Fiber-optic dosimeters for radiation therapy

    NASA Astrophysics Data System (ADS)

    Li, Enbang; Archer, James

    2017-10-01

    According to the figures provided by the World Health Organization, cancer is a leading cause of death worldwide, accounting for 8.8 million deaths in 2015. Radiation therapy, which uses x-rays to destroy or injure cancer cells, has become one of the most important modalities for treating primary or advanced cancer. The newly developed microbeam radiation therapy (MRT), which uses highly collimated, quasi-parallel arrays of x-ray microbeams (typically 50 μm wide and separated by 400 μm) produced by synchrotron sources, represents a new paradigm in radiotherapy and has shown great promise in pre-clinical studies on different animal models. Measurements of the absorbed dose distribution of microbeams are vitally important for clinical acceptance of MRT and for developing quality assurance systems for MRT, and hence pose a challenging and important task for radiation dosimetry. On the other hand, during traditional LINAC-based radiotherapy and breast cancer brachytherapy, skin dose measurements and treatment planning also require a high spatial resolution, tissue equivalent, on-line dosimeter that is both economical and highly reliable. Such a dosimeter currently does not exist and remains a challenge in the development of radiation dosimetry. High resolution, water equivalent, optical and passive x-ray dosimeters have been developed and constructed by using plastic scintillators and optical fibers. The dosimeters have peak edge-on spatial resolutions ranging from 50 to 500 microns in one dimension, with a 10 micron resolution dosimeter under development. The developed fiber-optic dosimeters have been tested with both LINAC and synchrotron x-ray beams. This work demonstrates that water-equivalent and high spatial resolution radiation detection can be achieved with scintillators and optical fiber systems. Among other advantages, the developed fiber-optic probes are also passive, energy independent, and radiation hard.

  11. Regional Climate Simulation with a Variable Resolution Stretched Grid GCM: The Regional Down-Scaling Effects

    NASA Technical Reports Server (NTRS)

    Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Suarez, Max; Sawyer, William; Govindaraju, Ravi C.

    1999-01-01

    The results obtained with the variable resolution stretched grid (SG) GEOS GCM (Goddard Earth Observing System General Circulation Model) are discussed, with emphasis on the regional down-scaling effects and their dependence on the stretched grid design and parameters. A variable resolution SG-GCM and SG-DAS using a global stretched grid with fine resolution over an area of interest is a viable new approach to REGIONAL and subregional CLIMATE studies and applications. The stretched grid approach is an ideal tool for representing regional to global scale interactions. It is an alternative to the widely used nested grid approach introduced a decade ago as a pioneering step in regional climate modeling. The GEOS SG-GCM is used for simulations of the anomalous U.S. climate events of the 1988 drought and 1993 flood, with enhanced regional resolution. The height, low-level jet, precipitation and other diagnostic patterns are successfully simulated and show efficient down-scaling over the area of interest, the U.S. An imitation of the nested grid approach is performed using the developed SG-DAS (Data Assimilation System) that incorporates the SG-GCM. The SG-DAS is run with data withheld over the area of interest. The design imitates the nested grid framework with boundary conditions provided from analyses. No boundary condition buffer is needed in this case due to the global domain of integration used for the SG-GCM and SG-DAS. Experiments based on the newly developed versions of the GEOS SG-GCM and SG-DAS, with finer 0.5 degree (and higher) regional resolution, are briefly discussed. The major aspects of parallelization of the SG-GCM code are outlined.
The KEY OBJECTIVES of the study are: 1) obtaining an efficient DOWN-SCALING over the area of interest with fine and very fine resolution; 2) providing CONSISTENT interactions between regional and global scales including the consistent representation of regional ENERGY and WATER BALANCES; 3) providing a high computational efficiency for future SG-GCM and SG-DAS versions using PARALLEL codes.
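
    The variable-resolution idea behind such a stretched grid can be illustrated with a toy spacing function: fine spacing inside a window around the area of interest, relaxing smoothly to coarse spacing elsewhere. This is a generic sketch, not the actual GEOS-GCM grid design; all parameter values below are illustrative assumptions.

```python
import numpy as np

def stretched_grid(n, center=40.0, half_width=15.0, fine=0.5, coarse=4.0):
    """Build a 1-D latitude grid whose spacing (degrees) is `fine` inside
    a window around `center` and relaxes smoothly to `coarse` outside it."""
    lats = [-90.0]
    while lats[-1] < 90.0 and len(lats) < n:
        d = abs(lats[-1] - center)
        # blend factor: ~0 inside the fine window, ~1 far away from it
        w = 0.5 * (1.0 + np.tanh((d - half_width) / 5.0))
        lats.append(lats[-1] + fine + (coarse - fine) * w)
    return np.array(lats)
```

    Because the grid remains global, no boundary-condition buffer is needed, which is exactly the advantage the abstract claims over nested grids.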

  12. A high-resolution, nucleosome position map of C. elegans reveals a lack of universal sequence-dictated positioning

    PubMed Central

    Valouev, Anton; Ichikawa, Jeffrey; Tonthat, Thaisan; Stuart, Jeremy; Ranade, Swati; Peckham, Heather; Zeng, Kathy; Malek, Joel A.; Costa, Gina; McKernan, Kevin; Sidow, Arend; Fire, Andrew; Johnson, Steven M.

    2008-01-01

    Using the massively parallel technique of sequencing by oligonucleotide ligation and detection (SOLiD; Applied Biosystems), we have assessed the in vivo positions of more than 44 million putative nucleosome cores in the multicellular genetic model organism Caenorhabditis elegans. These analyses provide a global view of the chromatin architecture of a multicellular animal at extremely high density and resolution. While we observe some degree of reproducible positioning throughout the genome in our mixed stage population of animals, we note that the major chromatin feature in the worm is a diversity of allowed nucleosome positions at the vast majority of individual loci. While absolute positioning of nucleosomes can vary substantially, relative positioning of nucleosomes (in a repeated array structure likely to be maintained at least in part by steric constraints) appears to be a significant property of chromatin structure. The high density of nucleosomal reads enabled a substantial extension of previous analysis describing the usage of individual oligonucleotide sequences along the span of the nucleosome core and linker. We release this data set, via the UCSC Genome Browser, as a resource for the high-resolution analysis of chromatin conformation and DNA accessibility at individual loci within the C. elegans genome. PMID:18477713

  13. An efficient photogrammetric stereo matching method for high-resolution images

    NASA Astrophysics Data System (ADS)

    Li, Yingsong; Zheng, Shunyi; Wang, Xiaonan; Ma, Hao

    2016-12-01

    Stereo matching of high-resolution images is a great challenge in photogrammetry. The main difficulty is the enormous processing workload, which involves substantial computing time and memory consumption. In recent years, the semi-global matching (SGM) method has been a promising approach for solving stereo problems on different data sets. However, the time complexity and memory demand of SGM are proportional to the scale of the images involved, which leads to very high consumption when dealing with large images. To address this, this paper presents an efficient hierarchical matching strategy based on the SGM algorithm using single instruction multiple data (SIMD) instructions and structured parallelism in the central processing unit. The proposed method can significantly reduce the computational time and memory required for large-scale stereo matching. The three-dimensional (3D) surface is reconstructed by triangulating and fusing redundant reconstruction information from multi-view matching results. Finally, three high-resolution aerial data sets are used to evaluate our improvement. Furthermore, precise airborne laser scanner data of one data set are used to measure the accuracy of our reconstruction. Experimental results demonstrate that our method achieves remarkable time and memory savings while maintaining the density and precision of the derived 3D point cloud.
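
    The SGM cost-aggregation step that dominates this workload can be sketched for a single path direction (left-to-right) in plain NumPy. This is a minimal illustration of the standard SGM recurrence, not the paper's SIMD implementation; the penalty values P1 and P2 are illustrative.

```python
import numpy as np

def aggregate_left_to_right(cost, P1=10, P2=120):
    """One-path SGM aggregation over a cost volume of shape (H, W, D):
    L(p, d) = C(p, d) + min(L(p-1, d), L(p-1, d+/-1) + P1,
                            min_k L(p-1, k) + P2) - min_k L(p-1, k)."""
    H, W, D = cost.shape
    L = np.empty_like(cost, dtype=np.float64)
    L[:, 0] = cost[:, 0]
    for x in range(1, W):
        prev = L[:, x - 1]                                # (H, D)
        prev_min = prev.min(axis=1, keepdims=True)        # (H, 1)
        same = prev
        up = np.full_like(prev, np.inf); up[:, 1:] = prev[:, :-1] + P1
        down = np.full_like(prev, np.inf); down[:, :-1] = prev[:, 1:] + P1
        jump = prev_min + P2
        best = np.minimum(np.minimum(same, up), np.minimum(down, jump))
        # subtracting prev_min bounds the growth of the aggregated cost
        L[:, x] = cost[:, x] + best - prev_min
    return L
```

    Full SGM sums such aggregations over several path directions; the per-pixel independence along each scanline is what makes the SIMD and multi-core parallelization described above effective.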

  14. A high-resolution peak fractionation approach for streamlined screening of nuclear-factor-E2-related factor-2 activators in Salvia miltiorrhiza.

    PubMed

    Zhang, Hui; Luo, Li-Ping; Song, Hui-Peng; Hao, Hai-Ping; Zhou, Ping; Qi, Lian-Wen; Li, Ping; Chen, Jun

    2014-01-24

    Generation of a high-purity fraction library for efficiently screening active compounds from natural products is challenging because of their chemical diversity and complex matrices. In this work, a strategy combining high-resolution peak fractionation (HRPF) with a cell-based assay was proposed for target screening of bioactive constituents from natural products. In this approach, peak fractionation was conducted under chromatographic conditions optimized for high-resolution separation of the natural product extract. The HRPF approach was automatically performed according to the predefinition of certain peaks based on their retention times from a reference chromatographic profile. The corresponding HRPF database was collected with a parallel mass spectrometer to ensure purity and characterize the structures of compounds in the various fractions. Using this approach, a set of 75 peak fractions on the microgram scale was generated from 4 mg of the extract of Salvia miltiorrhiza. After screening by an ARE-luciferase reporter gene assay, 20 diterpene quinones were selected and identified, and 16 of these compounds were reported to possess novel Nrf2 activation activity. Compared with conventional fixed-time interval fractionation, the HRPF approach could significantly improve the efficiency of bioactive compound discovery and facilitate the uncovering of minor active components. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Template-directed atomically precise self-organization of perfectly ordered parallel cerium silicide nanowire arrays on Si(110)-16 × 2 surfaces.

    PubMed

    Hong, Ie-Hong; Liao, Yung-Cheng; Tsai, Yung-Feng

    2013-11-05

    The perfectly ordered parallel arrays of periodic Ce silicide nanowires can self-organize with atomic precision on single-domain Si(110)-16 × 2 surfaces. The growth evolution of self-ordered parallel Ce silicide nanowire arrays is investigated over a broad range of Ce coverages on single-domain Si(110)-16 × 2 surfaces by scanning tunneling microscopy (STM). Three different types of well-ordered parallel arrays, consisting of uniformly spaced and atomically identical Ce silicide nanowires, are self-organized through the heteroepitaxial growth of Ce silicides on a long-range grating-like 16 × 2 reconstruction at the deposition of various Ce coverages. Each atomically precise Ce silicide nanowire consists of a bundle of chains and rows with different atomic structures. The atomic-resolution dual-polarity STM images reveal that the interchain coupling leads to the formation of the registry-aligned chain bundles within individual Ce silicide nanowire. The nanowire width and the interchain coupling can be adjusted systematically by varying the Ce coverage on a Si(110) surface. This natural template-directed self-organization of perfectly regular parallel nanowire arrays allows for the precise control of the feature size and positions within ±0.2 nm over a large area. Thus, it is a promising route to produce parallel nanowire arrays in a straightforward, low-cost, high-throughput process.

  16. Template-directed atomically precise self-organization of perfectly ordered parallel cerium silicide nanowire arrays on Si(110)-16 × 2 surfaces

    PubMed Central

    2013-01-01

    The perfectly ordered parallel arrays of periodic Ce silicide nanowires can self-organize with atomic precision on single-domain Si(110)-16 × 2 surfaces. The growth evolution of self-ordered parallel Ce silicide nanowire arrays is investigated over a broad range of Ce coverages on single-domain Si(110)-16 × 2 surfaces by scanning tunneling microscopy (STM). Three different types of well-ordered parallel arrays, consisting of uniformly spaced and atomically identical Ce silicide nanowires, are self-organized through the heteroepitaxial growth of Ce silicides on a long-range grating-like 16 × 2 reconstruction at the deposition of various Ce coverages. Each atomically precise Ce silicide nanowire consists of a bundle of chains and rows with different atomic structures. The atomic-resolution dual-polarity STM images reveal that the interchain coupling leads to the formation of the registry-aligned chain bundles within individual Ce silicide nanowire. The nanowire width and the interchain coupling can be adjusted systematically by varying the Ce coverage on a Si(110) surface. This natural template-directed self-organization of perfectly regular parallel nanowire arrays allows for the precise control of the feature size and positions within ±0.2 nm over a large area. Thus, it is a promising route to produce parallel nanowire arrays in a straightforward, low-cost, high-throughput process. PMID:24188092

  17. Online High Temporal Resolution Measurement of Atmospheric Sulfate and Sulfur Trioxide with a Light Emitting Diode and Liquid Core Waveguide-Based Sensor.

    PubMed

    Tian, Yong; Shen, Huiyan; Wang, Qiang; Liu, Aifeng; Gao, Wei; Chen, Xu-Wei; Chen, Ming-Li; Zhao, Zongshan

    2018-06-13

    High temporal resolution component analysis is still a great challenge at the frontier of atmospheric aerosol research. Here, an online high time resolution method for monitoring soluble sulfate and sulfur trioxide in atmospheric aerosols was developed by integrating a membrane-based parallel plate denuder, a particle collector, and a liquid waveguide capillary cell into a flow injection analysis system. The BaCl2 solution (containing HCl, glycerin, and ethanol) was enabled to quantitatively transform sulfate into a well-distributed BaSO4 suspension for turbidimetric detection. The time resolution for monitoring the soluble sulfate and sulfur trioxide was 15 h−1. The limits of detection were 86 and 7.3 μg L−1 (S/N = 3) with a 20 and 200 μL SO42− solution injection, respectively. Both the interday and intraday precision values (relative standard deviation) were less than 6.0%. The analytical results of the certified reference materials (GBW(E)08026 and GNM-M07117-2013) were identical to the certified values (no significant difference at a 95% confidence level). The validity and practicability of the developed device were also evaluated on a firecracker day and a routine day, clearly revealing the continuous variation in atmospheric sulfate and sulfur trioxide in both interday and intraday studies.

  18. High-Speed Microscale Optical Tracking Using Digital Frequency-Domain Multiplexing.

    PubMed

    Maclachlan, Robert A; Riviere, Cameron N

    2009-06-01

    Position-sensitive detectors (PSDs), or lateral-effect photodiodes, are commonly used for high-speed, high-resolution optical position measurement. This paper describes the instrument design for multidimensional position and orientation measurement based on the simultaneous position measurement of multiple modulated sources using frequency-domain-multiplexed (FDM) PSDs. The important advantages of this optical configuration in comparison with laser/mirror combinations are that it has a large angular measurement range and allows the use of a probe that is small in comparison with the measurement volume. We review PSD characteristics and quantitative resolution limits, consider the lock-in amplifier measurement system as a communication link, discuss the application of FDM to PSDs, and make comparisons with time-domain techniques. We consider the phase-sensitive detector as a multirate DSP problem, explore parallels with Fourier spectral estimation and filter banks, discuss how to choose the modulation frequencies and sample rates that maximize channel isolation under design constraints, and describe efficient digital implementation. We also discuss hardware design considerations, sensor calibration, probe construction and calibration, and 3-D measurement by triangulation using two sensors. As an example, we characterize the resolution, speed, and accuracy of an instrument that measures the position and orientation of a 10 mm × 5 mm probe in 5 degrees of freedom (DOF) over a 30-mm cube with 4-μm peak-to-peak resolution at 1-kHz sampling.
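
    The core of frequency-domain multiplexing, recovering each modulated source's contribution from one summed detector signal by synchronous (lock-in) detection, can be sketched as follows. The frequencies, amplitudes, and sample rate are illustrative assumptions, chosen so that each reference completes an integer number of periods in the analysis window (the channel-isolation condition the paper discusses).

```python
import numpy as np

fs = 100_000.0            # sample rate (Hz), assumed
f1, f2 = 1000.0, 1300.0   # modulation frequencies (Hz), assumed
N = 10_000                # window length: 0.1 s -> 100 and 130 full periods
t = np.arange(N) / fs

# Simulated PSD sum signal: two sources with amplitudes a1 and a2
a1, a2 = 0.7, 0.3
sig = a1 * np.sin(2 * np.pi * f1 * t) + a2 * np.sin(2 * np.pi * f2 * t)

def lock_in(signal, f):
    """Recover the amplitude at frequency f by multiplying with quadrature
    references and averaging (the phase-sensitive detection step)."""
    i = 2.0 * np.mean(signal * np.sin(2 * np.pi * f * t))
    q = 2.0 * np.mean(signal * np.cos(2 * np.pi * f * t))
    return np.hypot(i, q)
```

    With integer periods in the window, the two channels are exactly orthogonal and each amplitude is recovered independently of the other, which is why the choice of modulation frequencies relative to the sample rate matters.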

  19. Three-dimensional high-resolution ultrasonic imaging of the eye

    NASA Astrophysics Data System (ADS)

    Silverman, Ronald H.; Lizzi, Frederick L.; Kalisz, Andrew; Coleman, D. J.

    2000-04-01

    Very high frequency (50 MHz) ultrasound provides spatial resolution on the order of 30 microns axially by 60 microns laterally. Our aim was to reconstruct the three-dimensional anatomy of the eye in the full detail permitted by this fine-scale transducer resolution. We scanned the eyes of human subjects and anesthetized rabbits in a sequence of parallel planes 50 microns apart. Within each scan plane, vectors were also spaced 50 microns apart. Radio-frequency data were digitized at a rate of 250 MHz or higher. A series of spectrum analysis and segmentation algorithms was applied to data acquired in each plane; the outputs of these procedures were used to produce color-coded 3-D representations of the sclera, iris and ciliary processes to enhance 3-D volume rendered presentation. We visualized the radial pattern of individual ciliary processes in humans and rabbits and the geodetic web of supporting connections between the ciliary processes and iris that exist only in the rabbit. By acquiring data such that adjacent vectors and planes are separated by less than the transducer's lateral resolution, we were able to visualize structures, such as the ciliary web, that had not been seen before in-vivo. Our techniques offer the possibility of high-precision imaging and measurement of anterior segment structures. This would be relevant in monitoring of glaucoma, tumors, foreign bodies and other clinical conditions.

  20. "One-Stop Shop": Free-Breathing Dynamic Contrast-Enhanced Magnetic Resonance Imaging of the Kidney Using Iterative Reconstruction and Continuous Golden-Angle Radial Sampling.

    PubMed

    Riffel, Philipp; Zoellner, Frank G; Budjan, Johannes; Grimm, Robert; Block, Tobias K; Schoenberg, Stefan O; Hausmann, Daniel

    2016-11-01

    The purpose of the present study was to evaluate a recently introduced technique for free-breathing dynamic contrast-enhanced renal magnetic resonance imaging (MRI) applying a combination of radial k-space sampling, parallel imaging, and compressed sensing. The technique allows retrospective reconstruction of 2 motion-suppressed sets of images from the same acquisition: one with lower temporal resolution but improved image quality for subjective image analysis, and one with high temporal resolution for quantitative perfusion analysis. In this study, 25 patients underwent a kidney examination, including a prototypical fat-suppressed, golden-angle radial stack-of-stars T1-weighted 3-dimensional spoiled gradient-echo examination (GRASP) performed after contrast agent administration during free breathing. Images were reconstructed at temporal resolutions of 55 spokes per frame (6.2 seconds) and 13 spokes per frame (1.5 seconds). The GRASP images were evaluated by 2 blinded radiologists. First, the reconstructions with low temporal resolution underwent subjective image analysis: the radiologists assessed the best arterial phase and the best renal phase and rated the image quality score for each patient on a 5-point Likert-type scale. In addition, the diagnostic confidence was rated according to a 3-point Likert-type scale. Similarly, respiratory motion artifacts and streak artifacts were rated according to a 3-point Likert-type scale. Then, the reconstructions with high temporal resolution were analyzed with a voxel-by-voxel deconvolution approach to determine the renal plasma flow, and the results were compared with values reported in previous literature. Reader 1 and reader 2 rated the overall image quality score for the best arterial phase and the best renal phase with a median image quality score of 4 (good image quality) for both phases, respectively. A high diagnostic confidence (median score of 3) was observed. There were no respiratory motion artifacts in any of the patients. Streak artifacts were present in all of the patients, but did not compromise diagnostic image quality. The estimated renal plasma flow was slightly higher (295 ± 78 mL/100 mL per minute) than reported in previous MRI-based studies, but also closer to the physiologically expected value. Dynamic, motion-suppressed contrast-enhanced renal MRI can be performed in high diagnostic quality during free breathing using a combination of golden-angle radial sampling, parallel imaging, and compressed sensing. Both morphologic and quantitative functional information can be acquired within a single acquisition.
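
    Voxel-by-voxel deconvolution of this kind can be sketched generically: the tissue curve is modeled as the arterial input function (AIF) convolved with an impulse response, and perfusion is read off the deconvolved response's peak. The Tikhonov-regularized inversion below is an illustrative stand-in, not the study's exact algorithm.

```python
import numpy as np

def flow_by_deconvolution(aif, tissue, dt, lam=1e-6):
    """Estimate perfusion for one voxel: tissue(t) = dt * (AIF * IRF),
    solve for the impulse response IRF with a small Tikhonov penalty,
    and take the peak of the IRF as the flow estimate."""
    n = len(aif)
    # lower-triangular Toeplitz matrix implementing discrete convolution
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    irf = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ tissue)
    return irf.max()
```

    In practice the regularization weight trades noise amplification against bias in the recovered peak, which is one reason deconvolution-based flow values differ between studies.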

  1. Electrical capacitance volume tomography with high contrast dielectrics using a cuboid sensor geometry

    NASA Astrophysics Data System (ADS)

    Nurge, Mark A.

    2007-05-01

    An electrical capacitance volume tomography system has been created for use with a new image reconstruction algorithm capable of imaging high contrast dielectric distributions. The electrode geometry consists of two 4 × 4 parallel planes of copper conductors connected through custom built switch electronics to a commercially available capacitance to digital converter. Typical electrical capacitance tomography (ECT) systems rely solely on mutual capacitance readings to reconstruct images of dielectric distributions. This paper presents a method of reconstructing images of high contrast dielectric materials using only the self-capacitance measurements. By constraining the unknown dielectric material to one of two values, the inverse problem is no longer ill-determined. Resolution becomes limited only by the accuracy and resolution of the measurement circuitry. Images were reconstructed using this method with both synthetic and real data acquired using an aluminium structure inserted at different positions within the sensing region. Comparisons with standard two-dimensional ECT systems highlight the capabilities and limitations of the electronics and reconstruction algorithm.
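
    The benefit of restricting the unknown permittivity to two values can be illustrated with a toy linear model. Everything below is a hypothetical sketch (real ECT forward models are nonlinear and sensor-specific): readings are simulated as c = A x, an unconstrained least-squares estimate is formed, and the result is projected onto the two allowed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward model: self-capacitance readings c = A @ x,
# where x holds per-voxel permittivities restricted to two values.
n_meas, n_vox = 32, 20
A = rng.random((n_meas, n_vox))
eps_lo, eps_hi = 1.0, 10.0          # background vs. inclusion (assumed)
x_true = np.where(rng.random(n_vox) < 0.3, eps_hi, eps_lo)
c = A @ x_true

# Unconstrained least squares, then projection onto the two allowed values:
x_ls, *_ = np.linalg.lstsq(A, c, rcond=None)
x_bin = np.where(np.abs(x_ls - eps_hi) < np.abs(x_ls - eps_lo),
                 eps_hi, eps_lo)
```

    The binary constraint is what removes the ill-determinedness the abstract refers to: the candidate solutions form a finite set, so measurement accuracy, not the inversion, becomes the limiting factor.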

  2. Electrical capacitance volume tomography of high contrast dielectrics using a cuboid geometry

    NASA Astrophysics Data System (ADS)

    Nurge, Mark A.

    An Electrical Capacitance Volume Tomography system has been created for use with a new image reconstruction algorithm capable of imaging high contrast dielectric distributions. The electrode geometry consists of two 4 x 4 parallel planes of copper conductors connected through custom built switch electronics to a commercially available capacitance to digital converter. Typical electrical capacitance tomography (ECT) systems rely solely on mutual capacitance readings to reconstruct images of dielectric distributions. This dissertation presents a method of reconstructing images of high contrast dielectric materials using only the self capacitance measurements. By constraining the unknown dielectric material to one of two values, the inverse problem is no longer ill-determined. Resolution becomes limited only by the accuracy and resolution of the measurement circuitry. Images were reconstructed using this method with both synthetic and real data acquired using an aluminum structure inserted at different positions within the sensing region. Comparisons with standard two dimensional ECT systems highlight the capabilities and limitations of the electronics and reconstruction algorithm.

  3. Fast High Resolution Volume Carving for 3D Plant Shoot Reconstruction

    PubMed Central

    Scharr, Hanno; Briese, Christoph; Embgenbroich, Patrick; Fischbach, Andreas; Fiorani, Fabio; Müller-Linow, Mark

    2017-01-01

    Volume carving is a well established method for visual hull reconstruction and has been successfully applied in plant phenotyping, especially for 3d reconstruction of small plants and seeds. When imaging larger plants at still relatively high spatial resolution (≤1 mm), well known implementations become slow or have prohibitively large memory needs. Here we present and evaluate a computationally efficient algorithm for volume carving, allowing, e.g., 3D reconstruction of plant shoots. It combines a well-known multi-grid representation called “Octree” with an efficient image region integration scheme called “Integral image.” Speedup with respect to less efficient octree implementations is about 2 orders of magnitude, due to the introduced refinement strategy “Mark and refine.” Speedup is about a factor of 1.6 compared to a highly optimized GPU implementation using equidistant voxel grids, even without using any parallelization. We demonstrate the application of this method for trait derivation of banana and maize plants. PMID:29033961
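
    The "integral image" ingredient can be sketched in a few lines: with a summed-area table, the silhouette coverage of any projected octree cell costs four lookups, which is what makes the carve/keep/refine test cheap. This is a generic sketch of the technique, not the authors' code.

```python
import numpy as np

def integral_image(mask):
    """Summed-area table with a zero top row and left column, so region
    sums need no boundary special cases."""
    H, W = mask.shape
    ii = np.zeros((H + 1, W + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(mask, axis=0), axis=1)
    return ii

def region_sum(ii, r0, c0, r1, c1):
    """Sum of mask[r0:r1, c0:c1] in O(1) via four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

    In a carving loop, a cell whose projected region sums to 0 is carved away, a fully covered region is kept solid, and anything in between is marked for octree refinement.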

  4. Homology-based hydrogen bond information improves crystallographic structures in the PDB.

    PubMed

    van Beusekom, Bart; Touw, Wouter G; Tatineni, Mahidhar; Somani, Sandeep; Rajagopal, Gunaretnam; Luo, Jinquan; Gilliland, Gary L; Perrakis, Anastassis; Joosten, Robbie P

    2018-03-01

    The Protein Data Bank (PDB) is the global archive for structural information on macromolecules, and a popular resource for researchers, teachers, and students, amassing more than one million unique users each year. Crystallographic structure models in the PDB (more than 100,000 entries) are optimized against the crystal diffraction data and geometrical restraints. This process of crystallographic refinement typically ignored hydrogen bond (H-bond) distances as a source of information. However, H-bond restraints can improve structures at low resolution where diffraction data are limited. To improve low-resolution structure refinement, we present methods for deriving H-bond information either globally from well-refined high-resolution structures from the PDB-REDO databank, or specifically from on-the-fly constructed sets of homologous high-resolution structures. Refinement incorporating HOmology DErived Restraints (HODER), improves geometrical quality and the fit to the diffraction data for many low-resolution structures. To make these improvements readily available to the general public, we applied our new algorithms to all crystallographic structures in the PDB: using massively parallel computing, we constructed a new instance of the PDB-REDO databank (https://pdb-redo.eu). This resource is useful for researchers to gain insight on individual structures, on specific protein families (as we demonstrate with examples), and on general features of protein structure using data mining approaches on a uniformly treated dataset. © 2017 The Protein Society.

  5. eWaterCycle: A high resolution global hydrological model

    NASA Astrophysics Data System (ADS)

    van de Giesen, Nick; Bierkens, Marc; Drost, Niels; Hut, Rolf; Sutanudjaja, Edwin

    2014-05-01

    In 2013, the eWaterCycle project was started, with the ambitious goal of running a high resolution global hydrological model. The starting point was the PCR-GLOBWB model built by Utrecht University. The software behind this model will be partially re-engineered in order to enable it to run in a High Performance Computing (HPC) environment. The aim is to have a spatial resolution of 1 km x 1 km. The idea is also to run the model in real-time and forecasting mode, using data assimilation. An on-demand hydraulic model will be available for detailed flow and flood forecasting in support of navigation and disaster management. The project faces a set of scientific challenges. First, to enable the model to run in an HPC environment, model runs were analyzed to examine on which parts of the program most CPU time was spent. These parts were re-coded in Open MPI to allow for parallel processing. Different parallelization strategies are conceivable. In our case, it was decided to use watershed logic as a first step to distribute the analysis. There is rather limited recent experience with HPC in hydrology and there is much to be learned and adjusted, both on the hydrological modeling side and the computer science side. For example, an interesting early observation was that hydrological models are, due to their localized parameterization, much more memory intensive than models of sister disciplines such as meteorology and oceanography. Because swapping information between memory and disk would be prohibitively slow, memory management becomes crucial. A standard Ensemble Kalman Filter (EnKF) would, for example, have excessive memory demands. To circumvent these problems, an alternative to the EnKF was developed that produces equivalent results. This presentation shows the most recent results from the model, including a 5 km x 5 km simulation and a proof of concept for the new data assimilation approach. Finally, some early ideas about the financial sustainability of an operational global hydrological model are presented.
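
    The point about EnKF memory demands can be made concrete with back-of-envelope arithmetic. All figures below are illustrative assumptions, not project numbers.

```python
# Rough memory demand of a standard EnKF ensemble at ~1 km resolution.
land_area_km2 = 1.5e8          # order of Earth's land surface, 1 km cells
states_per_cell = 10           # storages, fluxes, ... (assumed)
ensemble_members = 100         # typical EnKF ensemble size (assumed)
bytes_per_value = 8            # float64

state_vector = land_area_km2 * states_per_cell
ensemble_bytes = state_vector * ensemble_members * bytes_per_value
print(f"{ensemble_bytes / 1e12:.1f} TB")   # -> 1.2 TB, before covariances
```

    Holding terabytes of ensemble state in RAM across nodes, without swapping to disk, is exactly the constraint that motivated the alternative assimilation scheme mentioned above.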

  6. A Successful Test of Parallel Replication Teams in Teaching Research Methods

    ERIC Educational Resources Information Center

    Standing, Lionel G.; Astrologo, Lisa; Benbow, Felecia F.; Cyr-Gauthier, Chelsea S.; Williams, Charlotte A.

    2016-01-01

    This paper describes the novel use of parallel student teams from a research methods course to perform a replication study, and suggests that this approach offers pedagogical benefits for both students and teachers, as well as potentially contributing to a resolution of the replication crisis in psychology today. Four teams, of five undergraduates…

  7. Extremely high resolution 3D electrical resistivity tomography to depict archaeological subsurface structures

    NASA Astrophysics Data System (ADS)

    Al-Saadi, Osamah; Schmidt, Volkmar; Becken, Michael; Fritsch, Thomas

    2017-04-01

    Electrical resistivity tomography (ERT) methods have been increasingly used in various shallow-depth archaeological prospections in the last few decades. These non-invasive techniques are very useful in saving time, costs, and effort. Both 2D and 3D ERT techniques are used to obtain detailed images of subsurface anomalies. For two surveyed areas near Nonnweiler (Germany), we present the results of a full 3D setup with a roll-along technique and of a quasi-3D setup (parallel and orthogonal profiles in dipole-dipole configuration). In area A, a dipole-dipole array with 96 electrodes in a uniform rectangular survey grid was used in full 3D to investigate a presumed Roman building. A roll-along technique was utilized to cover a large part of the archaeological site with an electrode spacing of 1 meter, and of 0.5 meter for a more detailed image. Additional dense parallel 2D profiles were acquired in a dipole-dipole array with 0.25 meter electrode spacing and 0.25 meter between adjacent profiles in both directions for higher-resolution subsurface images. We designed a new field procedure, which used an electrode array fixed in a frame. This facilitated efficient field operation, which comprised 2376 electrode positions. With the quasi-3D imaging, we confirmed the full 3D inversion model but at a much better resolution. In area B, dense parallel 2D profiles were directly used to survey the second target, also with 0.25 meter electrode spacing and profile separation, respectively. The same field measurement design was utilized and comprised 9648 electrode positions in total. The quasi-3D inversion results clearly revealed the main structures of the Roman construction. These ERT inversion results coincided well with the archaeological excavation that has been done in some parts of this area. The ERT results successfully image parts of the walls and also smaller internal structures of the Roman building.

  8. New insights on multiple seismic uplift on the Main Frontal Thrust near the Ratu river, Eastern Nepal using high-resolution topography

    NASA Astrophysics Data System (ADS)

    Karakas, Cagil; Tapponnier, Paul; Nath Sapkota, Soma; Coudurier Curveur, Aurelie; Ildefonso, Sorvigenaleon; Gao, Mingxing; Bollinger, Laurent; Klinger, Yann

    2016-04-01

    The number of localities along the Main Frontal Thrust (MFT), between 85°49' and 86°27'E, where new data corroborate the surface emergence of the great M ≈ 8.4, 1934 Bihar-Nepal and 1255 AD earthquakes has increased in recent years. Here we show new high-resolution, quantitative evidence of surface rupture and co-seismic uplift near the Ratu river area. We present a refined map of uplifted terrace surfaces and abandoned paleo-channels truncated by the MFT, obtained by combining newly acquired high-resolution Digital Elevation Models from total station, Terrestrial Lidar Scanner (TLS), Unmanned Aerial Vehicle (UAV) and kinematic GPS surveys. In the Ratu valley, using these new high-resolution topographic datasets, we identify six and possibly seven distinct terrace levels uplifted parallel to the riverbed, lying unconformably on top of folded Siwaliks. Several sets of measurements may be taken to imply broadly characteristic increments of throw during sequences of at least six to seven events of riverbed abandonment related to co-seismic uplift. Newly collected detrital charcoals from several pits and from a rejuvenated paleoseismological wall will help assess more precisely the uplift and shortening rates along segments of the MFT east and west of Bardibas. A regional comparison of comparable long-term paleoseismological data at other sites along the 1934 rupture length is in progress.

  9. Time lens assisted photonic sampling extraction

    NASA Astrophysics Data System (ADS)

    Petrillo, Keith Gordon

    Telecommunication bandwidth demands have increased dramatically in recent years due to Internet-based services such as cloud computing and storage, large file sharing, and video streaming. Additionally, sensing systems such as wideband radar, magnetic resonance imaging systems, and complex modulation formats for handling large data transfers in telecommunications require high-speed, high-resolution analog-to-digital converters (ADCs) to interpret the data. Accurately acquiring and processing information at next-generation data rates from these systems has become challenging for electronic systems. The largest contributors to the electronic bottleneck are bandwidth and timing jitter, which limit speed and reduce accuracy. Optical systems have been shown to offer at least three orders of magnitude more bandwidth, and state-of-the-art mode-locked lasers have reduced timing jitter to thousands of attoseconds. Such features have encouraged processing signals without electronics, or using photonics to assist electronics. All-optical signal processing has enabled the processing of telecommunication line rates up to 1.28 Tb/s and high-resolution analog-to-digital conversion in the tens of gigahertz. The major drawback of these optical systems is the high cost of the components. The application of all-optical processing techniques such as a time lens and chirped processing can greatly reduce the bandwidth and cost requirements of optical serial-to-parallel converters and push photonically assisted ADCs into the hundreds of gigahertz. In this dissertation, the building blocks of a high-speed photonically assisted ADC are demonstrated, each providing benefits in its own respective application. A serial-to-parallel converter using a continuously operating time lens as an optical Fourier processor is demonstrated to fully convert a 160-Gb/s optical time-division multiplexed signal to 16 10-Gb/s channels with error-free operation. 
Using chirped processing, an optical sample-and-hold concept is demonstrated and analyzed as a resolution improvement to existing photonically assisted ADCs. Simulations indicate that applying a continuously operating time lens to a photonically assisted sampling system can improve photonically sampled systems by an order of magnitude while acquiring properties similar to an optical sample-and-hold system.

  10. Imaging doppler lidar for wind turbine wake profiling

    DOEpatents

    Bossert, David J.

    2015-11-19

    An imaging Doppler lidar (IDL) enables the measurement of the velocity distribution of a large volume, in parallel, and at high spatial resolution in the wake of a wind turbine. Because the IDL is non-scanning, it can be orders of magnitude faster than conventional coherent lidar approaches. Scattering can be obtained from naturally occurring aerosol particles. Furthermore, the wind velocity can be measured directly from Doppler shifts of the laser light, so the measurement can be accomplished at large standoff and at wide fields-of-view.
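
    The direct Doppler-shift measurement mentioned above reduces to a one-line conversion from frequency shift to radial velocity. A minimal sketch, assuming a monostatic geometry (round trip doubles the shift) and an illustrative 1.55 µm laser wavelength that is not stated in the record:

```python
def radial_velocity(doppler_shift_hz, wavelength_m=1.55e-6):
    """Radial wind velocity from the Doppler shift of backscattered laser
    light; the factor of 2 accounts for the monostatic round trip."""
    return wavelength_m * doppler_shift_hz / 2.0

# At 1.55 um, a shift of ~1.29 MHz corresponds to ~1 m/s radial velocity.
print(radial_velocity(1.29e6))
```

An imaging system applies this same conversion pixel-by-pixel across the field of view rather than along a single scanned beam.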

  11. Assimilation of the AVISO Altimetry Data into the Ocean Dynamics Model with a High Spatial Resolution Using Ensemble Optimal Interpolation (EnOI)

    NASA Astrophysics Data System (ADS)

    Kaurkin, M. N.; Ibrayev, R. A.; Belyaev, K. P.

    2018-01-01

    A parallel realization of the Ensemble Optimal Interpolation (EnOI) data assimilation (DA) method in conjunction with the eddy-resolving global circulation model is implemented. The results of DA experiments in the North Atlantic with the assimilation of the Archiving, Validation and Interpretation of Satellite Oceanographic (AVISO) data from the Jason-1 satellite are analyzed. The results of simulation are compared with the independent temperature and salinity data from the ARGO drifters.
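
    The EnOI analysis step sketched below uses a static (historical) ensemble to approximate the background covariance, which is what distinguishes EnOI from a full ensemble Kalman filter. This is a minimal NumPy illustration; the scaling factor `alpha`, the shapes, and the diagonal observation-error model are assumptions, not details from the record:

```python
import numpy as np

def enoi_update(xb, ens, H, y, r_var, alpha=0.7):
    """EnOI analysis step: xb (n,) background state, ens (n, m) static
    historical ensemble, H (p, n) observation operator, y (p,) observations,
    r_var observation-error variance. The background covariance B is built
    once from the static ensemble and scaled by alpha."""
    A = ens - ens.mean(axis=1, keepdims=True)      # ensemble anomalies
    B = alpha * (A @ A.T) / (A.shape[1] - 1)       # static background covariance
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + r_var * np.eye(H.shape[0]))
    return xb + K @ (y - H @ xb)                   # analysis state
```

Because B never changes, the gain computation can be parallelized over observation batches, which is what makes the method tractable with an eddy-resolving model.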

  12. Genome-wide mapping of mutations at single-nucleotide resolution for protein, metabolic and genome engineering.

    PubMed

    Garst, Andrew D; Bassalo, Marcelo C; Pines, Gur; Lynch, Sean A; Halweg-Edwards, Andrea L; Liu, Rongming; Liang, Liya; Wang, Zhiwen; Zeitoun, Ramsey; Alexander, William G; Gill, Ryan T

    2017-01-01

    Improvements in DNA synthesis and sequencing have underpinned comprehensive assessment of gene function in bacteria and eukaryotes. Genome-wide analyses require high-throughput methods to generate mutations and analyze their phenotypes, but approaches to date have been unable to efficiently link the effects of mutations in coding regions or promoter elements in a highly parallel fashion. We report that CRISPR-Cas9 gene editing in combination with massively parallel oligomer synthesis can enable trackable editing on a genome-wide scale. Our method, CRISPR-enabled trackable genome engineering (CREATE), links each guide RNA to homologous repair cassettes that both edit loci and function as barcodes to track genotype-phenotype relationships. We apply CREATE to site saturation mutagenesis for protein engineering, reconstruction of adaptive laboratory evolution experiments, and identification of stress tolerance and antibiotic resistance genes in bacteria. We provide preliminary evidence that CREATE will work in yeast. We also provide a webtool to design multiplex CREATE libraries.

  13. Anisotropy in Third-Order Nonlinear Optical Susceptibility of a Squarylium Dye in a Nematic Liquid Crystal

    NASA Astrophysics Data System (ADS)

    Jin, Zhao-Hui; Li, Zhong-Yu; Kasatani, Kazuo; Okamoto, Hiroaki

    2006-03-01

    A squarylium dye is dissolved in 4-cyano-4'-pentylbiphenyl (5CB) and oriented by sandwiching the mixture between two rubbed glass plates. The optical absorption spectra of the oriented squarylium dye-5CB layers exhibit high anisotropy. The third-order nonlinear optical responses and susceptibilities χ(3)e of the squarylium dye in 5CB are measured with light polarized parallel and perpendicular to the orientational direction by the resonant femtosecond degenerate four-wave mixing (DFWM) technique. Temporal profiles of the DFWM signal of the oriented squarylium dye-5CB layers with light polarized parallel and perpendicular to the orientational direction are measured with a time resolution of 0.3 ps (FWHM), and are found to consist of two components, i.e., a coherent instantaneous nonlinear response and a slow response due to the formation of excited molecules. A high anisotropic ratio of χ(3)e, 10.8±1.2, is observed for the oriented layers.

  14. Use of Massive Parallel Computing Libraries in the Context of Global Gravity Field Determination from Satellite Data

    NASA Astrophysics Data System (ADS)

    Brockmann, J. M.; Schuh, W.-D.

    2011-07-01

    The estimation of the global Earth gravity field, parametrized as a finite spherical harmonic series, is computationally demanding. The computational effort depends on the one hand on the maximal resolution of the spherical harmonic expansion (i.e., the number of parameters to be estimated) and on the other hand on the number of observations (several millions, e.g., for observations from the GOCE satellite mission). To circumvent these restrictions, massively parallel software based on high-performance computing (HPC) libraries such as ScaLAPACK, PBLAS and BLACS was designed in the context of GOCE HPF WP6000 and the GOCO consortium. A prerequisite for the use of these libraries is that all matrices are block-cyclically distributed on a processor grid comprising a large number of (distributed-memory) computers. Using this set of standard HPC libraries has the benefit that, once the matrices are distributed across the computer cluster, a huge set of efficient and highly scalable linear algebra operations can be used.
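
    The block-cyclic distribution these libraries require follows a simple per-dimension indexing rule, applied independently to rows and columns of the 2-D processor grid. A sketch of the standard mapping (illustrative, not code from the GOCE software):

```python
def block_cyclic_owner(g, nb, p):
    """Map a global row/column index g to its owner process and local index
    under a 1-D block-cyclic distribution with block size nb over p
    processes -- the per-dimension rule underlying ScaLAPACK's 2-D grids."""
    block = g // nb                      # which block the index falls in
    proc = block % p                     # blocks are dealt out cyclically
    local = (block // p) * nb + g % nb   # position within the owner's storage
    return proc, local

# 10 indices, block size 2, 2 processes: blocks alternate between processes.
print([block_cyclic_owner(g, 2, 2)[0] for g in range(10)])
# [0, 0, 1, 1, 0, 0, 1, 1, 0, 0]
```

The cyclic dealing of blocks is what balances the load of triangular factorizations, where work concentrates on trailing submatrices.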

  15. Toward real-time diffuse optical tomography: accelerating light propagation modeling employing parallel computing on GPU and CPU

    NASA Astrophysics Data System (ADS)

    Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid

    2017-12-01

    Parameter recovery in diffuse optical tomography is computationally expensive, especially for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, and the lack of a real-time solution impedes practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous-wave and frequency-domain systems, with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ˜600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ˜0.25 s/excitation source.
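
    The continuous-wave diffusion approximation referred to above solves an elliptic PDE, -∇·(κ∇Φ) + μaΦ = q, for the photon fluence Φ. A toy 1-D finite-difference analogue (the paper's forward model is a 3-D finite-element version; all parameter values here are illustrative assumptions):

```python
import numpy as np

def cw_diffusion_1d(n=100, L=4.0, kappa=0.03, mu_a=0.01, src=10):
    """Solve -d/dx(kappa dPhi/dx) + mu_a * Phi = q on [0, L] with zero
    Dirichlet boundaries, discretized on n points, point source at node src."""
    h = L / (n - 1)
    A = np.zeros((n, n))
    q = np.zeros(n)
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = -kappa / h**2   # diffusion stencil
        A[i, i] = 2 * kappa / h**2 + mu_a           # plus absorption
    A[0, 0] = A[-1, -1] = 1.0                       # Phi = 0 at the boundary
    q[src] = 1.0 / h                                # unit point source
    return np.linalg.solve(A, q)
```

In the 3-D FEM case the same structure appears as a large sparse linear solve per source, which is exactly the step the paper parallelizes on GPU and CPU.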

  16. Parallel detecting super-resolution microscopy using correlation based image restoration

    NASA Astrophysics Data System (ADS)

    Yu, Zhongzhi; Liu, Shaocong; Zhu, Dazhao; Kuang, Cuifang; Liu, Xu

    2017-12-01

    A novel approach to image restoration is proposed in which knowledge of each detector's relative position in the detector array is no longer a necessity. Each detector's relative location can be identified by extracting a patch from one detector's image and scanning it across the other detectors' images. From these locations, the point spread function (PSF) for each detector is generated and deconvolution is performed for image restoration. With this method, a microscope with an arbitrarily designed detector array can be constructed without concern for the exact relative locations of the detectors. Simulated and experimental results show an improvement in resolution by a factor of 1.7 compared with conventional confocal fluorescence microscopy. Given the significant enhancement in resolution and the ease of applying this method, it has potential for a wide range of applications in fluorescence microscopy based on parallel detection.
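
    The patch-scanning idea in this abstract can be sketched directly: cut a patch from a reference detector's image, slide it across another detector's image, and take the correlation peak as the relative shift. A minimal, hypothetical implementation using normalized cross-correlation (a real pipeline would add subpixel peak fitting):

```python
import numpy as np

def detector_offset(ref_img, other_img, patch_slice):
    """Estimate one detector's shift relative to a reference detector by
    scanning a zero-mean patch from the reference image over the other
    detector's image and locating the normalized-correlation peak."""
    patch = ref_img[patch_slice]
    p = patch - patch.mean()
    ph, pw = patch.shape
    best, best_pos = -np.inf, (0, 0)
    for dy in range(other_img.shape[0] - ph + 1):
        for dx in range(other_img.shape[1] - pw + 1):
            win = other_img[dy:dy+ph, dx:dx+pw]
            w = win - win.mean()
            score = (p * w).sum() / (np.linalg.norm(p) * np.linalg.norm(w) + 1e-12)
            if score > best:
                best, best_pos = score, (dy, dx)
    # shift of the peak relative to where the patch sits in the reference
    return best_pos[0] - patch_slice[0].start, best_pos[1] - patch_slice[1].start
```

Once each detector's offset is known, a shifted PSF can be assigned to it for the deconvolution step the abstract describes.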

  17. Model-based spectral estimation of Doppler signals using parallel genetic algorithms.

    PubMed

    Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F

    2000-05-01

    Conventional spectral analysis methods use a fast Fourier transform (FFT) on consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from inadequate frequency resolution due to the time segment duration and the non-stationary characteristics of the signals. Parametric or model-based estimators can give significant improvements in time-frequency resolution at the expense of higher computational complexity. This work describes an approach that implements, in real time, a parametric spectral estimator using genetic algorithms (GAs) to find the optimum set of parameters for the adaptive filter that minimises the error function. The aim is to reduce the computational complexity of the conventional algorithm by using the simplicity associated with GAs and exploiting their parallel characteristics. This allows the implementation of higher-order filters, increasing the spectral resolution and widening the scope for more complex methods.
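
    A GA-based parametric estimator of the kind described can be illustrated on a toy autoregressive-model fit, where the GA searches for filter coefficients that minimise the one-step prediction error. Everything below (AR order, population size, operators) is an illustrative assumption, not the authors' algorithm:

```python
import numpy as np

def ga_ar_fit(x, order=2, pop=40, gens=80, seed=0):
    """Fit AR-model coefficients with a simple real-coded GA: elitist
    truncation selection, averaging crossover, Gaussian mutation; fitness is
    the mean squared one-step prediction error of the adaptive filter."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    # regressors: column k holds x[n-1-k] for each target sample y[n]=x[n]
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    def err(c):
        return np.mean((y - X @ c) ** 2)
    popn = rng.normal(scale=0.5, size=(pop, order))
    for _ in range(gens):
        popn = popn[np.argsort([err(c) for c in popn])]
        parents = popn[:pop // 2]                     # truncation selection
        mates = parents[rng.permutation(pop // 2)]    # random pairing
        kids = (parents + mates) / 2 + rng.normal(scale=0.05, size=parents.shape)
        popn = np.vstack([parents, kids])             # elitist replacement
    return popn[np.argmin([err(c) for c in popn])]
```

Each individual's fitness evaluation is independent, which is the parallelism the paper exploits for real-time operation.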

  18. High spatial resolution technique for SPECT using a fan-beam collimator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ichihar, T.; Nambu, K.; Motomura, N.

    1993-08-01

    The physical characteristics of the collimator cause degradation of resolution with increasing distance from the collimator surface. A new convolutional backprojection algorithm has been derived for fan-beam SPECT data without rebinning into parallel-beam geometry. The projections are filtered and then backprojected into the area within an isosceles triangle whose vertex is the focal point of the fan beam and whose base is the fan-beam collimator face, and outside of the circle whose center is located midway between the focal point and the center of rotation and whose diameter is the distance between the focal point and the center of rotation. Consequently, the backprojected area is close to the collimator surface. This algorithm has been implemented on a GCA-9300A SPECT system, showing good results in both phantom and patient studies. The SPECT transaxial resolution was 4.6 mm FWHM (reconstructed image matrix size of 256x256) at the center of the SPECT FOV using UHR (ultra-high-resolution) fan-beam collimators for brain studies. Clinically, Tc-99m HMPAO and Tc-99m ECD brain data were reconstructed using this algorithm. The reconstruction results were compared with MRI images at the same slice positions and were significantly improved over results obtained with standard reconstruction algorithms.
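
    The backprojection region defined geometrically above (inside the isosceles triangle, outside the circle through the focal point and the center of rotation) can be expressed as a simple predicate. The geometry values below are illustrative placeholders, not dimensions from the paper; the focal point sits on the axis at y = 60, the collimator face along y = 0:

```python
def in_backprojection_region(x, y, focal_y=60.0, cor_y=30.0, face_half=20.0):
    """True if pixel (x, y) lies inside the isosceles triangle with vertex at
    the focal point (0, focal_y) and base the collimator face
    (y = 0, |x| <= face_half), AND outside the circle centered midway between
    the focal point and the center of rotation (0, cor_y) whose diameter is
    the focal-point-to-COR distance."""
    if not (0.0 <= y <= focal_y):
        return False
    half_width_at_y = face_half * (focal_y - y) / focal_y  # edges taper to the vertex
    if abs(x) > half_width_at_y:
        return False
    cy = (focal_y + cor_y) / 2.0          # exclusion-circle center on the axis
    radius = (focal_y - cor_y) / 2.0      # half the focal-to-COR distance
    return x ** 2 + (y - cy) ** 2 > radius ** 2
```

As the text notes, the surviving region hugs the collimator surface, which is where fan-beam resolution is best.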

  19. The parallel reaction monitoring method contributes to a highly sensitive polyubiquitin chain quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsuchiya, Hikaru; Tanaka, Keiji, E-mail: tanaka-kj@igakuken.or.jp; Saeki, Yasushi, E-mail: saeki-ys@igakuken.or.jp

    2013-06-28

    Highlights: •The parallel reaction monitoring method was applied to ubiquitin quantification. •The ubiquitin PRM method is highly sensitive even in biological samples. •Using the method, we revealed that Ufd4 assembles the K29-linked ubiquitin chain. -- Abstract: Ubiquitylation is an essential posttranslational protein modification that is implicated in a diverse array of cellular functions. Although cells contain eight structurally distinct types of polyubiquitin chains, the detailed functions of several chain types, including K29-linked chains, have remained largely unclear. Current mass spectrometry (MS)-based quantification methods are highly inefficient for low-abundance atypical chains, such as K29- and M1-linked chains, in complex mixtures that typically contain highly abundant proteins. In this study, we applied parallel reaction monitoring (PRM), a quantitative, high-resolution MS method, to quantify ubiquitin chains. The ubiquitin PRM method allows us to quantify 100 attomole amounts of all possible ubiquitin chains in cell extracts. Furthermore, we quantified ubiquitylation levels of ubiquitin-proline-β-galactosidase (Ub-P-βgal), a classical model substrate of the ubiquitin fusion degradation (UFD) pathway. In wild-type cells, Ub-P-βgal is modified with ubiquitin chains consisting of 21% K29- and 78% K48-linked chains. In contrast, K29-linked chains are not detected in UFD4 knockout cells, suggesting that Ufd4 assembles the K29-linked ubiquitin chain(s) on Ub-P-βgal in vivo. Thus, ubiquitin PRM is a novel, useful, quantitative method for analyzing the highly complicated ubiquitin system.

  20. A new implementation of full resolution SBAS-DInSAR processing chain for the effective monitoring of structures and infrastructures

    NASA Astrophysics Data System (ADS)

    Bonano, Manuela; Buonanno, Sabatino; Ojha, Chandrakanta; Berardino, Paolo; Lanari, Riccardo; Zeni, Giovanni; Manunta, Michele

    2017-04-01

    The advanced DInSAR technique referred to as the Small BAseline Subset (SBAS) algorithm has already largely demonstrated its effectiveness for carrying out multi-scale and multi-platform surface deformation analyses relevant to both natural and man-made hazards. Thanks to its capability to generate displacement maps and long-term deformation time series at both regional (low-resolution analysis) and local (full-resolution analysis) spatial scales, it provides insight into the spatial and temporal patterns of localized displacements affecting single buildings and infrastructures over extended urban areas, with a key role in supporting risk mitigation and preservation activities. The extensive application of the multi-scale SBAS-DInSAR approach in many scientific contexts has gone hand in hand with the development of new SAR satellite missions, characterized by different frequency bands, spatial resolutions, revisit times and ground coverage. This has led to the generation of huge DInSAR data stacks that must be efficiently handled, processed and archived, with a strong impact on both the data storage and the computational requirements needed to generate full-resolution SBAS-DInSAR results. Accordingly, innovative and effective solutions for the automatic processing of massive SAR data archives and for the operational management of the derived SBAS-DInSAR products need to be designed and implemented, exploiting the high efficiency (in terms of portability, scalability and computing performance) of new ICT methodologies. In this work, we present a novel parallel implementation of the full-resolution SBAS-DInSAR processing chain, aimed at investigating localized displacements affecting single buildings and infrastructures in very large urban areas, relying on parallelization strategies at different levels of granularity. 
The image granularity level is applied in most steps of the SBAS-DInSAR processing chain and exploits multiprocessor systems with distributed memory. Moreover, in some computationally heavy processing steps, Graphics Processing Units (GPUs) are exploited to process blocks that work on a pixel-by-pixel basis, requiring substantial modifications to key parts of the sequential full-resolution SBAS-DInSAR processing chain. GPU processing is implemented by efficiently exploiting parallel processing architectures (such as CUDA) to increase computing performance, in terms of optimization of the available GPU memory as well as reduction of Input/Output operations on the GPU and of the overall processing time for specific blocks with respect to the corresponding sequential implementation, which is particularly critical in the presence of huge DInSAR datasets. Moreover, to efficiently handle the massive amount of DInSAR measurements provided by the new-generation SAR constellations (CSK and Sentinel-1), we adopt a proper re-design strategy aimed at the robust assimilation of the full-resolution SBAS-DInSAR results into the web-based GeoNode platform of the Spatial Data Infrastructure, thus allowing efficient management, analysis and integration of the interferometric results with different data sources.

  1. Verification and Planning Based on Coinductive Logic Programming

    NASA Technical Reports Server (NTRS)

    Bansal, Ajay; Min, Richard; Simon, Luke; Mallya, Ajay; Gupta, Gopal

    2008-01-01

    Coinduction is a powerful technique for reasoning about unfounded sets, unbounded structures, infinite automata, and interactive computations [6]. Where induction corresponds to least fixed point semantics, coinduction corresponds to greatest fixed point semantics. Recently, coinduction has been incorporated into logic programming and an elegant operational semantics developed for it [11, 12]. This operational semantics is the greatest fixed point counterpart of SLD resolution (SLD resolution imparts operational semantics to least fixed point based computations) and is termed co-SLD resolution. In co-SLD resolution, a predicate goal p(t) succeeds if it unifies with one of its ancestor calls. In addition, rational infinite terms are allowed as arguments of predicates. Infinite terms are represented as solutions to unification equations, and the occurs check is omitted during the unification process. Coinductive Logic Programming (Co-LP) and co-SLD resolution can be used to elegantly perform model checking and planning. A combined SLD and co-SLD resolution based LP system forms the common basis for planning, scheduling, verification, model checking, and constraint solving [9, 4]. This is achieved by amalgamating SLD resolution, co-SLD resolution, and constraint logic programming [13] in a single logic programming system. Given that parallelism in logic programs can be exploited implicitly [8], complex, compute-intensive applications (planning, scheduling, model checking, etc.) can be executed in parallel on multi-core machines. Parallel execution can result in speed-ups as well as in larger instances of the problems being solved. In the remainder we elaborate on (i) how planning can be elegantly and efficiently performed under real-time constraints, (ii) how real-time systems can be elegantly and efficiently model-checked, and (iii) how hybrid systems can be verified in a combined system with both co-SLD and SLD resolution. 
Implementations of co-SLD resolution, as well as preliminary implementations of the planning and verification applications, have been developed [4]. Co-LP and Model Checking: The vast majority of properties to be verified can be classified into safety properties and liveness properties. It is well known within model checking that safety properties can be verified by reachability analysis, i.e., if a counter-example to the property exists, it can be finitely determined by enumerating all the reachable states of the Kripke structure.
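
    The co-SLD success rule described above (a goal succeeds if it matches one of its own ancestor calls) can be illustrated with a propositional toy interpreter. This sketch deliberately omits unification and rational terms, keeping only the cycle-detection rule that distinguishes co-SLD from SLD resolution:

```python
def co_sld(goal, rules, ancestors=frozenset()):
    """Propositional sketch of co-SLD resolution. `rules` maps each
    predicate name to a list of alternative clause bodies. A goal succeeds
    either by the coinductive hypothesis (it appears among its ancestor
    calls) or by some clause whose body goals all succeed."""
    if goal in ancestors:          # coinductive success: cycle detected
        return True
    return any(all(co_sld(g, rules, ancestors | {goal}) for g in body)
               for body in rules.get(goal, []))

# An infinite process: p :- q.  q :- p.
# Inductively (SLD) p loops forever; coinductively it succeeds.
print(co_sld("p", {"p": [["q"]], "q": [["p"]]}))  # True
```

This is precisely why co-SLD suits liveness-style properties of infinite behaviors: success is defined by a self-supporting cycle rather than by a finite derivation.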

  2. 3D sensitivity encoded ellipsoidal MR spectroscopic imaging of gliomas at 3T☆

    PubMed Central

    Ozturk-Isik, Esin; Chen, Albert P.; Crane, Jason C.; Bian, Wei; Xu, Duan; Han, Eric T.; Chang, Susan M.; Vigneron, Daniel B.; Nelson, Sarah J.

    2010-01-01

    Purpose: The goal of this study was to implement time-efficient data acquisition and reconstruction methods for 3D magnetic resonance spectroscopic imaging (MRSI) of gliomas at a field strength of 3T using parallel imaging techniques. Methods: The point spread functions, signal-to-noise ratio (SNR), spatial resolution, metabolite intensity distributions and Cho:NAA ratio of 3D ellipsoidal, 3D sensitivity encoding (SENSE) and 3D combined ellipsoidal and SENSE (e-SENSE) k-space sampling schemes were compared with conventional k-space data acquisition methods. Results: The 3D SENSE and e-SENSE methods resulted in similar spectral patterns as the conventional MRSI methods. The Cho:NAA ratios were highly correlated (P<.05 for SENSE and P<.001 for e-SENSE) with the ellipsoidal method, and all methods exhibited significantly different spectral patterns in tumor regions compared to normal-appearing white matter. The geometry factors ranged between 1.2 and 1.3 for both the SENSE and e-SENSE spectra. When corrected for these factors and for differences in data acquisition times, the empirical SNRs were similar to values expected on theoretical grounds. The effective spatial resolution of the SENSE spectra was estimated to be the same as the corresponding fully sampled k-space data, while the spectra acquired with ellipsoidal and e-SENSE k-space samplings were estimated to have a 2.36–2.47-fold loss in spatial resolution due to the differences in their point spread functions. Conclusion: The 3D SENSE method retained the same spatial resolution as full k-space sampling but with a 4-fold reduction in scan time and an acquisition time of 9.28 min. The 3D e-SENSE method had a similar spatial resolution as the corresponding ellipsoidal sampling with a scan time of 4:36 min. Both parallel imaging methods provided clinically interpretable spectra with volumetric coverage and adequate SNR for evaluating Cho, Cr and NAA. PMID:19766422

  3. Near-field electromagnetic holography for high-resolution analysis of network interactions in neuronal tissue

    PubMed Central

    Kjeldsen, Henrik D.; Kaiser, Marcus; Whittington, Miles A.

    2015-01-01

    Background: Brain function is dependent upon the concerted, dynamical interactions between a great many neurons distributed over many cortical subregions. Current methods of quantifying such interactions are limited by consideration only of single direct or indirect measures of a subsample of all neuronal population activity. New method: Here we present a new derivation of the electromagnetic analogy to near-field acoustic holography allowing high-resolution, vectored estimates of interactions between sources of electromagnetic activity that significantly improves this situation. In vitro voltage potential recordings were used to estimate pseudo-electromagnetic energy flow vector fields, current and energy source densities and energy dissipation in reconstruction planes at depth into the neural tissue parallel to the recording plane of the microelectrode array. Results: The properties of the reconstructed near-field estimate allowed both the utilization of super-resolution techniques to increase the imaging resolution beyond that of the microelectrode array, and facilitated a novel approach to estimating causal relationships between activity in neocortical subregions. Comparison with existing methods: The holographic nature of the reconstruction method allowed significantly better estimation of the fine spatiotemporal detail of neuronal population activity, compared with interpolation alone, beyond the spatial resolution of the electrode arrays used. Pseudo-energy flow vector mapping was possible with high temporal precision, allowing a near-realtime estimate of causal interaction dynamics. Conclusions: Basic near-field electromagnetic holography provides a powerful means to increase spatial resolution from electrode array data with careful choice of spatial filters and distance to reconstruction plane. 
More detailed approaches may provide the ability to volumetrically reconstruct activity patterns on neuronal tissue, but the ability to extract vectored data with the method presented already permits the study of dynamic causal interactions without bias from any prior assumptions on anatomical connectivity. PMID:26026581

  4. Near-field electromagnetic holography for high-resolution analysis of network interactions in neuronal tissue.

    PubMed

    Kjeldsen, Henrik D; Kaiser, Marcus; Whittington, Miles A

    2015-09-30

    Brain function is dependent upon the concerted, dynamical interactions between a great many neurons distributed over many cortical subregions. Current methods of quantifying such interactions are limited by consideration only of single direct or indirect measures of a subsample of all neuronal population activity. Here we present a new derivation of the electromagnetic analogy to near-field acoustic holography allowing high-resolution, vectored estimates of interactions between sources of electromagnetic activity that significantly improves this situation. In vitro voltage potential recordings were used to estimate pseudo-electromagnetic energy flow vector fields, current and energy source densities and energy dissipation in reconstruction planes at depth into the neural tissue parallel to the recording plane of the microelectrode array. The properties of the reconstructed near-field estimate allowed both the utilization of super-resolution techniques to increase the imaging resolution beyond that of the microelectrode array, and facilitated a novel approach to estimating causal relationships between activity in neocortical subregions. The holographic nature of the reconstruction method allowed significantly better estimation of the fine spatiotemporal detail of neuronal population activity, compared with interpolation alone, beyond the spatial resolution of the electrode arrays used. Pseudo-energy flow vector mapping was possible with high temporal precision, allowing a near-realtime estimate of causal interaction dynamics. Basic near-field electromagnetic holography provides a powerful means to increase spatial resolution from electrode array data with careful choice of spatial filters and distance to reconstruction plane. 
More detailed approaches may provide the ability to volumetrically reconstruct activity patterns on neuronal tissue, but the ability to extract vectored data with the method presented already permits the study of dynamic causal interactions without bias from any prior assumptions on anatomical connectivity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  5. Implementation of Helioseismic Data Reduction and Diagnostic Techniques on Massively Parallel Architectures

    NASA Technical Reports Server (NTRS)

    Korzennik, Sylvain

    1997-01-01

    Under the direction of Dr. Rhodes, and the technical supervision of Dr. Korzennik, the data assimilation of high spatial resolution solar Dopplergrams was carried out throughout the program on the Intel Delta Touchstone supercomputer. With the help of a research assistant, partially supported by this grant, and under the supervision of Dr. Korzennik, code development was carried out at SAO using various available resources. To ensure cross-platform portability, PVM was selected as the message passing library. A parallel implementation of power spectra computation for helioseismology data reduction, using PVM, was successfully completed. It was successfully ported to SMP architectures (i.e., SUN) and to some MPP architectures (i.e., the CM5). Due to limitations of the PVM implementation on the Cray T3D, the port to that architecture was not completed at the time.

  6. Contrasting landform perception with varied radar illumination geometries and at simulated resolutions of Venera and Magellan

    NASA Technical Reports Server (NTRS)

    Ford, J. P.; Arvidson, R. E.

    1989-01-01

    The high sensitivity of imaging radars to slope at moderate to low incidence angles enhances the perception of linear topography on images. It reveals broad spatial patterns that are essential to landform mapping and interpretation. As radar responses are strongly directional, the ability to discriminate linear features on images varies with their orientation. Landforms that appear prominent on images where they are transverse to the illumination may be obscure to indistinguishable on images where they are parallel to it. Landform detection is also influenced by the spatial resolution of radar images. Seasat radar images of the Gran Desierto Dunes complex, Sonora, Mexico; the Appalachian Valley and Ridge Province; and accreted terranes in eastern interior Alaska were processed to simulate both Venera 15 and 16 images (1000 to 3000 m resolution) and image data expected from the Magellan mission (120 to 300 m resolution). The Gran Desierto Dunes are not discernible in the Venera simulation, whereas the higher-resolution Magellan simulation shows dominant dune patterns produced from differential erosion of the rocks. The Magellan simulation also shows that fluvial processes have dominated erosion and exposure of the folds.

  7. Performance of the Wavelet Decomposition on Massively Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek A.; LeMoigne, Jacqueline; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Traditionally, Fourier transforms have been utilized for signal analysis and representation. Although it is straightforward to reconstruct a signal from its Fourier transform, the Fourier representation contains no local description of the signal. To alleviate this problem, windowed Fourier transforms and then wavelet transforms were introduced, and it has been proven that wavelets give better localization than traditional Fourier transforms, as well as a better division of the time- or space-frequency plane than windowed Fourier transforms. Because of these properties, and following the development of several fast algorithms for computing the wavelet representation of a signal, in particular the Multi-Resolution Analysis (MRA) developed by Mallat, wavelet transforms have increasingly been applied to signal analysis problems, especially real-life problems in which speed is critical. In this paper we present and compare efficient wavelet decomposition algorithms on different parallel architectures. We report and analyze experimental measurements using NASA remotely sensed images. Results show that our algorithms achieve significant performance gains on current high-performance parallel systems and meet the requirements of scientific and multimedia applications. The extensive performance measurements collected over a number of high-performance computer systems have revealed important architectural characteristics of these systems in relation to the processing demands of the wavelet decomposition of digital images.
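    For readers unfamiliar with Mallat's pyramid algorithm, the sketch below shows a one-dimensional Haar MRA decomposition in Python (a hypothetical illustration, not the paper's parallel implementation): each level halves the approximation band, and because every coefficient pair is computed independently, the per-level work is exactly what gets distributed across processors.

```python
import math

def haar_step(signal):
    """One level of the Haar MRA: split a signal of even length into
    approximation (low-pass) and detail (high-pass) coefficients."""
    assert len(signal) % 2 == 0
    s = math.sqrt(2) / 2  # orthonormal Haar filter tap
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_decompose(signal, levels):
    """Mallat's pyramid: recursively transform the approximation band."""
    details = []
    for _ in range(levels):
        signal, d = haar_step(signal)
        details.append(d)
    return signal, details
```

    A quick correctness check is that the orthonormal filters preserve signal energy (the sum of squared coefficients) at every level.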

  8. Three-dimensional through-time radial GRAPPA for renal MR angiography.

    PubMed

    Wright, Katherine L; Lee, Gregory R; Ehses, Philipp; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-10-01

    To achieve high temporal and spatial resolution for contrast-enhanced time-resolved MR angiography exams (trMRAs), fast imaging techniques such as non-Cartesian parallel imaging must be used. In this study, the three-dimensional (3D) through-time radial generalized autocalibrating partially parallel acquisition (GRAPPA) method is used to reconstruct highly accelerated stack-of-stars data for time-resolved renal MRAs. Through-time radial GRAPPA has been recently introduced as a method for non-Cartesian GRAPPA weight calibration, and a similar concept can also be used in 3D acquisitions. By combining different sources of calibration information, acquisition time can be reduced. Here, different GRAPPA weight calibration schemes are explored in simulation, and the results are applied to reconstruct undersampled stack-of-stars data. Simulations demonstrate that an accurate and efficient approach to 3D calibration is to combine a small number of central partitions with as many temporal repetitions as exam time permits. These findings were used to reconstruct renal trMRA data with an in-plane acceleration factor as high as 12.6 with respect to the Nyquist sampling criterion, where the lowest root mean squared error value of 16.4% was achieved when using a calibration scheme with 8 partitions, 16 repetitions, and a 4 projection × 8 read point segment size. 3D through-time radial GRAPPA can be used to successfully reconstruct highly accelerated non-Cartesian data. By using in-plane radial undersampling, a trMRA can be acquired with a temporal footprint of less than 4 s/frame with a spatial resolution of approximately 1.5 mm × 1.5 mm × 3 mm. © 2014 Wiley Periodicals, Inc.

  9. A 10 mK scanning tunneling microscope operating in ultra high vacuum and high magnetic fields.

    PubMed

    Assig, Maximilian; Etzkorn, Markus; Enders, Axel; Stiepany, Wolfgang; Ast, Christian R; Kern, Klaus

    2013-03-01

    We present design and performance of a scanning tunneling microscope (STM) that operates at temperatures down to 10 mK providing ultimate energy resolution on the atomic scale. The STM is attached to a dilution refrigerator with direct access to an ultra high vacuum chamber allowing in situ sample preparation. High magnetic fields of up to 14 T perpendicular and up to 0.5 T parallel to the sample surface can be applied. Temperature sensors mounted directly at the tip and sample position verified the base temperature within a small error margin. Using a superconducting Al tip and a metallic Cu(111) sample, we determined an effective temperature of 38 ± 1 mK from the thermal broadening observed in the tunneling spectra. This results in an upper limit for the energy resolution of ΔE = 3.5 kBT = 11.4 ± 0.3 μeV. The stability between tip and sample is 4 pm at a temperature of 15 mK as demonstrated by topography measurements on a Cu(111) surface.
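    As a quick numerical check of the quoted figures (a back-of-the-envelope Python snippet, not code from the paper):

```python
# Upper limit on the energy resolution of tunneling spectroscopy set by
# thermal broadening, Delta_E = 3.5 * k_B * T.
K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def energy_resolution_uev(t_kelvin):
    """Delta_E = 3.5 k_B T, returned in micro-electronvolts."""
    return 3.5 * K_B * t_kelvin * 1e6

# At the measured effective temperature of 38 mK this gives roughly 11.5 ueV,
# consistent with the quoted 11.4 +/- 0.3 ueV.
```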

  10. Effect of surface morphology on drag and roughness sublayer in flows over regular roughness elements

    NASA Astrophysics Data System (ADS)

    Placidi, Marco; Ganapathisubramani, Bharathram

    2014-11-01

    The effects of systematically varied roughness morphology on bulk drag and on the spatial structure of turbulent boundary layers are examined by performing a series of wind tunnel experiments. In this study, rough surfaces consisting of regularly and uniformly distributed LEGO™ bricks are employed. Twelve different patterns are adopted in order to methodically examine the individual effects of frontal solidity (λF, frontal area of the roughness elements per unit wall-parallel area) and plan solidity (λP, plan area of roughness elements per unit wall-parallel area), on both the bulk drag and the turbulence structure. A floating element friction balance based on Krogstad & Efros (2010) was designed and manufactured to measure the drag generated by the different surfaces. In parallel, high resolution planar and stereoscopic Particle Image Velocimetry (PIV) was applied to investigate the flow features. This talk will focus on the effects of each solidity parameter on the bulk drag and attempt to relate the observed trends to the flow structures in the roughness sublayer. Currently at City University London.

  11. Master-slave interferometry for parallel spectral domain interferometry sensing and versatile 3D optical coherence tomography.

    PubMed

    Podoleanu, Adrian Gh; Bradu, Adrian

    2013-08-12

    Conventional spectral domain interferometry (SDI) methods suffer from the need of data linearization. When applied to optical coherence tomography (OCT), conventional SDI methods are limited in their 3D capability, as they cannot deliver direct en-face cuts. Here we introduce a novel SDI method, which eliminates these disadvantages. We denote this method as Master-Slave Interferometry (MSI), because a signal is acquired by a slave interferometer for an optical path difference (OPD) value determined by a master interferometer. The MSI method radically changes the main building block of an SDI sensor and of a spectral domain OCT set-up. The serially provided signal in conventional technology is replaced by multiple signals, a signal for each OPD point in the object investigated. This opens novel avenues in parallel sensing and in parallelization of signal processing in 3D-OCT, with applications in high-resolution medical imaging and microscopy investigation of biosamples. Eliminating the need of linearization leads to lower cost OCT systems and opens potential avenues in increasing the speed of production of en-face OCT images in comparison with conventional SDI.

  12. Research on the Application of Fast-steering Mirror in Stellar Interferometer

    NASA Astrophysics Data System (ADS)

    Mei, R.; Hu, Z. W.; Xu, T.; Sun, C. S.

    2017-07-01

    For a stellar interferometer, the fast-steering mirror (FSM) is widely utilized to correct wavefront tilt caused by atmospheric turbulence and internal instrumental vibration, owing to its high resolution and fast response. In this study, the non-coplanarity error between the FSM axis and the actuator deflection axis introduced by manufacturing, assembly, and adjustment is analyzed. Via a numerical method, the additional optical path difference (OPD) caused by the above factors is studied, and its effect on the tracking accuracy of the stellar interferometer is discussed. On the other hand, the starlight parallelism between the beams of the two arms is one of the main factors in the loss of fringe visibility. By analyzing the influence of wavefront tilt caused by atmospheric turbulence on fringe visibility, a simple and efficient real-time correction scheme for starlight parallelism is proposed, based on a single array detector. The feasibility of this scheme is demonstrated by a laboratory experiment. The results show that, after correction by the fast-steering mirror, the starlight parallelism preliminarily meets the wavefront-tilt requirement of the stellar interferometer.

  13. Programming new geometry restraints: Parallelity of atomic groups

    DOE PAGES

    Sobolev, Oleg V.; Afonine, Pavel V.; Adams, Paul D.; ...

    2015-08-01

    Improvements in structural biology methods, in particular crystallography and cryo-electron microscopy, have created an increased demand for the refinement of atomic models against low-resolution experimental data. One way to compensate for the lack of high-resolution experimental data is to use a priori information about model geometry that can be utilized in refinement in the form of stereochemical restraints or constraints. Here, the definition and calculation of the restraints that can be imposed on planar atomic groups, in particular the angle between such groups, are described. Detailed derivations of the restraint targets and their gradients are provided so that they can be readily implemented in other contexts. Practical implementations of the restraints, and of associated data structures, in the Computational Crystallography Toolbox (cctbx) are presented.

  14. Computational Challenges of 3D Radiative Transfer in Atmospheric Models

    NASA Astrophysics Data System (ADS)

    Jakub, Fabian; Bernhard, Mayer

    2017-04-01

    The computation of radiative heating and cooling rates is one of the most expensive components in today's atmospheric models. The high computational cost stems not only from the laborious integration over a wide range of the electromagnetic spectrum but also from the fact that solving the integro-differential radiative transfer equation for monochromatic light is already rather involved. This led to the advent of numerous approximations and parameterizations to reduce the cost of the solver. One of the most prominent is the so-called independent pixel approximation (IPA), in which horizontal energy transfer is neglected altogether and radiation may only propagate in the vertical direction (1D). Recent studies indicate that the IPA introduces significant errors in high-resolution simulations and affects the evolution and development of convective systems. However, using fully 3D solvers such as Monte Carlo methods is not feasible even on state-of-the-art supercomputers. The parallelization of atmospheric models is often realized by a horizontal domain decomposition, and hence horizontal transfer of energy necessitates communication. For example, a cloud illuminated at a low solar elevation casts a long shadow that may need to be communicated through a multitude of processors, and light in the solar spectral range may generally travel long distances through the atmosphere. For highly parallel simulations, it is therefore vital that 3D radiative transfer solvers put a special emphasis on parallel scalability. We will present an introduction to the intricacies of computing 3D radiative heating and cooling rates, as well as report on the parallel performance of the TenStream solver. TenStream is a 3D radiative transfer solver that uses the PETSc framework to iteratively solve a set of partial differential equations. We investigate two matrix preconditioners: (a) geometric algebraic multigrid preconditioning (MG+GAMG) and (b) block-Jacobi incomplete LU (ILU) factorization. The TenStream solver is tested for up to 4096 cores and shows a parallel scaling efficiency of 80-90% on various supercomputers.
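    The contrast between the IPA and a 3D solver can be made concrete with a toy sketch (hypothetical Python, not TenStream code): under the IPA every vertical column is solved independently, here by simple Beer-Lambert extinction of the direct beam, so the solve parallelizes with no communication at all; a 3D solver would additionally need halo exchanges between neighbouring columns.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def column_direct_beam(tau_column, mu0=1.0, s0=1368.0):
    """IPA direct-beam profile down one column: Beer-Lambert extinction
    through accumulated optical depth; no horizontal transport.
    s0 is a nominal top-of-atmosphere flux in W/m^2, mu0 the cosine of
    the solar zenith angle (both illustrative defaults)."""
    flux, acc = [], 0.0
    for tau in tau_column:
        acc += tau
        flux.append(s0 * math.exp(-acc / mu0))
    return flux

def ipa_solve(tau_field, workers=4):
    """Every column is independent, so the solve is embarrassingly parallel;
    a 3D solver would instead require halo exchanges between columns."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(column_direct_beam, tau_field))
```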

  15. Scalable Algorithms for Clustering Large Geospatiotemporal Data Sets on Manycore Architectures

    NASA Astrophysics Data System (ADS)

    Mills, R. T.; Hoffman, F. M.; Kumar, J.; Sreepathi, S.; Sripathi, V.

    2016-12-01

    The increasing availability of high-resolution geospatiotemporal data sets from sources such as observatory networks, remote sensing platforms, and computational Earth system models has opened new possibilities for knowledge discovery using data sets fused from disparate sources. Traditional algorithms and computing platforms are impractical for the analysis and synthesis of data sets of this size; however, new algorithmic approaches that can effectively utilize the complex memory hierarchies and the extremely high levels of available parallelism in state-of-the-art high-performance computing platforms can enable such analysis. We describe a massively parallel implementation of accelerated k-means clustering and some optimizations to boost computational intensity and utilization of wide SIMD lanes on state-of-the-art multi- and manycore processors, including the second-generation Intel Xeon Phi ("Knights Landing") processor based on the Intel Many Integrated Core (MIC) architecture, which offers several new features, including an on-package high-bandwidth memory. We also analyze the code in the context of a few practical applications to the analysis of climatic and remotely-sensed vegetation phenology data sets, and speculate on some of the new applications that such scalable analysis methods may enable.
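    As a schematic of the data-parallel structure (not the authors' optimized implementation; the names and chunking scheme below are illustrative), a k-means iteration splits the expensive nearest-center assignment across workers and then reduces the per-chunk sums into new centers:

```python
from concurrent.futures import ThreadPoolExecutor

def assign_chunk(points, centers):
    """Nearest-center assignment for one chunk (the data-parallel inner kernel)."""
    labels = []
    for p in points:
        d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
        labels.append(d.index(min(d)))
    return labels

def kmeans(points, centers, iters=10, workers=4):
    """k-means sketch: parallel assignment step, then serial center update."""
    k, dim = len(centers), len(points[0])
    chunks = [points[i::workers] for i in range(workers)]
    for _ in range(iters):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            chunk_labels = list(pool.map(lambda c: assign_chunk(c, centers), chunks))
        sums = [[0.0] * dim for _ in range(k)]
        counts = [0] * k
        for chunk, labels in zip(chunks, chunk_labels):
            for p, l in zip(chunk, labels):
                counts[l] += 1
                sums[l] = [s + x for s, x in zip(sums[l], p)]
        centers = [[s / c for s in row] if c else centers[j]
                   for j, (row, c) in enumerate(zip(sums, counts))]
    return centers
```

    The production code described in the abstract additionally vectorizes the distance kernel across wide SIMD lanes and exploits the on-package high-bandwidth memory; the assignment/reduction structure, however, is the same.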

  16. A 24-ch Phased-Array System for Hyperpolarized Helium Gas Parallel MRI to Evaluate Lung Functions.

    PubMed

    Lee, Ray; Johnson, Glyn; Stefanescu, Cornel; Trampel, Robert; McGuinness, Georgeann; Stoeckel, Bernd

    2005-01-01

    Hyperpolarized 3He gas MRI has serious potential for assessing pulmonary function. Because the non-equilibrium polarization of the gas results in a steady depletion of the signal level over the course of the excitations, the signal-to-noise ratio (SNR) can be independent of the number of data acquisitions under certain circumstances. This provides a unique opportunity for parallel MRI to gain both temporal and spatial resolution without reducing SNR. We have built a 24-channel receive / 2-channel transmit phased-array system for 3He parallel imaging. Our in vivo experimental results proved that significant temporal and spatial resolution can be gained at no cost to the SNR. With 3D data acquisition, an eightfold (2×4) scan-time reduction can be achieved without any aliasing in the images. Additionally, a rigorous analysis of the use of low-impedance preamplifiers for decoupling presented evidence of strong coupling.

  17. Overview of the NASA Glenn Flux Reconstruction Based High-Order Unstructured Grid Code

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; DeBonis, James R.; Huynh, H. T.

    2016-01-01

    A computational fluid dynamics code based on the flux reconstruction (FR) method is currently being developed at NASA Glenn Research Center to ultimately provide a large-eddy simulation capability that is both accurate and efficient for complex aeropropulsion flows. The FR approach offers a simple and efficient method that is easy to implement and accurate to an arbitrary order on common grid cell geometries. The governing compressible Navier-Stokes equations are discretized in time using various explicit Runge-Kutta schemes, with the default being the 3-stage/3rd-order strong stability preserving scheme. The code is written in modern Fortran (i.e., Fortran 2008) and parallelization is attained through MPI for execution on distributed-memory high-performance computing systems. An h-refinement study of the isentropic Euler vortex problem is able to empirically demonstrate the capability of the FR method to achieve super-accuracy for inviscid flows. Additionally, the code is applied to the Taylor-Green vortex problem, performing numerous implicit large-eddy simulations across a range of grid resolutions and solution orders. The solution found by a pseudo-spectral code is commonly used as a reference solution to this problem, and the FR code is able to reproduce this solution using approximately the same grid resolution. Finally, an examination of the code's performance demonstrates good parallel scaling, as well as an implementation of the FR method with a computational cost/degree-of-freedom/time-step that is essentially independent of the solution order of accuracy for structured geometries.
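    The default time integrator mentioned above, the 3-stage/3rd-order strong-stability-preserving Runge-Kutta scheme, is compact enough to state directly. A scalar Python sketch of the standard Shu-Osher form (an illustration, not the NASA Fortran code) is:

```python
def ssp_rk3_step(f, u, t, dt):
    """One step of the 3-stage, 3rd-order strong-stability-preserving
    Runge-Kutta scheme (Shu-Osher form) for du/dt = f(t, u)."""
    u1 = u + dt * f(t, u)                                   # forward Euler stage
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(t + dt, u1))        # convex combination
    u3 = (1.0 / 3.0) * u + (2.0 / 3.0) * (u2 + dt * f(t + 0.5 * dt, u2))
    return u3
```

    Each stage is a convex combination of forward-Euler steps, which is what gives the scheme its strong-stability (non-oscillatory) property under the usual CFL restriction.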

  18. Parallel Hough Transform-Based Straight Line Detection and Its FPGA Implementation in Embedded Vision

    PubMed Central

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-01-01

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness. PMID:23867746
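    To make the angle-level parallelism concrete, here is a minimal software model of the Hough voting stage in Python (illustrative only; the paper's design is a pipelined FPGA architecture, not software). The accumulator is built one theta column at a time, and because the columns are mutually independent, each can be mapped to its own processing element.

```python
import math

def hough_lines(points, width, height, n_theta=180):
    """Vote in (theta, rho) space for a set of edge-pixel coordinates.
    Each theta column is independent -- the angle-level parallelism that
    an FPGA pipeline can exploit by giving every angle its own vote unit."""
    diag = int(math.hypot(width, height)) + 1
    acc = [[0] * (2 * diag) for _ in range(n_theta)]
    for ti in range(n_theta):          # each iteration could run on its own PE
        theta = ti * math.pi / n_theta
        c, s = math.cos(theta), math.sin(theta)
        for x, y in points:
            rho = int(round(x * c + y * s)) + diag  # shift so the index is >= 0
            acc[ti][rho] += 1
    return acc

def best_line(acc):
    """Return (theta_index, rho_index) of the strongest accumulator cell."""
    best = max((v, ti, ri) for ti, row in enumerate(acc)
               for ri, v in enumerate(row))
    return best[1], best[2]
```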

  19. Parallel Hough Transform-based straight line detection and its FPGA implementation in embedded vision.

    PubMed

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-07-17

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness.

  20. Configuration of twins in glass-embedded silver nanoparticles of various origin

    NASA Astrophysics Data System (ADS)

    Hofmeister, H.; Dubiel, M.; Tan, G. L.; Schicke, K.-D.

    2005-09-01

    Structural characterization using high resolution electron microscopy and diffractogram analysis of silver nanoparticles embedded in glass by various routes of fabrication was aimed at revealing the characteristic features of twin faults occurring in such particles. Nearly spherical silver nanoparticles well below 10 nm size embedded in commercial soda-lime silicate float glass have been fabricated either by silver/sodium ion exchange or by Ag+ ion implantation. Twinned nanoparticles, besides single crystalline species, have frequently been observed for both fabrication routes, mainly at sizes above 5 nm, but also at smaller sizes, even around 1 nm. The variety of particle forms comprises single crystalline particles of nearly cuboctahedron shape, particles containing single twin faults, and multiply twinned particles containing parallel twin lamellae, or cyclic twinned segments arranged around axes of fivefold symmetry. Parallel twinning is distinctly favoured by ion implantation whereas cyclic twinning preferably occurs upon ion exchange processing. Regardless of single or repeated twinning, parallel or cyclic twin arrangement, one may classify simple twin faults of regular atomic configuration and compound twin faults whose irregular configuration consists of additional planar defects like associated stacking faults or secondary twin faults. Besides, a particular superstructure composed of parallel twin lamellae of only three atomic layers thickness is observed.

  1. Whistler Waves Driven by Anisotropic Strahl Velocity Distributions: Cluster Observations

    NASA Technical Reports Server (NTRS)

    Vinas, A.F.; Gurgiolo, C.; Nieves-Chinchilla, T.; Gary, S. P.; Goldstein, M. L.

    2010-01-01

    Observed properties of the strahl, derived from high resolution 3D electron velocity distribution data obtained from the Cluster/PEACE experiment, are used to investigate its linear stability. An automated method to isolate the strahl is used to allow its moments to be computed independent of the solar wind core+halo. Results show that the strahl can have a high temperature anisotropy (T(perpendicular)/T(parallel) approximately > 2). This anisotropy is shown to be an important free-energy source for the excitation of high-frequency whistler waves. The analysis suggests that the resultant whistler waves are strong enough to regulate the electron velocity distributions in the solar wind through pitch-angle scattering.

  2. Micro-differential scanning calorimeter for liquid biological samples

    DOE PAGES

    Wang, Shuyu; Yu, Shifeng; Siedler, Michael S.; ...

    2016-10-20

    Here, we developed an ultrasensitive micro-DSC (differential scanning calorimeter) for liquid protein sample characterization. Our design integrated vanadium oxide thermistors and flexible polymer substrates with microfluidic chambers to achieve a calorimeter sensor with high sensitivity (6 V/W), low thermal conductivity (0.7 mW/K), high power resolution (40 nW), and a well-defined liquid volume (1 μl) in a compact and cost-effective way. Furthermore, we demonstrated the performance of the sensor with lysozyme unfolding. The measured transition temperature and enthalpy change were in accordance with previous literature data. This micro-DSC could potentially raise the prospect of high-throughput biochemical measurement by parallel operation with miniaturized sample consumption.

  3. NASA/American Cancer Society High-Resolution Flow Cytometry Project-I

    NASA Technical Reports Server (NTRS)

    Thomas, R. A.; Krishan, A.; Robinson, D. M.; Sams, C.; Costa, F.

    2001-01-01

    BACKGROUND: The NASA/American Cancer Society (ACS) flow cytometer can simultaneously analyze the electronic nuclear volume (ENV) and DNA content of cells. This study describes the schematics, resolution, reproducibility, and sensitivity of biological standards analyzed on this unit. METHODS: Calibrated beads and biological standards (lymphocytes, trout erythrocytes [TRBC], calf thymocytes, and tumor cells) were analyzed for ENV versus DNA content. Parallel data (forward scatter versus DNA) from a conventional flow cytometer were obtained. RESULTS: ENV linearity studies yielded an R value of 0.999. TRBC had a coefficient of variation (CV) of 1.18 +/- 0.13. DNA indexes as low as 1.02 were detectable. DNA content of lymphocytes from 42 females was 1.9% greater than that for 60 males, with a noninstrumental variability in total DNA content of 0.5%. The ENV/DNA ratio was constant in 15 normal human tissue samples, but differed in the four animal species tested. The ENV/DNA ratio for a hypodiploid breast carcinoma was 2.3 times greater than that for normal breast tissue. CONCLUSIONS: The high-resolution ENV versus DNA analyses are highly reliable, sensitive, and can be used for the detection of near-diploid tumor cells that are difficult to identify with conventional cytometers. ENV/DNA ratio may be a useful parameter for detection of aneuploid populations.

  4. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE)

    PubMed Central

    Sharif, Behzad; Derbyshire, J. Andrew; Faranesh, Anthony Z.; Bresler, Yoram

    2010-01-01

    MR imaging of the human heart without explicit cardiac synchronization promises to extend the applicability of cardiac MR to a larger patient population and potentially expand its diagnostic capabilities. However, conventional non-gated imaging techniques typically suffer from low image quality or inadequate spatio-temporal resolution and fidelity. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE) is a highly-accelerated non-gated dynamic imaging method that enables artifact-free imaging with high spatio-temporal resolutions by utilizing novel computational techniques to optimize the imaging process. In addition to using parallel imaging, the method gains acceleration from a physiologically-driven spatio-temporal support model; hence, it is doubly accelerated. The support model is patient-adaptive, i.e., its geometry depends on dynamics of the imaged slice, e.g., subject’s heart-rate and heart location within the slice. The proposed method is also doubly adaptive as it adapts both the acquisition and reconstruction schemes. Based on the theory of time-sequential sampling, the proposed framework explicitly accounts for speed limitations of gradient encoding and provides performance guarantees on achievable image quality. The presented in-vivo results demonstrate the effectiveness and feasibility of the PARADISE method for high resolution non-gated cardiac MRI during a short breath-hold. PMID:20665794

  5. G.A.M.E.: GPU-accelerated mixture elucidator.

    PubMed

    Schurz, Alioune; Su, Bo-Han; Tu, Yi-Shu; Lu, Tony Tsung-Yu; Lin, Olivia A; Tseng, Yufeng J

    2017-09-15

    GPU acceleration is useful in solving complex chemical information problems. Identifying unknown structures from the mass spectra of natural product mixtures has been a desirable yet unresolved issue in metabolomics. However, this elucidation process has been hampered by complex experimental data and the inability of instruments to completely separate different compounds. Fortunately, with current high-resolution mass spectrometry, one feasible strategy is to define this problem as extending a scaffold database with sidechains of different probabilities to match the high-resolution mass obtained from a high-resolution mass spectrum. By introducing a dynamic programming (DP) algorithm, it is possible to solve this NP-complete problem in pseudo-polynomial time. However, the running time of the DP algorithm grows by orders of magnitude as the number of mass decimal digits increases, thus limiting the boost in structural prediction capabilities. By harnessing the heavily parallel architecture of modern GPUs, we designed a "compute unified device architecture" (CUDA)-based GPU-accelerated mixture elucidator (G.A.M.E.) that considerably improves the performance of the DP, allowing up to five decimal digits for input mass data. As exemplified by four testing datasets with verified constitutions from natural products, G.A.M.E. allows for efficient and automatic structural elucidation of unknown mixtures for practical procedures.
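    The pseudo-polynomial DP at the heart of this approach is essentially a counting knapsack over integer-scaled masses. A serial Python sketch (illustrative only; G.A.M.E. runs a CUDA-parallelized variant over a scaffold/sidechain database) shows why the run time grows by an order of magnitude per extra decimal digit:

```python
def count_mass_matches(target_mass, fragment_masses, decimals=3):
    """Pseudo-polynomial DP: count the combinations of fragments (each usable
    any number of times) whose masses sum to the target, after scaling all
    masses to integers at the chosen decimal precision. Table size, and hence
    run time, grows 10x per extra decimal digit."""
    scale = 10 ** decimals
    target = round(target_mass * scale)
    frags = [round(m * scale) for m in fragment_masses]
    ways = [0] * (target + 1)  # ways[m] = combinations summing to scaled mass m
    ways[0] = 1
    for f in frags:            # unbounded use of each fragment type
        for m in range(f, target + 1):
            ways[m] += ways[m - f]
    return ways[target]
```

    Each extra decimal digit multiplies `scale`, and with it the table length and inner-loop count, by ten; that growth is what the GPU parallelization absorbs.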

  6. Gas chromatography fractionation platform featuring parallel flame-ionization detection and continuous high-resolution analyte collection in 384-well plates.

    PubMed

    Jonker, Willem; Clarijs, Bas; de Witte, Susannah L; van Velzen, Martin; de Koning, Sjaak; Schaap, Jaap; Somsen, Govert W; Kool, Jeroen

    2016-09-02

    Gas chromatography (GC) is a superior separation technique for many compounds. However, fractionation of a GC eluate for analyte isolation and/or post-column off-line analysis is not straightforward, and existing platforms are limited in the number of fractions that can be collected. Moreover, aerosol formation may cause serious analyte losses. Previously, our group developed a platform that resolved these limitations of GC fractionation by post-column infusion of a trap solvent prior to continuous small-volume fraction collection in a 96-well plate (Pieke et al., 2013 [17]). Still, this GC fractionation set-up lacked a chemical detector for the on-line recording of chromatograms, and the introduction of trap solvent resulted in extensive peak broadening for late-eluting compounds. This paper reports advancements to the fractionation platform allowing flame ionization detection (FID) parallel to high-resolution collection of a full GC chromatogram in up to 384 nanofractions of 7 s each. To this end, a post-column split was incorporated which directs part of the eluate towards FID. Furthermore, a solvent heating device was developed for stable delivery of preheated/vaporized trap solvent, which significantly reduced the band broadening caused by post-column infusion. In order to achieve optimal analyte trapping, several solvents were tested at different flow rates. The repeatability of the optimized GC fraction collection process was assessed, demonstrating the possibility of up-concentrating isolated analytes by repetitive analyses of the same sample. The feasibility of the improved GC fractionation platform for bioactivity screening of toxic compounds was studied by the analysis of a mixture of test pesticides, which after fractionation were subjected to a post-column acetylcholinesterase (AChE) assay. Fractions showing AChE inhibition could be unambiguously correlated with peaks from the parallel-recorded FID chromatogram. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. At-line nanofractionation with parallel mass spectrometry and bioactivity assessment for the rapid screening of thrombin and factor Xa inhibitors in snake venoms.

    PubMed

    Mladic, Marija; Zietek, Barbara M; Iyer, Janaki Krishnamoorthy; Hermarij, Philip; Niessen, Wilfried M A; Somsen, Govert W; Kini, R Manjunatha; Kool, Jeroen

    2016-02-01

    Snake venoms comprise complex mixtures of peptides and proteins causing modulation of diverse physiological functions upon envenomation of the prey organism. The components of snake venoms are studied as research tools and as potential drug candidates. However, the bioactivity determination with subsequent identification and purification of the bioactive compounds is a demanding and often laborious effort involving different analytical and pharmacological techniques. This study describes the development and optimization of an integrated analytical approach for activity profiling and identification of venom constituents targeting the cardiovascular system, thrombin and factor Xa enzymes in particular. The approach developed encompasses reversed-phase liquid chromatography (RPLC) analysis of a crude snake venom with parallel mass spectrometry (MS) and bioactivity analysis. The analytical and pharmacological part in this approach are linked using at-line nanofractionation. This implies that the bioactivity is assessed after high-resolution nanofractionation (6 s/well) onto high-density 384-well microtiter plates and subsequent freeze drying of the plates. The nanofractionation and bioassay conditions were optimized for maintaining LC resolution and achieving good bioassay sensitivity. The developed integrated analytical approach was successfully applied for the fast screening of snake venoms for compounds affecting thrombin and factor Xa activity. Parallel accurate MS measurements provided correlation of observed bioactivity to peptide/protein masses. This resulted in identification of a few interesting peptides with activity towards the drug target factor Xa from a screening campaign involving venoms of 39 snake species. Besides this, many positive protease activity peaks were observed in most venoms analysed. 
These protease fingerprint chromatograms were found to be similar for evolutionary closely related species and as such might serve as generic snake protease bioactivity fingerprints in biological studies on venoms. Copyright © 2015 Elsevier Ltd. All rights reserved.
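As a back-of-the-envelope illustration of the at-line nanofractionation step (6 s/well onto 384-well plates), the mapping from LC retention time to plate well can be sketched as follows; the function name and the 0-based indexing are illustrative, not from the paper.

```python
def well_for_time(t_s, frac_s=6.0, n_wells=384):
    """Map an LC retention time (seconds) to a well index for at-line
    nanofractionation at 6 s/well onto a 384-well microtiter plate."""
    idx = int(t_s // frac_s)
    if idx >= n_wells:
        raise ValueError("retention time exceeds plate capacity")
    return idx
```

At this fractionation rate a single plate covers 384 × 6 s = 38.4 min of the LC gradient; a peak eluting at 61 s lands in (0-based) well 10.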

  8. 3D hyperpolarized C-13 EPI with calibrationless parallel imaging

    NASA Astrophysics Data System (ADS)

    Gordon, Jeremy W.; Hansen, Rie B.; Shin, Peter J.; Feng, Yesu; Vigneron, Daniel B.; Larson, Peder E. Z.

    2018-04-01

    With the translation of metabolic MRI with hyperpolarized 13C agents into the clinic, imaging approaches will require large volumetric FOVs to support clinical applications. Parallel imaging techniques will be crucial to increasing volumetric scan coverage while minimizing RF requirements and temporal resolution. Calibrationless parallel imaging approaches are well-suited for this application because they eliminate the need to acquire coil profile maps or auto-calibration data. In this work, we explored the utility of a calibrationless parallel imaging method (SAKE) and corresponding sampling strategies to accelerate and undersample hyperpolarized 13C data using 3D blipped EPI acquisitions and multichannel receive coils, and demonstrated its application in a human study of [1-13C]pyruvate metabolism.
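The sampling side of such an undersampled acquisition can be illustrated with a variable-density random mask over the two phase-encode axes. This is a generic sketch (the density shape and all parameters are assumptions, not the sampling pattern used in the study), but it shows why calibrationless methods are attractive: no fully sampled calibration region needs to be reserved.

```python
import numpy as np

def vd_mask(ny, nz, accel=4, seed=0):
    """Variable-density random undersampling mask for the two phase-
    encode axes of a 3D acquisition; calibrationless methods such as
    SAKE need no fully sampled auto-calibration block."""
    rng = np.random.default_rng(seed)
    y = np.abs(np.linspace(-1, 1, ny))[:, None]
    z = np.abs(np.linspace(-1, 1, nz))[None, :]
    pdf = (1 - np.sqrt(y**2 + z**2) / np.sqrt(2)) ** 2  # denser near k-space center
    pdf *= (ny * nz / accel) / pdf.sum()                # target net acceleration
    return rng.random((ny, nz)) < np.clip(pdf, 0, 1)

mask = vd_mask(64, 48, accel=4)
```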

  9. Single-particle cryo-EM-Improved ab initio 3D reconstruction with SIMPLE/PRIME.

    PubMed

    Reboul, Cyril F; Eager, Michael; Elmlund, Dominika; Elmlund, Hans

    2018-01-01

    Cryogenic electron microscopy (cryo-EM) and single-particle analysis now enable the determination of high-resolution structures of macromolecular assemblies that have resisted X-ray crystallography and other approaches. We developed the SIMPLE open-source image-processing suite for analysing cryo-EM images of single particles. A core component of SIMPLE is the probabilistic PRIME algorithm for identifying clusters of images in 2D and determining relative orientations of single-particle projections in 3D. Here, we extend our previous work on PRIME and introduce new stochastic optimization algorithms that improve the robustness of the approach. Our refined method for identification of homogeneous subsets of images in accurate register substantially improves the resolution of the cluster centers and of the ab initio 3D reconstructions derived from them. We now obtain maps with a resolution better than 10 Å by exclusively processing cluster centers. Excellent parallel code performance on over-the-counter laptops and CPU workstations is demonstrated. © 2017 The Protein Society.

  10. Quantitative x-ray phase-contrast imaging using a single grating of comparable pitch to sample feature size.

    PubMed

    Morgan, Kaye S; Paganin, David M; Siu, Karen K W

    2011-01-01

    The ability to quantitatively retrieve transverse phase maps during imaging by using coherent x rays often requires a precise grating or analyzer-crystal-based setup. Imaging of live animals presents further challenges when these methods require multiple exposures for image reconstruction. We present a simple method of single-exposure, single-grating quantitative phase contrast for a regime in which the grating period is much greater than the effective pixel size. A grating is used to create a high-visibility reference pattern incident on the sample, which is distorted according to the complex refractive index and thickness of the sample. The resolution, along a line parallel to the grating, is not restricted by the grating spacing, and the detector resolution becomes the primary determinant of the spatial resolution. We present a method of analysis that maps the displacement of interrogation windows in order to retrieve a quantitative phase map. Application of this analysis to the imaging of known phantoms shows excellent correspondence.
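The interrogation-window analysis amounts to locating the local shift of the reference grating pattern by cross-correlation. A minimal 1D sketch (synthetic sinusoidal pattern and integer-pixel shift are illustrative; the real analysis tracks 2D windows at sub-pixel precision):

```python
import numpy as np

def window_shift(ref, img):
    """Displacement (in pixels) of `img` relative to `ref`, found by
    locating the cross-correlation peak - the core of window-based
    mapping of a distorted grating reference pattern."""
    c = np.correlate(img - img.mean(), ref - ref.mean(), mode="full")
    return np.argmax(c) - (len(ref) - 1)

x = np.arange(256)
ref = np.sin(2 * np.pi * x / 16)   # reference grating pattern (period 16 px)
img = np.roll(ref, 3)              # sample distorts/shifts the pattern by 3 px
```

The recovered per-window displacements are proportional to the transverse phase gradient and can be integrated to yield the quantitative phase map.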

  11. Investigation of low-loss spectra and near-edge fine structure of polymers by PEELS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heckmann, W.

    Transmission electron microscopy has changed from a purely imaging method to an analytical method. This has been facilitated particularly by equipping electron microscopes with energy filters and with parallel electron energy loss spectrometers (PEELS). Because of their relatively high energy resolution (1 to 2 eV) they provide information not only on the elements present but also on the type of bonds between the molecular groups. Polymers are radiation sensitive and the molecular bonds change as the spectrum is being recorded. This can be observed with PEEL spectrometers that are able to record spectra with high sensitivity and in rapid succession.

  12. Thermal management of tungsten leading edges in DIII-D

    DOE PAGES

    Nygren, Richard E.; Rudakov, Dmitry L.; Murphy, Christopher; ...

    2017-04-29

    The DiMES materials probe exposed tungsten blocks with 0.3 and 1 mm high leading edges to DIII-D He plasmas in 2015 and 2016, viewed with high resolution IRTV. The 1-mm edge may have reached >2400 °C in a 3-s shot with a (parallel) heat load of ~50 MW/m² and ~10 MW/m² on the surface, based on modeling. The experiments support ITER. Leading edges were also a concern in the DIII-D Metal Tile Experiment in 2016. Two toroidal rings of divertor tiles had W-coated molybdenum inserts 50 mm wide radially. This study presents data and thermal analyses.

  13. Thermal management of tungsten leading edges in DIII-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nygren, Richard E.; Rudakov, Dmitry L.; Murphy, Christopher

    The DiMES materials probe exposed tungsten blocks with 0.3 and 1 mm high leading edges to DIII-D He plasmas in 2015 and 2016, viewed with high resolution IRTV. The 1-mm edge may have reached >2400 °C in a 3-s shot with a (parallel) heat load of ~50 MW/m² and ~10 MW/m² on the surface, based on modeling. The experiments support ITER. Leading edges were also a concern in the DIII-D Metal Tile Experiment in 2016. Two toroidal rings of divertor tiles had W-coated molybdenum inserts 50 mm wide radially. This study presents data and thermal analyses.

  14. Exploring New Challenges of High-Resolution SWOT Satellite Altimetry with a Regional Model of the Solomon Sea

    NASA Astrophysics Data System (ADS)

    Brasseur, P.; Verron, J. A.; Djath, B.; Duran, M.; Gaultier, L.; Gourdeau, L.; Melet, A.; Molines, J. M.; Ubelmann, C.

    2014-12-01

    The upcoming high-resolution SWOT altimetry satellite will provide an unprecedented description of the ocean dynamic topography for studying sub- and meso-scale processes in the ocean. But there is still much uncertainty on the signal that will be observed. There are many scientific questions that are unresolved about the observability of altimetry at very high resolution and on the dynamical role of the ocean meso- and submesoscales. In addition, SWOT data will raise specific problems due to the size of the data flows. These issues will probably impact the data assimilation approaches for future scientific or operational oceanography applications. In this work, we propose to use a high-resolution numerical model of the Western Pacific Solomon Sea as a regional laboratory to explore such observability and dynamical issues, as well as new data assimilation challenges raised by SWOT. The Solomon Sea connects subtropical water masses to the equatorial ones through the low latitude western boundary currents and could potentially modulate the tropical Pacific climate. In the South Western Pacific, the Solomon Sea exhibits very intense eddy kinetic energy levels, while relatively little is known about the mesoscale and submesoscale activities in this region. The complex bathymetry of the region, complicated by the presence of narrow straits and numerous islands, raises specific challenges. So far, a Solomon Sea model configuration has been set up at 1/36° resolution. Numerical simulations have been performed to explore the meso- and submesoscale dynamics. The numerical solutions, which have been validated against available in situ data, show the development of small scale features, eddies, fronts and filaments. Spectral analysis reveals a behavior that is consistent with the SQG theory. There is clear evidence of energy cascade from the small scales including the submesoscales, although those submesoscales are only partially resolved by the model. 
In parallel, investigations have been conducted using image assimilation approaches in order to explore the richness of high-resolution altimetry missions. These investigations illustrate the potential benefit of combining tracer fields (SST, SSS and spiciness) with high-resolution SWOT data to estimate the fine-scale circulation.

  15. Foundations of a laser-accelerated plasma diagnostics and beam stabilization with miniaturized Rogowski coils

    NASA Astrophysics Data System (ADS)

    Gruenwald, J.; Kocoń, D.; Khikhlukha, D.

    2018-03-01

    In order to introduce spatially resolved measurements of the plasma density in a plasma accelerated by a laser, a novel concept is proposed in this work. We suggest the use of an array of miniaturized Rogowski coils to measure the current contributions parallel to the laser beam with a spatial resolution in the sub-mm range. The principle of the experimental setup will be shown in 3-D CAD models. The coils are coaxial to the plasma channel (e.g. a hydrogen-filled capillary, which is frequently used in laser-plasma acceleration experiments). This plasma diagnostic method is simple and robust, and it is a passive measurement technique which does not disturb the plasma itself. Because such coils rely on Biot-Savart (air-core) inductance, they make it possible to separate the contributions of currents parallel and perpendicular to the laser beam. Rogowski coils do not have a ferromagnetic core; hence, non-linear effects resulting from such a core can be neglected, which increases the reliability of the obtained data. They also allow the diagnosis of transient signals that carry high currents (up to several hundred kA) on very short timescales. Within this paper some predictions about the time resolution of such coils will be presented along with simple theoretical considerations.
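The passive measurement principle is just Faraday's law for a coil with mutual inductance M: V = -M dI/dt. A numerical sketch with assumed values (the inductance and the current ramp are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical parameters illustrating the air-core (Biot-Savart)
# response of a miniaturized Rogowski coil to a fast current transient.
M = 2e-9                         # mutual inductance [H] (assumed)
t = np.linspace(0, 1e-9, 1001)   # 1 ns observation window
I = 1e5 * t / t[-1]              # current ramps linearly to 100 kA
v = -M * np.gradient(I, t)       # induced voltage, V = -M dI/dt
```

For this linear ramp dI/dt = 1e14 A/s, so the coil output is a constant -M dI/dt throughout the ramp; in practice the bandwidth of the coil and readout sets the achievable time resolution.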

  16. Performance-scalable volumetric data classification for online industrial inspection

    NASA Astrophysics Data System (ADS)

    Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.

    2002-03-01

    Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders-of-magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.
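The first stage of a Hough-style detection can be sketched for the simpler case of a circle of known radius: each edge point votes for candidate centers on a circle around it, and the accumulator peak recovers the center. This is a simplified stand-in for the two-stage ellipse transform described above, not the authors' implementation:

```python
import numpy as np

def hough_center(points, radius, shape):
    """Hough voting stage: every edge point votes for candidate centers
    at the given radius; the accumulator maximum is the detected center."""
    acc = np.zeros(shape, dtype=int)
    theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    for (x, y) in points:
        cx = np.rint(x - radius * np.cos(theta)).astype(int)
        cy = np.rint(y - radius * np.sin(theta)).astype(int)
        ok = (cx >= 0) & (cx < shape[0]) & (cy >= 0) & (cy < shape[1])
        np.add.at(acc, (cx[ok], cy[ok]), 1)
    return np.unravel_index(np.argmax(acc), acc.shape)

# Synthetic edge points of a circle centered at (40, 25) with radius 10
ang = np.linspace(0, 2 * np.pi, 72, endpoint=False)
pts = np.c_[40 + 10 * np.cos(ang), 25 + 10 * np.sin(ang)]
```

The voting loops over edge points are independent, which is exactly the data parallelism the paper exploits for performance scalability.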

  17. A ring transducer system for medical ultrasound research.

    PubMed

    Waag, Robert C; Fedewa, Russell J

    2006-10-01

    An ultrasonic ring transducer system has been developed for experimental studies of scattering and imaging. The transducer consists of 2048 rectangular elements with a 2.5-MHz center frequency, a 67% -6 dB bandwidth, and a 0.23-mm pitch arranged in a 150-mm-diameter ring with a 25-mm elevation. At the center frequency, the element size is 0.30λ × 42λ and the pitch is 0.38λ. The system has 128 parallel transmit channels, 16 parallel receive channels, a 2048:128 transmit multiplexer, a 2048:16 receive multiplexer, independently programmable transmit waveforms with 8-bit resolution, and receive amplifiers with time-variable gain independently programmable over a 40-dB range. Receive signals are sampled at 20 MHz with 12-bit resolution. Arbitrary transmit and receive apertures can be synthesized. Calibration software minimizes system nonidealities caused by noncircularity of the ring and element-to-element response differences. Application software enables the system to be used by specification of high-level parameters in control files from which low-level hardware-dependent parameters are derived by specialized code. Use of the system is illustrated by producing focused and steered beams, synthesizing a spatially limited plane wave, measuring angular scattering, and forming B-scan images.
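Synthesizing a focused transmit beam on such a ring array reduces to computing per-element delays so that all wavefronts arrive at the focal point simultaneously. A sketch using the stated geometry (150-mm ring, 2048 elements); the sound speed and focal point are assumptions for illustration:

```python
import numpy as np

def focus_delays(n_elem=2048, ring_d=0.150, focus=(0.02, 0.0), c=1540.0):
    """Transmit delays [s] focusing a ring array at a point: the farthest
    element fires first, so all wavefronts arrive at the focus together."""
    ang = 2 * np.pi * np.arange(n_elem) / n_elem
    ex, ey = (ring_d / 2) * np.cos(ang), (ring_d / 2) * np.sin(ang)
    dist = np.hypot(ex - focus[0], ey - focus[1])
    return (dist.max() - dist) / c   # non-negative delays

tau = focus_delays()
```

The element closest to the focus receives the largest delay; steering simply moves the focal point, and restricting the computation to a subset of elements synthesizes an arbitrary aperture.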

  18. Modeling of the energy resolution of a 1 meter and a 3 meter time of flight positron annihilation induced Auger electron spectrometers

    NASA Astrophysics Data System (ADS)

    Fairchild, A.; Chirayath, V.; Gladen, R.; McDonald, A.; Lim, Z.; Chrysler, M.; Koymen, A.; Weiss, A.

    Simion 8.1® simulations were used to determine the energy resolution of a 1 meter long Time of Flight Positron annihilation induced Auger Electron Spectrometer (TOF-PAES). The spectrometer consists of: 1. a magnetic gradient section used to parallelize the electrons leaving the sample along the beam axis, 2. an electric-field-free time of flight tube and 3. a detection section with a set of ExB plates that deflect electrons exiting the TOF tube into a Micro-Channel Plate (MCP). Simulations of the time of flight distribution of electrons emitted according to a known secondary electron emission distribution, for various sample biases, were compared to experimental energy calibration peaks and found to be in excellent agreement. The TOF spectrum at the highest sample bias was used to determine the timing resolution function describing the timing spread due to the electronics. Simulations were then performed to calculate the energy resolution at various electron energies in order to deconvolute the combined influence of the magnetic field parallelizer, the timing resolution, and the voltage gradient at the ExB plates. The energy resolution of the 1 m TOF-PAES was compared to a newly constructed 3 meter long system. The results were used to optimize the geometry and the potentials of the ExB plates for obtaining the best energy resolution. This work was supported by NSF Grant Nos. DMR 1508719 and DMR 1338130.
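The basic time-of-flight to energy relation behind such a spectrometer, and the first-order propagation of a timing spread into an energy spread (E = ½m(L/t)², hence |ΔE/E| = 2Δt/t), can be sketched as follows; the drift length and timing spread values are illustrative:

```python
M_E = 9.109e-31   # electron mass [kg]
EV = 1.602e-19    # J per eV

def tof_energy(t, L=1.0):
    """Electron kinetic energy [eV] from flight time t [s] over an
    L-metre field-free drift tube: E = m v^2 / 2 with v = L / t."""
    return 0.5 * M_E * (L / t) ** 2 / EV

def energy_spread(t, dt, L=1.0):
    """First-order energy resolution from a timing spread dt:
    |dE/E| = 2 dt / t, so dE = 2 E dt / t."""
    return 2 * tof_energy(t, L) * dt / t
```

The 1/t² dependence is why the energy resolution degrades at higher electron energies (shorter flight times) and improves for the longer 3 m system, which triples t for the same energy.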

  19. Specification and Analysis of Parallel Machine Architecture

    DTIC Science & Technology

    1990-03-17

    Parallel Machine Architecture C.V. Ramamoorthy Computer Science Division Dept. of Electrical Engineering and Computer Science University of California...capacity. (4) Adaptive: The overhead in resolution of deadlocks, etc. should be in proportion to their frequency. (5) Avoid rollbacks: Rollbacks can be...snapshots of system state graphically at a rate proportional to simulation time. Some of the examples are as follows: (1) When the simulation clock of

  20. Note: Fully integrated time-to-amplitude converter in Si-Ge technology.

    PubMed

    Crotti, M; Rech, I; Ghioni, M

    2010-10-01

    Over the past years, interest has steadily grown in the measurement technique of time-correlated single photon counting (TCSPC), since it allows the analysis of extremely fast and weak light waveforms with picosecond resolution. Consequently, many applications exploiting TCSPC have been developed in several fields such as medicine and chemistry. Moreover, the development of multianode PMTs and of single photon avalanche diode arrays led to the realization of acquisition systems with several parallel channels, bringing the TCSPC technique to even more applications. Since TCSPC basically consists of the measurement of the arrival time of a photon, the most important part of an acquisition chain is the time measurement block, which must have high resolution and low differential nonlinearity, and, in order to realize multidimensional systems, it has to be integrated to reduce both cost and area. In this paper we present a fully integrated time-to-amplitude converter, built in 0.35 μm Si-Ge technology, characterized by a good time resolution (60 ps), low differential nonlinearity (better than 3% peak to peak), high counting rate (16 MHz), low and constant power dissipation (40 mW), and low area occupation (1.38×1.28 mm²).
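TCSPC itself reduces to histogramming photon arrival times relative to the excitation pulse. A toy sketch with a synthetic mono-exponential decay; the 60 ps bin width mirrors the converter's quoted resolution, while the lifetime and photon count are illustrative:

```python
import numpy as np

def tcspc_histogram(arrival_times_ps, bin_ps=60, n_bins=256):
    """Accumulate photon arrival times (ps, relative to the excitation
    pulse) into a TCSPC histogram; the bin width corresponds to the
    converter's time resolution."""
    edges = np.arange(n_bins + 1) * bin_ps
    hist, _ = np.histogram(arrival_times_ps, bins=edges)
    return hist

rng = np.random.default_rng(1)
# Toy mono-exponential fluorescence decay: 3 ns lifetime, 100k photons
times = rng.exponential(3000.0, size=100_000)
h = tcspc_histogram(times)
```

A parallel multichannel system runs one such accumulation per detector channel, which is why integrated, low-area converters matter for multidimensional TCSPC.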

  1. FWT2D: A massively parallel program for frequency-domain full-waveform tomography of wide-aperture seismic data—Part 1: Algorithm

    NASA Astrophysics Data System (ADS)

    Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves

    2009-03-01

    This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the resolution of a large sparse system of linear equations which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, the latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the following companion paper. The parallel efficiency and the scalability of the code will also be quantified.
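The multiple-shot efficiency rests on factorizing the impedance matrix once and reusing the factors for every source. A toy 1D Helmholtz sketch: here np.linalg.solve with a multi-column right-hand side stands in for the MUMPS LU factorization and forward/backward substitutions; the grid size, wavenumber and source positions are illustrative.

```python
import numpy as np

# 1D Helmholtz operator (d^2/dx^2 + k^2) discretized by second-order
# finite differences; one matrix, many shots.
n, h, k = 200, 1.0, 0.1
main = np.full(n, -2.0 / h**2 + k**2)
A = (np.diag(main)
     + np.diag(np.ones(n - 1) / h**2, 1)
     + np.diag(np.ones(n - 1) / h**2, -1))

shots = np.zeros((n, 8))
shots[np.arange(10, 90, 10), np.arange(8)] = 1.0   # 8 point sources
wavefields = np.linalg.solve(A, shots)             # all shots in one solve
```

For the real 2D sparse problem a direct solver keeps the LU factors distributed across processors, so each extra shot costs only a forward/backward substitution, not a new factorization.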

  2. Performance Improvements of the CYCOFOS Flow Model

    NASA Astrophysics Data System (ADS)

    Radhakrishnan, Hari; Moulitsas, Irene; Syrakos, Alexandros; Zodiatis, George; Nikolaides, Andreas; Hayes, Daniel; Georgiou, Georgios C.

    2013-04-01

    The CYCOFOS-Cyprus Coastal Ocean Forecasting and Observing System has been operational since early 2002, providing daily sea current, temperature, salinity and sea level forecasting data for the next 4 and 10 days to end-users in the Levantine Basin, necessary for operational applications in marine safety, particularly concerning predictions of oil spills and floating objects. The CYCOFOS flow model, similar to most of the coastal and sub-regional operational hydrodynamic forecasting systems of the MONGOOS-Mediterranean Oceanographic Network for Global Ocean Observing System, is based on the POM-Princeton Ocean Model. CYCOFOS is nested with the MyOcean Mediterranean regional forecasting data and with SKIRON and ECMWF for surface forcing. The increasing demand for ever higher resolution data to meet coastal and offshore downstream applications motivated the parallelization of the CYCOFOS POM model. This development was carried out in the frame of the IPcycofos project, funded by the Cyprus Research Promotion Foundation. Parallel processing provides a viable solution to satisfy these demands without sacrificing accuracy or omitting any physical phenomena. Prior to the IPcycofos project, there had been several attempts to parallelize POM, for example MP-POM. These existing parallel codes rely on specific outdated hardware architectures and associated software. The objective of the IPcycofos project is to produce an operational parallel version of the CYCOFOS POM code that can replicate the results of the serial version of the POM code used in CYCOFOS. The parallelization of the CYCOFOS POM model uses the Message Passing Interface (MPI), implemented on commodity computing clusters running open source software and not depending on any specialized vendor hardware. The parallel CYCOFOS POM code is constructed in a modular fashion, allowing a fast re-locatable downscaled implementation. 
The MPI implementation takes advantage of the Cartesian nature of the POM mesh and uses the built-in functionality of MPI routines to split the mesh, using a weighting scheme, along longitude and latitude among the processors. Each processor works on its part of the model based on domain decomposition techniques. The new parallel CYCOFOS POM code has been benchmarked against the serial POM version of CYCOFOS for speed, accuracy, and resolution, and the results are more than satisfactory. Even with a higher resolution CYCOFOS Levantine model domain, the forecasts need much less time than the coarser serial CYCOFOS POM version, both with identical accuracy.
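The weighted longitude/latitude split can be sketched as a 1D proportional partition of mesh columns among ranks. The rounding policy here (the last rank absorbs the remainder) is an assumption, not the scheme documented in the paper:

```python
def split_weighted(n, weights):
    """Split n mesh columns among processors in proportion to the given
    weights, returning half-open (start, end) index ranges per rank."""
    total = sum(weights)
    bounds, start = [], 0
    for i, w in enumerate(weights):
        # last rank absorbs any rounding remainder
        end = n if i == len(weights) - 1 else start + round(n * w / total)
        bounds.append((start, end))
        start = end
    return bounds

# 100 mesh columns over three ranks with relative speeds 1:1:2
split_weighted(100, [1, 1, 2])
```

Applied along both longitude and latitude, this yields the 2D Cartesian block decomposition on which each rank time-steps its subdomain, exchanging halo rows with its neighbours via MPI.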

  3. Interactive Correlation Analysis and Visualization of Climate Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Kwan-Liu

    The relationship between our ability to analyze and extract insights from visualization of climate model output and the capability of the available resources to make those visualizations has reached a crisis point. The large volume of data currently produced by climate models is overwhelming the current, decades-old visualization workflow. The traditional methods for visualizing climate output also have not kept pace with changes in the types of grids used, the number of variables involved, and the number of different simulations performed with a climate model or the feature-richness of high-resolution simulations. This project has developed new and faster methods for visualization in order to get the most knowledge out of the new generation of high-resolution climate models. While traditional climate images will continue to be useful, there is need for new approaches to visualization and analysis of climate data if we are to gain all the insights available in ultra-large data sets produced by high-resolution model output and ensemble integrations of climate models such as those produced for the Coupled Model Intercomparison Project. Towards that end, we have developed new visualization techniques for performing correlation analysis. We have also introduced highly scalable, parallel rendering methods for visualizing large-scale 3D data. This project was done jointly with climate scientists and visualization researchers at Argonne National Laboratory and NCAR.
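The core of such correlation analysis is a per-grid-point Pearson correlation between two fields over time. A minimal numpy sketch with synthetic data (the field shapes and coupling are illustrative, not from the project):

```python
import numpy as np

def pointwise_corr(a, b):
    """Pearson correlation between two (time, y, x) fields, computed
    independently at every grid point; the resulting 2D map is what
    interactive correlation views are built on."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    cov = (a * b).mean(axis=0)
    return cov / (a.std(axis=0) * b.std(axis=0))

rng = np.random.default_rng(0)
x = rng.standard_normal((120, 4, 5))                 # e.g. monthly field A
y = 2 * x + 0.1 * rng.standard_normal((120, 4, 5))   # strongly coupled field B
r = pointwise_corr(x, y)
```

Because every grid point is independent, the computation parallelizes trivially, the same property the project exploits for scalable rendering and analysis.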

  4. Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Duffy, C.

    2006-05-01

    Distributed models simulate hydrologic state variables in space and time while taking into account the heterogeneities in terrain, surface, subsurface properties and meteorological forcings. The computational cost and complexity associated with these models increase with their tendency to accurately simulate the large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes needs less computation, but this negatively affects the accuracy of model results and restricts physical realization of the problem. So it is imperative to have an integrated modeling strategy (a) which can be universally applied at various scales in order to study the tradeoffs between computational complexity (determined by spatio-temporal resolution), accuracy and predictive uncertainty in relation to various approximations of physical processes, (b) which can be applied at adaptively different spatial scales in the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables, and (c) which is flexible enough to incorporate different numbers and approximations of process equations depending on model purpose and computational constraints. An efficient implementation of this strategy becomes all the more important for the Great Salt Lake river basin, which is relatively large (~89,000 sq. km) and complex in terms of hydrologic and geomorphic conditions. Also, the types and the time scales of hydrologic processes which are dominant in different parts of the basin are different. Part of the snowmelt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradient along the Wasatch front. 
Here we present the aforesaid modeling strategy along with an associated hydrologic modeling framework which facilitates a seamless, computationally efficient and accurate integration of the process model with the data model. The flexibility of this framework leads to implementation of multiscale, multiresolution, adaptive refinement/de-refinement and nested modeling simulations with the least computational burden. However, performing these simulations and the related calibration of these models over a large basin at higher spatio-temporal resolutions is computationally intensive and requires increasing computing power. With the advent of parallel processing architectures, high computing performance can be achieved by parallelization of the existing serial integrated-hydrologic-model code. This translates to running the same model simulation on a network of a large number of processors, thereby reducing the time needed to obtain a solution. The paper also discusses the implementation of the integrated model on parallel processors, the mapping of the problem onto a multi-processor environment, methods to incorporate coupling between hydrologic processes using interprocessor communication models, model data structures, and parallel numerical algorithms to obtain high performance.

  5. Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure

    NASA Astrophysics Data System (ADS)

    Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S.

    2014-08-01

    Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in neutron star crust, are all characterized by large sizes and complex topologies in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is an adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory. Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions. Methods: The numerical method is based on the multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. The application of state-of-the-art parallel programming techniques includes sophisticated object-oriented templates which parse the high-level code into distributed parallel tasks with a multi-thread task queue scheduler for each multi-core node. The internode communications are asynchronous. The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed. 
Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver hfodd that is based on the harmonic-oscillator basis expansion. Several examples are considered, including the self-consistent HFB problem for spin-polarized trapped cold fermions and the Skyrme-Hartree-Fock (+BCS) problem for triaxial deformed nuclei. Conclusions: The new madness-hfb framework has many attractive features when applied to nuclear and atomic problems involving many-particle superfluid systems. Of particular interest are weakly bound nuclear configurations close to particle drip lines, strongly elongated and dinuclear configurations such as those present in fission and heavy-ion fusion, and exotic pasta phases that appear in neutron star crust.
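The multi-wavelet machinery used in madness-hfb is far richer than this, but the multiresolution idea can be illustrated with a plain Haar decomposition: a signal is repeatedly split into coarse averages and detail coefficients, and adaptive codes keep only the details exceeding a precision threshold.

```python
import numpy as np

def haar_decompose(f, levels):
    """Plain Haar multiresolution analysis of a 1D signal: at each level,
    split into coarse averages and detail coefficients. Smooth regions
    produce near-zero details, which adaptive schemes can discard while
    guaranteeing a user-defined precision."""
    details = []
    for _ in range(levels):
        coarse = (f[0::2] + f[1::2]) / np.sqrt(2)
        details.append((f[0::2] - f[1::2]) / np.sqrt(2))
        f = coarse
    return f, details

sig = np.ones(64)                    # perfectly smooth (constant) signal
coarse, det = haar_decompose(sig, 3)
```

For this constant signal every detail coefficient vanishes, so an adaptive representation stores only the 8 coarse values: the same mechanism that lets adaptive solvers refine only where the wavefunctions vary rapidly.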

  6. Addressing capability computing challenges of high-resolution global climate modelling at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, Valentine; Norman, Matthew; Evans, Katherine; Taylor, Mark; Worley, Patrick; Hack, James; Mayer, Benjamin

    2014-05-01

    During 2013, high-resolution climate model simulations accounted for over 100 million "core hours" using Titan at the Oak Ridge Leadership Computing Facility (OLCF). The suite of climate modeling experiments, primarily using the Community Earth System Model (CESM) at nearly 0.25 degree horizontal resolution, generated over a petabyte of data and nearly 100,000 files, ranging in size from 20 MB to over 100 GB. Effective utilization of leadership-class resources requires careful planning and preparation. The application software, such as CESM, needs to be ported, optimized and benchmarked for the target platform in order to meet the computational readiness requirements. The model configuration needs to be "tuned and balanced" for the experiments. This can be a complicated and resource-intensive process, especially for high-resolution configurations using complex physics. The volume of I/O also increases with resolution, and new strategies may be required to manage I/O, especially for large checkpoint and restart files that may require more frequent output for resiliency. It is also essential to monitor the application performance during the course of the simulation exercises. Finally, the large volume of data needs to be analyzed to derive the scientific results, and appropriate data and information delivered to the stakeholders. Titan is currently the largest supercomputer available for open science. The computational resources, in terms of "titan core hours", are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) and ASCR Leadership Computing Challenge (ALCC) programs, both sponsored by the U.S. Department of Energy (DOE) Office of Science. 
Titan is a Cray XK7 system with a theoretical peak performance of over 27 PFlop/s. It consists of 18,688 compute nodes, each with an NVIDIA Kepler K20 GPU and a 16-core AMD Opteron CPU, for a total of 299,008 Opteron cores and 18,688 GPUs, offering a cumulative 560,640 equivalent cores. Scientific applications such as CESM are also required to demonstrate a "computational readiness capability" to efficiently scale across and utilize 20% of the entire system. The 0.25-degree configuration of the spectral element dynamical core of the Community Atmosphere Model (CAM-SE), the atmospheric component of CESM, has been demonstrated to scale efficiently across more than 5,000 nodes (80,000 CPU cores) on Titan. The tracer transport routines of CAM-SE have also been ported to take advantage of the hybrid many-core architecture of Titan using GPUs [see EGU2014-4233], yielding over 2X speedup when transporting over 100 tracers. The high-throughput I/O in CESM, based on the Parallel IO Library (PIO), is being further augmented to support even higher resolutions and enhance resiliency. The application performance of the individual runs is archived in a database and routinely analyzed to identify and rectify performance degradation during the course of the experiments. The various resources available at the OLCF now support a scientific workflow to facilitate high-resolution climate modelling. A high-speed, center-wide parallel file system called ATLAS, capable of 1 TB/s, is available on Titan as well as on the clusters used for analysis (Rhea) and visualization (Lens/EVEREST). Long-term archiving is facilitated by the HPSS storage system. The Earth System Grid (ESG), featuring search & discovery, is also used to deliver data. The end-to-end workflow allows OLCF users to efficiently share data and publish results in a timely manner.
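The scaling figures quoted above (CAM-SE across 80,000 CPU cores, over 2X GPU speedup for tracer transport) can be put in context with a back-of-the-envelope strong-scaling estimate. The sketch below is illustrative only and not part of the OLCF benchmarks; the serial fractions are assumed values.

```python
def amdahl_speedup(serial_fraction, n_workers):
    """Ideal strong-scaling speedup when a fixed fraction of the
    runtime cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

def parallel_efficiency(serial_fraction, n_workers):
    """Achieved speedup as a fraction of ideal linear speedup."""
    return amdahl_speedup(serial_fraction, n_workers) / n_workers

# Illustrative numbers only: even a 1% serial fraction caps speedup
# near 100x, far below the 80,000 cores cited for CAM-SE, which is
# why capability-class codes must drive the serial fraction toward zero.
cap = amdahl_speedup(0.01, 80_000)
```

The exercise shows why demonstrating "computational readiness" at 20% of the machine is a nontrivial bar: any residual serial work or I/O bottleneck dominates at this scale.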

  7. Metabarcoding for the parallel identification of several hundred predators and their prey: Application to bat species diet analysis.

    PubMed

    Galan, Maxime; Pons, Jean-Baptiste; Tournayre, Orianne; Pierre, Éric; Leuchtmann, Maxime; Pontier, Dominique; Charbonnel, Nathalie

    2018-05-01

Assessing diet variability is of major importance for better understanding the biology of bats and designing conservation strategies. Although the advent of metabarcoding has facilitated such analyses, this approach does not come without challenges. Biases may occur throughout the whole experiment, from fieldwork to biostatistics, resulting in the detection of false negatives, false positives or low taxonomic resolution. We detail a rigorous metabarcoding approach based on a short COI minibarcode and a two-step PCR protocol enabling the "all at once" taxonomic identification of bats and their arthropod prey for several hundred samples. Our study includes faecal pellets collected in France from 357 bats representing 16 species, insect mock communities that mimic bat meals of known composition, and negative and positive controls. All samples were analysed using three replicates. We compare the efficiency of DNA extraction methods, and we evaluate the effectiveness of our protocol using identification success, taxonomic resolution, sensitivity and amplification biases. Our parallel identification strategy for predators and prey reduces the risk of assigning prey to the wrong predators and decreases the number of molecular steps. Controls and replicates enable us to filter the data and limit the risk of false positives, guaranteeing high-confidence results for both prey occurrence and bat species identification. We validated 551 arthropod COI variants spanning 18 orders, 117 families, 282 genera and 290 species. Our method therefore provides a rapid, high-resolution and cost-effective screening tool for addressing evolutionary ecological issues or developing "chirosurveillance" and conservation strategies. © 2017 John Wiley & Sons Ltd.
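The replicate-based filtering described above reduces to a simple rule: accept a sequence variant only when enough of the three PCR/sequencing replicates support it with sufficient reads. The sketch below is a hedged illustration of that idea, not the authors' pipeline; the read-count threshold and the dict-of-read-counts layout are assumptions.

```python
def filter_variants(replicates, min_reps=2, min_reads=10):
    """Keep a sequence variant only if it reaches min_reads reads in at
    least min_reps of the replicates -- replicate-based filtering of
    the kind used to limit metabarcoding false positives.
    `replicates` is a list of {variant: read_count} dicts (assumed layout)."""
    support = {}
    for rep in replicates:
        for variant, reads in rep.items():
            if reads >= min_reads:
                support[variant] = support.get(variant, 0) + 1
    return {v for v, n in support.items() if n >= min_reps}

# Hypothetical replicates: 'A' is well supported, 'B' and 'C' are not.
reps = [{"A": 50, "B": 3}, {"A": 40, "B": 12}, {"A": 60, "C": 11}]
kept = filter_variants(reps)
```

Variants seen strongly in only one replicate (plausible tag-jumping or contamination) are dropped, which is the trade-off such filters make between sensitivity and false-positive control.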

  8. Hierarchically Ordered Nanopatterns for Spatial Control of Biomolecules

    PubMed Central

    2015-01-01

    The development and study of a benchtop, high-throughput, and inexpensive fabrication strategy to obtain hierarchical patterns of biomolecules with sub-50 nm resolution is presented. A diblock copolymer of polystyrene-b-poly(ethylene oxide), PS-b-PEO, is synthesized with biotin capping the PEO block and 4-bromostyrene copolymerized within the polystyrene block at 5 wt %. These two handles allow thin films of the block copolymer to be postfunctionalized with biotinylated biomolecules of interest and to obtain micropatterns of nanoscale-ordered films via photolithography. The design of this single polymer further allows access to two distinct superficial nanopatterns (lines and dots), where the PEO cylinders are oriented parallel or perpendicular to the substrate. Moreover, we present a strategy to obtain hierarchical mixed morphologies: a thin-film coating of cylinders both parallel and perpendicular to the substrate can be obtained by tuning the solvent annealing and irradiation conditions. PMID:25363506

  9. Discovery of Grooves on Gaspra

    USGS Publications Warehouse

    Veverka, J.; Thomas, P.; Simonelli, D.; Belton, M.J.S.; Carr, M.; Chapman, C.; Davies, M.E.; Greeley, R.; Greenberg, R.; Head, J.; Klaasen, K.; Johnson, T.V.; Morrison, D.; Neukum, G.

    1994-01-01

We report the discovery of grooves in Galileo high-resolution images of Gaspra. These features, previously seen only on Mars' satellite Phobos, are most likely related to severe impacts. Grooves on Gaspra occur as linear and pitted depressions, typically 100-200 m wide, 0.8 to 2.5 km long, and 10-20 m deep. Most occur in two major groups, one of which trends approximately parallel to the asteroid's long axis, but is offset by some 15°; the other is approximately perpendicular to this trend. The first of these directions falls along a family of planes which parallel three extensive flat facets identified by Thomas et al., Icarus 107. The occurrence of grooves on Gaspra is consistent with other indications (irregular shape, cratering record) that this asteroid has evolved through a violent collisional history. The bodywide congruence of major groove directions and other structural elements suggests that present-day Gaspra is a globally coherent body. © 1994 Academic Press. All rights reserved.

  10. Cis-regulatory changes in Kit ligand expression and parallel evolution of pigmentation in sticklebacks and humans

    PubMed Central

    Miller, Craig T.; Beleza, Sandra; Pollen, Alex A.; Schluter, Dolph; Kittles, Rick A.; Shriver, Mark D.; Kingsley, David M.

    2010-01-01

Dramatic pigmentation changes have evolved within most vertebrate groups, including fish and humans. Here we use genetic crosses in sticklebacks to investigate the parallel origin of pigmentation changes in natural populations. High-resolution mapping and expression experiments show that light gills and light ventrums map to a divergent regulatory allele of the Kit ligand (Kitlg) gene. The divergent allele reduces expression in gill and skin tissue, and is shared by multiple derived freshwater populations with reduced pigmentation. In humans, Europeans and East Asians also share derived alleles at the KITLG locus. Strong signatures of selection map to regulatory regions surrounding the gene, and admixture mapping shows that the KITLG genomic region has a significant effect on human skin color. These experiments suggest that regulatory changes in Kitlg contribute to natural variation in vertebrate pigmentation, and that similar genetic mechanisms may underlie rapid evolutionary change in fish and humans. PMID:18083106

  11. NIRcam-NIRSpec GTO Observations of Galaxy Evolution

    NASA Astrophysics Data System (ADS)

    Rieke, Marcia J.; Ferruit, Pierre; Alberts, Stacey; Bunker, Andrew; Charlot, Stephane; Chevallard, Jacopo; Dressler, Alan; Egami, Eiichi; Eisenstein, Daniel; Endsley, Ryan; Franx, Marijn; Frye, Brenda L.; Hainline, Kevin; Jakobsen, Peter; Lake, Emma Curtis; Maiolino, Roberto; Rix, Hans-Walter; Robertson, Brant; Stark, Daniel; Williams, Christina; Willmer, Christopher; Willott, Chris J.

    2017-06-01

The NIRSpec and NIRCam GTO Teams are planning a joint imaging and spectroscopic study of the high-redshift universe. By virtue of planning a joint program which includes medium and deep near- and mid-infrared imaging surveys and multi-object spectroscopy (MOS) of sources in the same fields, we have learned much about planning observing programs for each of the instruments and using them in parallel mode to maximize photon collection time. The design and rationale for our joint program will be explored in this talk, with an emphasis on why we have chosen particular suites of filters and spectroscopic resolutions, why we have chosen particular exposure patterns, and how we have designed the parallel observations. The actual observations that we intend to execute will serve as examples of how to lay out mosaics and MOS observations to maximize observing efficiency for surveys with JWST.

  12. Hierarchically Ordered Nanopatterns for Spatial Control of Biomolecules

    DOE PAGES

    Tran, Helen; Ronaldson, Kacey; Bailey, Nevette A.; ...

    2014-11-04

We present the development and study of a benchtop, high-throughput, and inexpensive fabrication strategy to obtain hierarchical patterns of biomolecules with sub-50 nm resolution. A diblock copolymer of polystyrene-b-poly(ethylene oxide), PS-b-PEO, is synthesized with biotin capping the PEO block and 4-bromostyrene copolymerized within the polystyrene block at 5 wt %. These two handles allow thin films of the block copolymer to be postfunctionalized with biotinylated biomolecules of interest and to obtain micropatterns of nanoscale-ordered films via photolithography. The design of this single polymer further allows access to two distinct superficial nanopatterns (lines and dots), where the PEO cylinders are oriented parallel or perpendicular to the substrate. Moreover, we present a strategy to obtain hierarchical mixed morphologies: a thin-film coating of cylinders both parallel and perpendicular to the substrate can be obtained by tuning the solvent annealing and irradiation conditions.

  13. More IMPATIENT: A Gridding-Accelerated Toeplitz-based Strategy for Non-Cartesian High-Resolution 3D MRI on GPUs

    PubMed Central

    Gai, Jiading; Obeid, Nady; Holtrop, Joseph L.; Wu, Xiao-Long; Lam, Fan; Fu, Maojing; Haldar, Justin P.; Hwu, Wen-mei W.; Liang, Zhi-Pei; Sutton, Bradley P.

    2013-01-01

Several recent methods have been proposed to obtain significant speed-ups in MRI image reconstruction by leveraging the computational power of GPUs. Previously, we implemented a GPU-based image reconstruction technique called the Illinois Massively Parallel Acquisition Toolkit for Image reconstruction with ENhanced Throughput in MRI (IMPATIENT MRI) for reconstructing data collected along arbitrary 3D trajectories. In this paper, we improve IMPATIENT by removing computational bottlenecks, using a gridding approach to accelerate the computation of various data structures needed by the previous routine. Further, we enhance the routine with capabilities for off-resonance correction and multi-sensor parallel imaging reconstruction. Through implementation of optimized gridding in our iterative reconstruction scheme, speed-ups of more than a factor of 200 are obtained in the improved GPU implementation compared to the previous accelerated GPU code. PMID:23682203
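Gridding of the kind described above amounts to convolving non-Cartesian k-space samples onto a uniform grid, after which a standard FFT can be applied. The toy 1-D version below uses linear-interpolation weights instead of the Kaiser-Bessel kernel typically used in practice, and runs on the CPU in pure Python; it sketches the idea and is not the IMPATIENT code.

```python
def grid_1d(kx, data, n_grid):
    """Deposit non-Cartesian k-space sample values onto a uniform grid
    using linear-interpolation weights -- a toy stand-in for
    Kaiser-Bessel convolution gridding. kx values are assumed to lie
    in [-0.5, 0.5]."""
    grid = [0.0] * n_grid
    for k, d in zip(kx, data):
        pos = (k + 0.5) * (n_grid - 1)   # map k to a fractional grid index
        lo = min(int(pos), n_grid - 2)   # left neighbour cell
        w = pos - lo                      # fractional distance to right cell
        grid[lo] += d * (1.0 - w)
        grid[lo + 1] += d * w
    return grid
```

Because the deposit step is independent per sample (up to the accumulation), it parallelizes well on GPUs, which is where the factor-of-200 class speed-ups come from.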

  14. Efficient LIDAR Point Cloud Data Managing and Processing in a Hadoop-Based Distributed Framework

    NASA Astrophysics Data System (ADS)

    Wang, C.; Hu, F.; Sha, D.; Han, X.

    2017-10-01

Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering and other fields. However, it is challenging to efficiently store, query and analyze high-resolution 3D LiDAR data due to its volume and complexity. In order to improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, taking advantage of Hadoop's storage and computing ability. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce to run the LiDAR data analysis algorithms provided by PCL in a parallel fashion. The experimental results show that the proposed framework can efficiently manage and process big LiDAR data.
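The MapReduce pattern underlying such a framework maps each LiDAR return to a key (here, the spatial tile it falls in) and then reduces per key. The pure-Python sketch below imitates that flow in memory; the tile size and the per-tile statistics are illustrative assumptions, not the paper's configuration.

```python
from collections import defaultdict

TILE = 100.0  # tile edge length in metres (assumed for illustration)

def map_point(x, y, z):
    """Map step: key each LiDAR return by the tile it falls in."""
    return (int(x // TILE), int(y // TILE)), z

def reduce_tiles(mapped):
    """Reduce step: aggregate per-tile point count and mean elevation."""
    acc = defaultdict(lambda: [0, 0.0])
    for key, z in mapped:
        acc[key][0] += 1
        acc[key][1] += z
    return {key: (n, s / n) for key, (n, s) in acc.items()}

# Hypothetical (x, y, z) returns; real data would stream from HDFS.
points = [(12.0, 7.0, 3.5), (15.0, 9.0, 4.5), (250.0, 10.0, 8.0)]
tiles = reduce_tiles(map_point(*p) for p in points)
```

Because tiles are independent keys, Hadoop can shuffle them to different reducers, which is what makes the per-tile analysis embarrassingly parallel.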

  15. Investigating a method of producing "red and dead" galaxies

    NASA Astrophysics Data System (ADS)

    Skory, Stephen

    2010-08-01

In optical wavelengths, galaxies are observed to be either red or blue. The overall color of a galaxy is due to the distribution of the ages of its stellar population. Galaxies with currently active star formation appear blue, while those with no recent star formation at all (greater than about a Gyr) have only old, red stars. This strong bimodality has led to the idea of star formation quenching, and to various proposed physical mechanisms. In this dissertation, I attempt to reproduce with Enzo the results of Naab et al. (2007), in which red and dead galaxies are formed using gravitational quenching, rather than with one of the more typical methods of quenching. My initial attempts are unsuccessful, and I explore the reasons why I think they failed. Then, using simpler methods better suited to Enzo + AMR, I am successful in producing a galaxy that appears to be similar in color and formation history to those in Naab et al. However, quenching is achieved using unphysically high star formation efficiencies, which is a different mechanism than Naab et al. suggest. Preliminary results of a much higher resolution, follow-on simulation of the above show some possible contradiction with the results of Naab et al. Cold gas is streaming into the galaxy to fuel starbursts, while at a similar epoch the galaxies in Naab et al. have largely already ceased forming stars. On the other hand, the results of the high-resolution simulation are qualitatively similar to other works in the literature that show a somewhat different gravitational quenching mechanism than Naab et al. I also discuss my work using halo finders to analyze simulated cosmological data, and my work improving the Enzo/AMR analysis tool "yt". This includes two parallelizations of the halo finder HOP (Eisenstein and Hut, 1998), which allow analysis of very large cosmological datasets on parallel machines.
The first version is "yt-HOP," which works well for datasets between about 256³ and 512³ particles, but has memory bottlenecks as the datasets get larger. These bottlenecks inspired the second version, "Parallel HOP," which is a fully parallelized method and implementation of HOP that has worked on datasets with more than 2048³ particles on hundreds of processing cores. Both methods are described in detail, as are the various effects of performance-related runtime options. Additionally, both halo finders are subjected to a full suite of performance benchmarks varying both dataset sizes and computational resources used. I conclude with descriptions of four new tools I added to yt. A Parallel Structure Function Generator allows analysis of two-point functions, such as correlation functions, using memory- and workload-parallelism. A Parallel Merger Tree Generator leverages the parallel halo finders in yt, such as Parallel HOP, to build the merger tree of halos in a cosmological simulation, and outputs the result to a SQLite database for simple and powerful data extraction. A Star Particle Analysis toolkit takes a group of star particles and can output the rate of formation as a function of time, and/or a synthetic Spectral Energy Distribution (S.E.D.) using the Bruzual and Charlot (2003) data tables. Finally, a Halo Mass Function toolkit takes as input a list of halo masses and can output the halo mass function for the halos, as well as an analytical fit for those halos using several previously published fits.
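The memory-parallelism that distinguishes a fully parallel halo finder from a serial one rests on spatial domain decomposition with padded "ghost" boundary regions, so that halos straddling a cut are still seen whole by at least one subdomain. A toy 1-D version of that decomposition is sketched below; the unit box, padding convention, and x-only split are assumptions for illustration (the real code decomposes in 3-D).

```python
def decompose(particles, n_sub, pad):
    """Split the unit box [0, 1) along x into n_sub subvolumes, copying
    particles within pad of a cut into the neighbouring subdomain as
    ghosts, so groups straddling a boundary are found whole."""
    width = 1.0 / n_sub
    domains = [[] for _ in range(n_sub)]
    for x in particles:
        i = min(int(x / width), n_sub - 1)
        domains[i].append(x)
        # replicate into neighbours when inside the padded boundary region
        if i > 0 and x < i * width + pad:
            domains[i - 1].append(x)
        if i < n_sub - 1 and x > (i + 1) * width - pad:
            domains[i + 1].append(x)
    return domains
```

Each subdomain can then run the group finder independently; a final stitching pass reconciles groups found in the ghost zones.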

  16. High-resolution magnetic resonance angiography of the lower extremities with a dedicated 36-element matrix coil at 3 Tesla.

    PubMed

    Kramer, Harald; Michaely, Henrik J; Matschl, Volker; Schmitt, Peter; Reiser, Maximilian F; Schoenberg, Stefan O

    2007-06-01

Recent developments in hardware and software help to significantly increase the image quality of magnetic resonance angiography (MRA). Parallel acquisition techniques (PAT) help to increase spatial resolution and to decrease acquisition time, but suffer from a decrease in signal-to-noise ratio (SNR). The move to higher field strengths and the use of dedicated angiography coils can further increase spatial resolution while decreasing acquisition times at the same SNR as contemporary exams. The goal of our study was to compare the image quality of MRA datasets acquired with a standard matrix coil against MRA datasets acquired with a dedicated peripheral angiography matrix coil and higher parallel imaging factors. Before the first volunteer examination, unaccelerated phantom measurements were performed with the different coils. After institutional review board approval, 15 healthy volunteers underwent MRA of the lower extremity on a 32-channel 3.0 Tesla MR system. In 5 of them, MRA of the calves was performed with a PAT acceleration factor of 2 and a standard body-matrix surface coil placed at the legs. Ten volunteers underwent MRA of the calves with a dedicated 36-element angiography matrix coil: 5 with a PAT acceleration factor of 3 and 5 with a PAT acceleration factor of 4. The acquired volume and acquisition time were approximately the same in all examinations; only the spatial resolution was increased with the acceleration factor. The acquisition time per voxel was calculated. Image quality was rated independently by 2 readers in terms of vessel conspicuity, venous overlay, and occurrence of artifacts. Inter-reader agreement was calculated with kappa statistics. SNR and contrast-to-noise ratios from the different examinations were evaluated. All 15 volunteers completed the examination, and no adverse events occurred.
None of the examinations showed venous overlay; 70% of the examinations showed excellent vessel conspicuity, whereas artifacts occurred in 50% of the examinations. All of these artifacts were judged to be non-disturbing. Inter-reader agreement was good, with kappa values ranging between 0.65 and 0.74. SNR and contrast-to-noise ratios did not show significant differences. Implementation of a dedicated coil for peripheral MRA at 3.0 Tesla helps to increase spatial resolution and to decrease acquisition time while keeping image quality equal. Venous overlay can be effectively avoided despite the use of high-resolution scans.
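The SNR penalty the abstract refers to follows the standard SENSE-type relation: SNR falls by the coil geometry factor g and by the square root of the R-fold undersampling, which is why more coil elements (lower g, higher usable R) and higher field strength (more baseline signal) are needed to push acceleration from 2 to 4. A minimal sketch, with illustrative inputs rather than the study's measured values:

```python
import math

def pat_snr(snr_full, accel, g_factor):
    """SNR of a parallel-acquisition (PAT) scan relative to the fully
    sampled scan: SNR_PAT = SNR_full / (g * sqrt(R)), the standard
    SENSE-type relation. Inputs here are illustrative assumptions."""
    return snr_full / (g_factor * math.sqrt(accel))

# Going from R=2 to R=4 at fixed g costs a factor sqrt(2) in SNR,
# which a dedicated many-element coil (lower g) can partly recover.
snr_r2 = pat_snr(100.0, 2, 1.2)
snr_r4 = pat_snr(100.0, 4, 1.2)
```
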

  17. ICON-MIC: Implementing a CPU/MIC Collaboration Parallel Framework for ICON on Tianhe-2 Supercomputer.

    PubMed

    Wang, Zihao; Chen, Yu; Zhang, Jingrong; Li, Lun; Wan, Xiaohua; Liu, Zhiyong; Sun, Fei; Zhang, Fa

    2018-03-01

Electron tomography (ET) is an important technique for studying the three-dimensional structure of biological ultrastructure. Recently, ET has reached sub-nanometer resolution for investigating the native and conformational dynamics of macromolecular complexes when combined with the sub-tomogram averaging approach. Due to the limited sampling angles, ET reconstruction typically suffers from the "missing wedge" problem. Using a validation procedure, iterative compressed-sensing optimized nonuniform fast Fourier transform (NUFFT) reconstruction (ICON) demonstrates its power in restoring validated missing information for low-signal-to-noise-ratio biological ET datasets. However, the huge computational demand has become a bottleneck for the application of ICON. In this work, we implemented ICON-MIC, a parallel acceleration of ICON on many integrated core (MIC) Xeon Phi cards, to address the huge computational demand of ICON. In this step, we parallelize the element-wise matrix operations and use efficient matrix summation to reduce the cost of matrix computation. We also developed parallel versions of NUFFT on MIC to achieve high acceleration of ICON through more efficient fast Fourier transform (FFT) calculation. We then proposed a hybrid task allocation strategy (two-level load balancing) to improve the overall performance of ICON-MIC by making full use of the idle resources on the Tianhe-2 supercomputer. Experimental results using two different datasets show that ICON-MIC has high accuracy on biological specimens under different noise levels and a significant acceleration, up to 13.3×, compared with the CPU version. Further, ICON-MIC has good scalability efficiency and overall performance on the Tianhe-2 supercomputer.
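The first level of a hybrid load-balancing scheme like the one described can be sketched as a static split of tasks in proportion to measured CPU and coprocessor throughput, so that both sides finish at roughly the same time. This is a toy illustration, not the ICON-MIC implementation; the throughput rates are assumptions.

```python
def split_work(n_tasks, cpu_rate, mic_rate):
    """Static first-level split of n_tasks between host CPU and a
    coprocessor in proportion to their throughputs (tasks/second),
    so both finish at roughly the same time. Returns (n_cpu, n_mic)."""
    mic_share = mic_rate / (cpu_rate + mic_rate)
    n_mic = round(n_tasks * mic_share)
    return n_tasks - n_mic, n_mic

# Hypothetical rates: coprocessor processes slabs 3x faster than host.
n_cpu, n_mic = split_work(100, cpu_rate=1.0, mic_rate=3.0)
```

A second, dynamic level (work stealing or per-slab reassignment at run time) then corrects for the inevitable error in the static rate estimates.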

  18. The microstructure and formation of duplex and black plessite in iron meteorites

    NASA Technical Reports Server (NTRS)

    Zhang, J.; Williams, D. B.; Goldstein, J. I.

    1993-01-01

Two of the most common plessite structures, duplex and black plessite, in the taenite region of the Widmanstätten pattern of two iron meteorites (Grant and Carlton) are characterized using high-resolution electron microscopy and microanalysis techniques. Two types of gamma precipitates, found in the duplex plessite and black plessite regions, respectively, are identified, and their morphologies are described. The formation of the plessite structure is discussed using the information obtained in this study and the results of a parallel investigation of decomposed martensitic Fe-Ni laboratory alloys.

  19. Hierarchically Structured Non-Intrusive Sign Language Recognition. Chapter 2

    NASA Technical Reports Server (NTRS)

    Zieren, Jorg; Zieren, Jorg; Kraiss, Karl-Friedrich

    2007-01-01

This work presents a hierarchically structured approach to the non-intrusive recognition of sign language from a monocular frontal view. Robustness is achieved through sophisticated localization and tracking methods, including a combined EM/CAMSHIFT overlap resolution procedure and the parallel pursuit of multiple hypotheses about hand position and movement. This allows handling of ambiguities and automatically corrects tracking errors. A biomechanical skeleton model and dynamic motion prediction using Kalman filters represent high-level knowledge. Classification is performed by Hidden Markov Models. 152 signs from German Sign Language were recognized with an accuracy of 97.6%.

  20. Problems in abundance determination from UV spectra of hot supergiants

    NASA Astrophysics Data System (ADS)

    Deković, M. Sarta; Kotnik-Karuza, D.; Jurkić, T.; Dominis Prester, D.

    2010-03-01

We present measurements of equivalent widths of UV, presumably photospheric, lines: C III 1247 Å, N III 1748 Å, N III 1752 Å, N IV 1718 Å and He II 1640 Å in high-resolution IUE spectra of 24 galactic OB supergiants. Equivalent widths measured from the observed spectra have been compared with their counterparts in the Tlusty NLTE synthetic spectra. We discuss the ability of static plane-parallel models to reproduce the observed UV spectra of hot massive stars, and possible reasons why the observations differ so much from the models.

  1. Tubes of rhombohedral boron nitride

    NASA Astrophysics Data System (ADS)

    Bourgeois, L.; Bando, Y.; Sato, T.

    2000-08-01

The structure of boron nitride bamboo-like tubular whiskers grown from boron nitride powder is investigated by high-resolution transmission electron microscopy. Despite the relatively small size of the tubes (20-200 nm in diameter), they all exhibit rhombohedral-like ordering in their layer stacking. The tubular sheets also tend to have their [101̄0] direction parallel to the fibre axis. Particles of iron alloys are commonly found encapsulated inside or at the end of the filaments. It is suggested that iron plays an active role in the growth of the fibres.

  2. A study of optical scattering methods in laboratory plasma diagnosis

    NASA Technical Reports Server (NTRS)

    Phipps, C. R., Jr.

    1972-01-01

Electron velocity distributions are deduced along axes parallel and perpendicular to the magnetic field in a pulsed, linear Penning discharge in hydrogen by means of a laser Thomson scattering experiment. Results obtained are numerical averages of many individual measurements made at specific space-time points in the plasma evolution. Because of the high resolution in k-space and the relatively low maximum electron density (2 × 10¹³ cm⁻³), special techniques were required to obtain measurable scattering signals. These techniques are discussed and experimental results are presented.

  3. Adaptive multiple super fast simulated annealing for stochastic microstructure reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryu, Seun; Lin, Guang; Sun, Xin

    2013-01-01

Fast image reconstruction from statistical information is critical in image fusion from multimodality chemical imaging instrumentation to create high-resolution images over large domains. Stochastic methods have been used widely in image reconstruction from two-point correlation functions. The main challenge is to increase the efficiency of reconstruction. A novel simulated annealing method is proposed for fast image reconstruction. Combining the advantages of very fast cooling schedules, dynamic adaptation and parallelization, the new simulated annealing algorithm increases efficiency by several orders of magnitude, making large-domain image fusion feasible.
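The stochastic reconstruction family this builds on (Yeong-Torquato-style annealing) can be shown in miniature: swap unlike pixels of a binary image, preserving volume fraction, to drive its two-point correlation toward a target, accepting uphill moves by the Metropolis rule under a fast-cooling schedule. The 1-D sketch below is illustrative only; the adaptive multiple-chain and parallel aspects of the proposed method are not reproduced, and the schedule parameters are assumptions.

```python
import math
import random

def two_point(img, r):
    """S2(r): probability that two sites a distance r apart are both
    phase 1, for a periodic 1-D binary image."""
    n = len(img)
    return sum(img[i] * img[(i + r) % n] for i in range(n)) / n

def energy(img, target, max_r):
    """Squared mismatch between current and target correlations."""
    return sum((two_point(img, r) - target[r]) ** 2 for r in range(max_r))

def anneal(img, target, max_r, steps, t0=0.01, cool=0.99):
    """Swap a 0-pixel and a 1-pixel (preserving volume fraction) and
    accept by the Metropolis rule with a very fast cooling schedule;
    returns the best configuration seen and its energy."""
    e, t = energy(img, target, max_r), t0
    best, best_e = img[:], e
    for _ in range(steps):
        i, j = random.randrange(len(img)), random.randrange(len(img))
        if img[i] == img[j]:
            continue
        img[i], img[j] = img[j], img[i]
        e_new = energy(img, target, max_r)
        if e_new < e or random.random() < math.exp((e - e_new) / t):
            e = e_new
            if e < best_e:
                best, best_e = img[:], e
        else:
            img[i], img[j] = img[j], img[i]  # reject: undo the swap
        t *= cool
    return best, best_e
```

Each swap only perturbs the correlation locally, so real implementations update the energy incrementally rather than recomputing it, which is one of the efficiency levers the abstract targets.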

  4. Hydrogen-assisted stable crack growth in iron-3 wt% silicon steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marrow, T.J.; Prangnell, P.; Aindow, M.

    1996-08-01

Observations of internal hydrogen cleavage in Fe-3Si are reported. Hydrogen-assisted stable crack growth (H-SCG) is associated with cleavage striations of 300 nm spacing, observed using scanning electron microscopy (SEM) and atomic force microscopy (AFM). High-resolution SEM revealed finer striations, previously undetected, with a spacing of approximately 30 nm. These were parallel to the coarser striations. Scanning tunneling microscopy (STM) also showed the fine striation spacing, and gave a striation height of approximately 15 nm. The crack front was not parallel to the striations. Transmission electron microscopy (TEM) of crack-tip plastic zones showed {112} and {110} slip, with a high dislocation density (around 10¹⁴ m⁻²). The slip plane spacing was approximately 15-30 nm. Parallel arrays of high dislocation density were observed in the wake of the hydrogen cleavage crack. It is concluded that H-SCG in Fe-3Si occurs by periodic brittle cleavage on the {001} planes, preceded by dislocation emission. The coarse striations are produced by crack tip blunting, and the fine striations by dislocations attracted by image forces to the fracture surface after cleavage. The effects of temperature, pressure and yield strength on the kinetics of H-SCG can be predicted using a model for diffusion of hydrogen through the plastic zone.

  5. New machining method of high precision infrared window part

    NASA Astrophysics Data System (ADS)

    Yang, Haicheng; Su, Ying; Xu, Zengqi; Guo, Rui; Li, Wenting; Zhang, Feng; Liu, Xuanmin

    2016-10-01

The spherical shell of a multifunctional photoelectric instrument is usually designed with multiple optical channels to accommodate sensors operating in different bands, mainly TV, laser and infrared channels. Without affecting the optical diameter, wind resistance and pneumatic performance of the optical system, the overall layout of the spherical shell was optimized to save space and reduce weight. Most of the optical windows are irregularly shaped; each optical window directly participates in the high-resolution imaging of the corresponding sensor system, and the optical axis parallelism of each sensor needs to meet an accuracy requirement of 0.05 mrad. The machining quality of the optical window parts therefore directly affects the photoelectric system's pointing accuracy and interchangeability. Processing and testing of the TV and laser windows are mature; by contrast, because the infrared window material is transparent with a high refractive index, infrared window parts present problems of imaging quality and of controlling the minimum focal length and second-level parallelism during processing. Based on years of practical experience, this paper focuses on how to control the surface form and parallelism error of infrared window parts during processing. The single-pass yield was increased from 40% to more than 95% and processing efficiency was significantly enhanced, effectively resolving a bottleneck in actual research and production.

  6. High-resolution characterization of a hepatocellular carcinoma genome.

    PubMed

    Totoki, Yasushi; Tatsuno, Kenji; Yamamoto, Shogo; Arai, Yasuhito; Hosoda, Fumie; Ishikawa, Shumpei; Tsutsumi, Shuichi; Sonoda, Kohtaro; Totsuka, Hirohiko; Shirakihara, Takuya; Sakamoto, Hiromi; Wang, Linghua; Ojima, Hidenori; Shimada, Kazuaki; Kosuge, Tomoo; Okusaka, Takuji; Kato, Kazuto; Kusuda, Jun; Yoshida, Teruhiko; Aburatani, Hiroyuki; Shibata, Tatsuhiro

    2011-05-01

    Hepatocellular carcinoma, one of the most common virus-associated cancers, is the third most frequent cause of cancer-related death worldwide. By massively parallel sequencing of a primary hepatitis C virus-positive hepatocellular carcinoma (36× coverage) and matched lymphocytes (>28× coverage) from the same individual, we identified more than 11,000 somatic substitutions of the tumor genome that showed predominance of T>C/A>G transition and a decrease of the T>C substitution on the transcribed strand, suggesting preferential DNA repair. Gene annotation enrichment analysis of 63 validated non-synonymous substitutions revealed enrichment of phosphoproteins. We further validated 22 chromosomal rearrangements, generating four fusion transcripts that had altered transcriptional regulation (BCORL1-ELF4) or promoter activity. Whole-exome sequencing at a higher sequence depth (>76× coverage) revealed a TSC1 nonsense substitution in a subpopulation of the tumor cells. This first high-resolution characterization of a virus-associated cancer genome identified previously uncharacterized mutation patterns, intra-chromosomal rearrangements and fusion genes, as well as genetic heterogeneity within the tumor.

  7. Experimental evidence and structural modeling of nonstoichiometric (010) surfaces coexisting in hydroxyapatite nano-crystals.

    PubMed

    Ospina, C A; Terra, J; Ramirez, A J; Farina, M; Ellis, D E; Rossi, A M

    2012-01-01

    High-resolution transmission electron microscopy (HRTEM) and ab initio quantum-mechanical calculations of electronic structure were combined to investigate the structure of the hydroxyapatite (HA) (010) surface, which plays an important role in HA interactions with biological media. HA was synthesized by in vitro precipitation at 37°C. HRTEM images revealed thin elongated rod nanoparticles with preferential growth along the [001] direction and terminations parallel to the (010) plane. The focal series reconstruction (FSR) technique was applied to develop an atomic-scale structural model of the high-resolution images. The HRTEM simulations identified the coexistence of two structurally distinct terminations for (010) surfaces: a rather flat Ca(II)-terminated surface and a zig-zag structure with open OH channels. Density functional theory (DFT) was applied in a periodic slab plane-wave pseudopotential approach to refine details of atomic coordination and bond lengths of Ca(I) and Ca(II) sites in hydrated HA (010) surfaces, starting from the HRTEM model. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. High dynamic range bio-molecular ion microscopy with the Timepix detector.

    PubMed

    Jungmann, Julia H; MacAleese, Luke; Visser, Jan; Vrakking, Marc J J; Heeren, Ron M A

    2011-10-15

    Highly parallel, active pixel detectors enable novel detection capabilities for large biomolecules in time-of-flight (TOF) based mass spectrometry imaging (MSI). In this work, a 512 × 512 pixel, bare Timepix assembly combined with chevron microchannel plates (MCP) captures time-resolved images of several m/z species in a single measurement. Mass-resolved ion images from Timepix measurements of peptide and protein standards demonstrate the capability to return both mass-spectral and localization information of biologically relevant analytes from matrix-assisted laser desorption ionization (MALDI) on a commercial ion microscope. The use of an MCP-Timepix assembly extends the dynamic range by several orders of magnitude. The Timepix returns well-defined mass spectra even at subsaturation MCP gains, which prolongs the MCP lifetime and allows the gain to be optimized for image quality. The Timepix peak resolution is only limited by the resolution of the in-pixel measurement clock. Oligomers of the protein ubiquitin were measured up to 78 kDa. © 2011 American Chemical Society.

  9. High-resolution onshore-offshore morpho-bathymetric records of modern chalk and granitic shore platforms in NW France

    NASA Astrophysics Data System (ADS)

    Duperret, Anne; Raimbault, Céline; Le Gall, Bernard; Authemayou, Christine; van Vliet-Lanoë, Brigitte; Regard, Vincent; Dromelet, Elsa; Vandycke, Sara

    2016-07-01

    Modern shore platforms developed on rocky coasts are key areas for understanding coastal erosion processes during the Holocene. This contribution offers a detailed picture of two contrasted shore-platform systems, based on new high-resolution shallow-water bathymetry, further coupled with aerial LiDAR topography. Merged land-sea digital elevation models were achieved on two distinct types of rocky coasts along the eastern English Channel in France (Picardy and Upper-Normandy: PUN) and in a NE Atlantic area (SW Brittany: SWB) in NW France. In the PUN case, submarine steps, identified as paleo-shorelines, run parallel to the present-day coastline. Coastal erosive processes appear to have been continuous and regular through time, since at least the mid-Holocene. In SWB, there is a discrepancy between the contemporary coastline orientation and a continuous step extending from inland to offshore, identified as a paleo-shoreline. This records a polyphase, inherited shore-platform development, mainly controlled by tectonic processes.

  10. A simple dual online ultra-high pressure liquid chromatography system (sDO-UHPLC) for high throughput proteome analysis.

    PubMed

    Lee, Hangyeore; Mun, Dong-Gi; Bae, Jingi; Kim, Hokeun; Oh, Se Yeon; Park, Young Soo; Lee, Jae-Hyuk; Lee, Sang-Won

    2015-08-21

    We report a new and simple design of a fully automated dual-online ultra-high pressure liquid chromatography system. The system employs only two nano-volume switching valves (a two-position four port valve and a two-position ten port valve) that direct solvent flows from two binary nano-pumps for parallel operation of two analytical columns and two solid phase extraction (SPE) columns. Despite the simple design, the sDO-UHPLC offers many advantageous features that include high duty cycle, back flushing sample injection for fast and narrow zone sample injection, online desalting, high separation resolution and high intra/inter-column reproducibility. This system was applied to analyze proteome samples not only in high throughput deep proteome profiling experiments but also in high throughput MRM experiments.

  11. 3D visualization of ultra-fine ICON climate simulation data

    NASA Astrophysics Data System (ADS)

    Röber, Niklas; Spickermann, Dela; Böttinger, Michael

    2016-04-01

    Advances in high performance computing and model development allow the simulation of finer and more detailed climate experiments. The new ICON model is based on an unstructured triangular grid and can be used for a wide range of applications, ranging from global coupled climate simulations down to very detailed and high resolution regional experiments. It consists of an atmospheric and an oceanic component and scales very well for high numbers of cores. This allows us to conduct very detailed climate experiments with ultra-fine resolutions. ICON is jointly developed in partnership with DKRZ by the Max Planck Institute for Meteorology and the German Weather Service. This presentation discusses our current workflow for analyzing and visualizing this high resolution data. The ICON model has been used for eddy resolving (<10km) ocean simulations, as well as for ultra-fine cloud resolving (120m) atmospheric simulations. This results in very large 3D time dependent multi-variate data that need to be displayed and analyzed. We have developed specific plugins for the freely available visualization tools ParaView and Vapor, which allow us to read and handle such large data volumes. Within ParaView, we can additionally compare prognostic variables with performance data side by side to investigate the performance and scalability of the model. With the simulation running in parallel on several hundred nodes, an equal load balance is imperative. In our presentation we show visualizations of high-resolution ICON oceanographic and HDCP2 atmospheric simulations that were created using ParaView and Vapor. Furthermore we discuss our current efforts to improve our visualization capabilities, thereby exploring the potential of regular in-situ visualization, as well as of in-situ compression / post visualization.

  12. Design and characterization of the ePix10k: a high dynamic range integrating pixel ASIC for LCLS detectors

    NASA Astrophysics Data System (ADS)

    Caragiulo, P.; Dragone, A.; Markovic, B.; Herbst, R.; Nishimura, K.; Reese, B.; Herrmann, S.; Hart, P.; Blaj, G.; Segal, J.; Tomada, A.; Hasi, J.; Carini, G.; Kenney, C.; Haller, G.

    2015-05-01

    ePix10k is a variant of a novel class of integrating pixel ASIC architectures optimized for the processing of signals in second-generation LINAC Coherent Light Source (LCLS) X-ray cameras. The ASIC is optimized for high dynamic range applications requiring high spatial resolution and fast frame rates. ePix ASICs are based on a common platform composed of a random-access analog matrix of pixels with global shutter, fast parallel column readout, and dedicated sigma-delta analog-to-digital converters per column. The ePix10k variant has 100 μm × 100 μm pixels arranged in a 176 × 192 matrix, a resolution of 140 e- r.m.s. and a signal range of 3.5 pC (10k photons at 8 keV). In its final version it will be able to sustain a frame rate of 2 kHz. A first prototype has been fabricated and characterized. Performance in terms of noise, linearity, uniformity and cross-talk, together with preliminary measurements with bump-bonded sensors, is reported here.
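
    The quoted signal range can be sanity-checked with simple arithmetic. The sketch below assumes a silicon sensor with a mean electron-hole pair creation energy of about 3.65 eV, a standard figure that is not stated in the abstract:

```python
# Assumed constants (not given in the abstract): mean e-h pair creation
# energy in silicon and the elementary charge.
EV_PER_PAIR = 3.65        # eV per electron-hole pair in silicon
E_CHARGE = 1.602e-19      # elementary charge in coulombs

electrons_per_photon = 8000 / EV_PER_PAIR        # ~2192 e- per 8 keV photon
full_range_e = 10_000 * electrons_per_photon     # full scale: 10k photons
full_range_pC = full_range_e * E_CHARGE * 1e12   # coulombs -> pC, ~3.5 pC

# Dynamic range: full-scale signal over the 140 e- r.m.s. noise floor
dynamic_range = full_range_e / 140
```

    Under these assumptions the 3.5 pC figure follows directly from 10k photons at 8 keV, and the ratio of full scale to noise floor is roughly 1.5 × 10^5.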

  13. Co-Registered In Situ Secondary Electron and Mass Spectral Imaging on the Helium Ion Microscope Demonstrated Using Lithium Titanate and Magnesium Oxide Nanoparticles.

    PubMed

    Dowsett, D; Wirtz, T

    2017-09-05

    The development of a high resolution elemental imaging platform combining coregistered secondary ion mass spectrometry and high resolution secondary electron imaging is reported. The basic instrument setup and operation are discussed and in situ image correlation is demonstrated on a lithium titanate and magnesium oxide nanoparticle mixture. The instrument uses both helium and neon ion beams generated by a gas field ion source to irradiate the sample. Both secondary electrons and secondary ions may be detected. Secondary ion mass spectrometry (SIMS) is performed using an in-house developed double focusing magnetic sector spectrometer with parallel detection. Spatial resolutions of 10 nm have been obtained in SIMS mode. Both the secondary electron and SIMS image data are very surface sensitive and have approximately the same information depth. While the spatial resolutions differ by approximately a factor of 10, switching between the different imaging modes may be done in situ and extremely rapidly, allowing for simple imaging of the same region of interest and excellent coregistration of data sets. The ability to correlate mass spectral images on the 10 nm scale with secondary electron images on the nanometer scale in situ has the potential to provide a step change in our understanding of nanoscale phenomena in fields from materials science to life science.

  14. Parallel Simulation of Unsteady Turbulent Flames

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    1996-01-01

    Time-accurate simulation of turbulent flames in high Reynolds number flows is a challenging task since both fluid dynamics and combustion must be modeled accurately. To numerically simulate this phenomenon, very large computer resources (both time and memory) are required. Although current vector supercomputers are capable of providing adequate resources for simulations of this nature, their high cost and limited availability make practical use of such machines less than satisfactory. At the same time, the explicit time integration algorithms used in unsteady flow simulations often possess a very high degree of parallelism, making them very amenable to efficient implementation on large-scale parallel computers. Under these circumstances, distributed memory parallel computers offer an excellent near-term solution for greatly increased computational speed and memory, at a cost that may render the unsteady simulations of the type discussed above more feasible and affordable. This paper discusses the study of unsteady turbulent flames using a simulation algorithm that is capable of retaining high parallel efficiency on distributed memory parallel architectures. Numerical studies are carried out using large-eddy simulation (LES). In LES, the scales larger than the grid are computed using a time- and space-accurate scheme, while the unresolved small scales are modeled using eddy viscosity based subgrid models. This is acceptable for the moment/energy closure since the small scales primarily provide a dissipative mechanism for the energy transferred from the large scales. However, for combustion to occur, the species must first undergo mixing at the small scales and then come into molecular contact. Therefore, global models cannot be used.
Recently, a new model for turbulent combustion was developed, in which combustion is modeled within the subgrid (small scales) using a methodology that simulates the mixing, molecular transport, and chemical kinetics within each LES grid cell. Finite-rate kinetics can be included without any closure, and this approach actually provides a means to predict the turbulent rates and the turbulent flame speed. The subgrid combustion model requires resolution of the local time scales associated with small-scale mixing, molecular diffusion and chemical kinetics and, therefore, within each grid cell, a significant amount of computation must be carried out before the large-scale (LES resolved) effects are incorporated. Therefore, this approach is uniquely suited for parallel processing and has been implemented on various systems, such as the Intel Paragon, IBM SP-2, Cray T3D and SGI Power Challenge (PC), using the system-independent Message Passing Interface (MPI). In this paper, timing data from these machines are reported along with some characteristic results.
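
    The domain-decomposition strategy underlying such MPI implementations, where each processor advances its own block of cells and exchanges boundary ("ghost") data with its neighbours, can be illustrated with a serial toy. The grid size, number of subdomains, and the explicit diffusion stencil below are illustrative assumptions, not taken from the paper:

```python
# Serial toy of the MPI domain-decomposition pattern: split a periodic 1-D
# grid into "rank" blocks, update each block's interior, and supply one
# ghost cell per side from the neighbouring blocks.
import numpy as np

def decomposed_diffusion_step(u, nranks, nu=0.1):
    """One explicit diffusion step computed block-by-block over nranks
    subdomains with ghost-cell exchange (periodic boundaries)."""
    n = len(u)
    chunk = n // nranks
    new = np.empty_like(u)
    for rank in range(nranks):
        lo, hi = rank * chunk, (rank + 1) * chunk
        # ghost cells come from the neighbouring subdomains
        left, right = u[(lo - 1) % n], u[hi % n]
        block = np.concatenate(([left], u[lo:hi], [right]))
        new[lo:hi] = block[1:-1] + nu * (block[2:] - 2 * block[1:-1] + block[:-2])
    return new

u = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
# The result is independent of how many subdomains the grid is split into
assert np.allclose(decomposed_diffusion_step(u, 1),
                   decomposed_diffusion_step(u, 4))
```

    The key property, verified by the assertion, is that the decomposed update reproduces the global stencil exactly, so parallelization changes only where the work is done, not the answer.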

  15. Exploring the Ability of a Coarse-grained Potential to Describe the Stress-strain Response of Glassy Polystyrene

    DTIC Science & Technology

    2012-10-01

    using the open-source code Large-scale Atomic/Molecular Massively Parallel Simulator ( LAMMPS ) (http://lammps.sandia.gov) (23). The commercial...parameters are proprietary and cannot be ported to the LAMMPS 4 simulation code. In our molecular dynamics simulations at the atomistic resolution, we...IBI iterative Boltzmann inversion LAMMPS Large-scale Atomic/Molecular Massively Parallel Simulator MAPS Materials Processes and Simulations MS

  16. Whole-brain high in-plane resolution fMRI using accelerated EPIK for enhanced characterisation of functional areas at 3T

    PubMed Central

    Yun, Seong Dae

    2017-01-01

    The relatively high imaging speed of EPI has led to its widespread use in dynamic MRI studies such as functional MRI. An approach to improve the performance of EPI, EPI with Keyhole (EPIK), has been previously presented and its use in fMRI was verified at 1.5T as well as 3T. The method has been proven to achieve a higher temporal resolution and smaller image distortions when compared to single-shot EPI. Furthermore, the performance of EPIK in the detection of functional signals was shown to be comparable to that of EPI. For these reasons, we were motivated to employ EPIK here for high-resolution imaging. The method was optimised to offer the highest possible in-plane resolution and slice coverage under the given imaging constraints: fixed TR/TE, FOV and acceleration factors for parallel imaging and partial Fourier techniques. The performance of EPIK was evaluated in direct comparison to the optimised protocol obtained from EPI. The two imaging methods were applied to visual fMRI experiments involving sixteen subjects. The results showed that enhanced spatial resolution with a whole-brain coverage was achieved by EPIK (1.00 mm × 1.00 mm; 32 slices) when compared to EPI (1.25 mm × 1.25 mm; 28 slices). As a consequence, enhanced characterisation of functional areas has been demonstrated in EPIK particularly for relatively small brain regions such as the lateral geniculate nucleus (LGN) and superior colliculus (SC); overall, a significantly increased t-value and activation area were observed from EPIK data. Lastly, the use of EPIK for fMRI was validated with the simulation of different types of data reconstruction methods. PMID:28945780

  17. Development of a High-Resolution Climate Model for Future Climate Change Projection on the Earth Simulator

    NASA Astrophysics Data System (ADS)

    Kanzawa, H.; Emori, S.; Nishimura, T.; Suzuki, T.; Inoue, T.; Hasumi, H.; Saito, F.; Abe-Ouchi, A.; Kimoto, M.; Sumi, A.

    2002-12-01

    The fastest supercomputer of the world, the Earth Simulator (total peak performance 40TFLOPS), has recently become available for climate research in Yokohama, Japan. We are planning to conduct a series of future climate change projection experiments on the Earth Simulator with a high-resolution coupled ocean-atmosphere climate model. The main scientific aims for the experiments are to investigate 1) the change in global ocean circulation with an eddy-permitting ocean model, 2) the regional details of the climate change including Asian monsoon rainfall pattern, tropical cyclones and so on, and 3) the change in natural climate variability with a high-resolution model of the coupled ocean-atmosphere system. To meet these aims, an atmospheric GCM, CCSR/NIES AGCM, with T106 (~1.1°) horizontal resolution and 56 vertical layers is to be coupled with an oceanic GCM, COCO, with ~0.28° × 0.19° horizontal resolution and 48 vertical layers. This coupled ocean-atmosphere climate model, named MIROC, also includes a land-surface model, a dynamic-thermodynamic seaice model, and a river routing model. The poles of the oceanic model grid system are rotated from the geographic poles so that they are placed in the Greenland and Antarctic land masses to avoid the singularity of the grid system. Each of the atmospheric and the oceanic parts of the model is parallelized with the Message Passing Interface (MPI) technique. The coupling of the two is to be done in a Multiple Program Multiple Data (MPMD) fashion. A 100-model-year integration will be possible in one actual month with 720 vector processors (which is only 14% of the full resources of the Earth Simulator).

  18. Kalman filter techniques for accelerated Cartesian dynamic cardiac imaging.

    PubMed

    Feng, Xue; Salerno, Michael; Kramer, Christopher M; Meyer, Craig H

    2013-05-01

    In dynamic MRI, spatial and temporal parallel imaging can be exploited to reduce scan time. Real-time reconstruction enables immediate visualization during the scan. Commonly used view-sharing techniques suffer from limited temporal resolution, and many of the more advanced reconstruction methods are either retrospective, time-consuming, or both. A Kalman filter model capable of real-time reconstruction can be used to increase the spatial and temporal resolution in dynamic MRI reconstruction. The original study describing the use of the Kalman filter in dynamic MRI was limited to non-Cartesian trajectories because of a limitation intrinsic to the dynamic model used in that study. Here the limitation is overcome, and the model is applied to the more commonly used Cartesian trajectory with fast reconstruction. Furthermore, a combination of the Kalman filter model with Cartesian parallel imaging is presented to further increase the spatial and temporal resolution and signal-to-noise ratio. Simulations and experiments were conducted to demonstrate that the Kalman filter model can increase the temporal resolution of the image series compared with view-sharing techniques and decrease the spatial aliasing compared with TGRAPPA. The method requires relatively little computation, and thus is suitable for real-time reconstruction. Copyright © 2012 Wiley Periodicals, Inc.
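
    The predict/update recursion at the heart of any Kalman-filter reconstruction helps explain why the approach needs so little computation per frame. The generic linear-Gaussian sketch below (state x, covariance P, with hypothetical model matrices F, H, Q, R) illustrates that recursion; it is not the paper's MRI-specific dynamic model:

```python
# Generic Kalman filter step: each new measurement z is folded into the
# running estimate with a fixed amount of work, which is what makes
# frame-by-frame (real-time) reconstruction feasible.
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear-Gaussian Kalman filter."""
    # Predict: propagate the state estimate and its covariance
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction with the new measurement z
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: estimate a constant signal of 1.0 from repeated measurements
x, P = np.zeros(1), np.eye(1)
F = H = np.eye(1)
Q, R = 1e-4 * np.eye(1), 1e-2 * np.eye(1)
for _ in range(20):
    x, P = kalman_step(x, P, np.array([1.0]), F, H, Q, R)
```

    Because each frame costs one such step rather than a retrospective fit over the whole series, the estimate is available immediately as data arrive.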

  19. Kalman Filter Techniques for Accelerated Cartesian Dynamic Cardiac Imaging

    PubMed Central

    Feng, Xue; Salerno, Michael; Kramer, Christopher M.; Meyer, Craig H.

    2012-01-01

    In dynamic MRI, spatial and temporal parallel imaging can be exploited to reduce scan time. Real-time reconstruction enables immediate visualization during the scan. Commonly used view-sharing techniques suffer from limited temporal resolution, and many of the more advanced reconstruction methods are either retrospective, time-consuming, or both. A Kalman filter model capable of real-time reconstruction can be used to increase the spatial and temporal resolution in dynamic MRI reconstruction. The original study describing the use of the Kalman filter in dynamic MRI was limited to non-Cartesian trajectories, because of a limitation intrinsic to the dynamic model used in that study. Here the limitation is overcome and the model is applied to the more commonly used Cartesian trajectory with fast reconstruction. Furthermore, a combination of the Kalman filter model with Cartesian parallel imaging is presented to further increase the spatial and temporal resolution and SNR. Simulations and experiments were conducted to demonstrate that the Kalman filter model can increase the temporal resolution of the image series compared with view sharing techniques and decrease the spatial aliasing compared with TGRAPPA. The method requires relatively little computation, and thus is suitable for real-time reconstruction. PMID:22926804

  20. 3D wide field-of-view Gabor-domain optical coherence microscopy advancing real-time in-vivo imaging and metrology

    NASA Astrophysics Data System (ADS)

    Canavesi, Cristina; Cogliati, Andrea; Hayes, Adam; Tankam, Patrice; Santhanam, Anand; Rolland, Jannick P.

    2017-02-01

    Real-time volumetric high-definition wide-field-of-view in-vivo cellular imaging requires micron-scale resolution in 3D. Compactness of the handheld device and distortion-free images with cellular resolution are also critically required for onsite use in clinical applications. By integrating a custom liquid lens-based microscope and a dual-axis MEMS scanner in a compact handheld probe, Gabor-domain optical coherence microscopy (GD-OCM) breaks the lateral resolution limit of optical coherence tomography through depth, overcoming the tradeoff between numerical aperture and depth of focus, enabling advances in biotechnology. Furthermore, distortion-free imaging with no post-processing is achieved with a compact, lightweight handheld MEMS scanner that achieves a 12-fold reduction in volume and a 17-fold reduction in weight over a previous dual-mirror galvanometer-based scanner. Approaching the holy grail of medical imaging - noninvasive real-time imaging with histologic resolution - GD-OCM demonstrates invariant resolution of 2 μm throughout a volume of 1 x 1 x 0.6 mm3, acquired and visualized in less than 2 minutes with parallel processing on graphics processing units. Results on the metrology of manufactured materials and imaging of human tissue with GD-OCM are presented.

  1. High accuracy mantle convection simulation through modern numerical methods - II: realistic models and problems

    NASA Astrophysics Data System (ADS)

    Heister, Timo; Dannberg, Juliane; Gassmöller, Rene; Bangerth, Wolfgang

    2017-08-01

    Computations have helped elucidate the dynamics of Earth's mantle for several decades already. The numerical methods that underlie these simulations have greatly evolved within this time span, and today include dynamically changing and adaptively refined meshes, sophisticated and efficient solvers, and parallelization to large clusters of computers. At the same time, many of the methods - discussed in detail in a previous paper in this series - were developed and tested primarily using model problems that lack many of the complexities that are common to the realistic models our community wants to solve today. With several years of experience solving complex and realistic models, we here revisit some of the algorithm designs of the earlier paper and discuss the incorporation of more complex physics. In particular, we re-consider time stepping and mesh refinement algorithms, evaluate approaches to incorporate compressibility, and discuss dealing with strongly varying material coefficients, latent heat, and how to track chemical compositions and heterogeneities. Taken together and implemented in a high-performance, massively parallel code, the techniques discussed in this paper then allow for high resolution, 3-D, compressible, global mantle convection simulations with phase transitions, strongly temperature dependent viscosity and realistic material properties based on mineral physics data.

  2. Human settlement history between Sunda and Sahul: a focus on East Timor (Timor-Leste) and the Pleistocenic mtDNA diversity.

    PubMed

    Gomes, Sibylle M; Bodner, Martin; Souto, Luis; Zimmermann, Bettina; Huber, Gabriela; Strobl, Christina; Röck, Alexander W; Achilli, Alessandro; Olivieri, Anna; Torroni, Antonio; Côrte-Real, Francisco; Parson, Walther

    2015-02-14

    Distinct, partly competing, "waves" have been proposed to explain human migration in(to) today's Island Southeast Asia and Australia based on genetic (and other) evidence. The paucity of high quality and high resolution data has impeded insights so far. In this study, one of the first in a forensic environment, we used the Ion Torrent Personal Genome Machine (PGM) for generating complete mitogenome sequences via stand-alone massively parallel sequencing and describe a standard data validation practice. In this first representative investigation on the mitochondrial DNA (mtDNA) variation of East Timor (Timor-Leste) population including >300 individuals, we put special emphasis on the reconstruction of the initial settlement, in particular on the previously poorly resolved haplogroup P1, an indigenous lineage of the Southwest Pacific region. Our results suggest a colonization of southern Sahul (Australia) >37 kya, limited subsequent exchange, and a parallel incubation of initial settlers in northern Sahul (New Guinea) followed by westward migrations <28 kya. The temporal proximity and possible coincidence of these latter dispersals, which encompassed autochthonous haplogroups, with the postulated "later" events of (South) East Asian origin pinpoints a highly dynamic migratory phase.

  3. Toward real-time diffuse optical tomography: accelerating light propagation modeling employing parallel computing on GPU and CPU.

    PubMed

    Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid

    2017-12-01

    Parameter recovery in diffuse optical tomography is a computationally expensive algorithm, especially when used for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, whereby the lack of a real-time solution is impeding practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous wave and frequency-domain systems with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ∼600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ∼0.25  s/excitation source. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
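
    A computational detail worth spelling out: the forward model amounts to solving one large sparse system A x = b for many right-hand sides (one per excitation source), so the system matrix can be factorized once and the factors reused for every source. The toy below illustrates this with a small dense stand-in for the FEM diffusion operator; the matrix, sizes, and source positions are illustrative assumptions, not the paper's head model:

```python
# Factorize-once, solve-many: the dominant cost of the forward problem is
# amortized across all excitation sources.
import numpy as np

n = 200
# Dense 1-D stand-in for the (sparse) FEM diffusion operator; symmetric
# positive definite, so a Cholesky factorization applies.
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.01 * np.eye(n)

# Three hypothetical "excitation sources", one right-hand side each
sources = np.zeros((n, 3))
sources[[20, 100, 180], [0, 1, 2]] = 1.0

# One factorization (A = L L^T), reused for every right-hand side
L = np.linalg.cholesky(A)
y = np.linalg.solve(L, sources)       # forward substitution
fluence = np.linalg.solve(L.T, y)     # back substitution
```

    In a real solver the factorization (or an iterative preconditioner) would be computed on the GPU, and the per-source solves are embarrassingly parallel across sources.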

  4. Self-Assembled Structures of Benzoic Acid on Au(111) Surface

    NASA Astrophysics Data System (ADS)

    Vu, Thu-Hien; Wandlowski, Thomas

    2017-06-01

    Electrochemical scanning tunneling microscopy combined with cyclic voltammetry were employed to explore the self-assembly of benzoic acid (BA) on a Au(111) substrate surface in a 0.1-M HClO4 solution. At the negatively charged surface, BA molecules form two highly ordered physisorbed adlayers with their phenyl rings parallel to the substrate surface. High-resolution scanning tunneling microscopy images reveal the packing arrangement and internal molecular structures. The striped pattern and zigzag structure of the BA adlayers are composed of parallel rows of dimers, in which two BA molecules are bound through a pair of O-H···O hydrogen bonds. Increasing the electrode potential further to positive charge densities of Au(111) leads to the desorption of the physisorbed hydrogen-bonded networks and the formation of a chemisorbed adlayer. BA molecules change their orientation from planar to upright fashion, which is accompanied by the deprotonation of the carboxyl group. Furthermore, potential-induced formation and dissolution of BA adlayers were also investigated. Structural transitions between the various types of ordered adlayers occur according to a nucleation and growth mechanism.

  5. Simultaneous orthogonal plane imaging.

    PubMed

    Mickevicius, Nikolai J; Paulson, Eric S

    2017-11-01

    Intrafraction motion can result in a smearing of planned external beam radiation therapy dose distributions, resulting in an uncertainty in dose actually deposited in tissue. The purpose of this paper is to present a pulse sequence that is capable of imaging a moving target at a high frame rate in two orthogonal planes simultaneously for MR-guided radiotherapy. By balancing the zero gradient moment on all axes, slices in two orthogonal planes may be spatially encoded simultaneously. The orthogonal slice groups may be acquired with equal or nonequal echo times. A Cartesian spoiled gradient echo simultaneous orthogonal plane imaging (SOPI) sequence was tested in phantom and in vivo. Multiplexed SOPI acquisitions were performed in which two parallel slices were imaged along two orthogonal axes simultaneously. An autocalibrating phase-constrained 2D-SENSE-GRAPPA (generalized autocalibrating partially parallel acquisition) algorithm was implemented to reconstruct the multiplexed data. SOPI images without intraslice motion artifacts were reconstructed at a maximum frame rate of 8.16 Hz. The 2D-SENSE-GRAPPA reconstruction separated the parallel slices aliased along each orthogonal axis. The high spatiotemporal resolution provided by SOPI has the potential to be beneficial for intrafraction motion management during MR-guided radiation therapy or other MRI-guided interventions. Magn Reson Med 78:1700-1710, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  6. The pattern of parallel edge plasma flows due to pressure gradients, recycling, and resonant magnetic perturbations in DIII-D

    DOE PAGES

    Frerichs, H.; Schmitz, Oliver; Evans, Todd; ...

    2015-07-13

    High resolution plasma transport simulations with the EMC3-EIRENE code have been performed to address the parallel plasma flow structure in the boundary of a poloidal divertor configuration with non-axisymmetric perturbations at DIII-D. Simulation results show that a checkerboard pattern of flows with alternating direction is generated inside the separatrix. This pattern is aligned with the position of the main resonances (i.e. where the safety factor is equal to rational values q = m/n for a perturbation field with base mode number n): m pairs of alternating forward and backward flow channels exist for each resonance. The poloidal oscillations are aligned with the subharmonic Melnikov function, which indicates that the plasma flow is generated by parallel pressure gradients along perturbed field lines. Lastly, an additional scrape-off layer-like domain is introduced by the perturbed separatrix which guides field lines from the interior to the divertor targets, resulting in an enhanced outward flow that is consistent with the experimentally observed particle pump-out effect. However, while the lobe structure of the perturbed separatrix is very well reflected in the temperature profile, the same lobes can appear to be smaller in the flow profile due to a competition between high upstream pressure and downstream particle sources driving flows in opposite directions.

  7. Status of Beam Line Detectors for the BigRIPS Fragment Separator at RIKEN RI Beam Factory: Issues on High Rates and Resolution

    NASA Astrophysics Data System (ADS)

    Sato, Yuki; Fukuda, Naoki; Takeda, Hiroyuki; Kameda, Daisuke; Suzuki, Hiroshi; Shimizu, Yohei; Ahn, DeukSoon; Murai, Daichi; Inabe, Naohito; Shimaoka, Takehiro; Tsubota, Masakatsu; Kaneko, Junichi H.; Chayahara, Akiyoshi; Umezawa, Hitoshi; Shikata, Shinichi; Kumagai, Hidekazu; Murakami, Hiroyuki; Sato, Hiromi; Yoshida, Koichi; Kubo, Toshiyuki

    A multiple sampling ionization chamber (MUSIC) and parallel-plate avalanche counters (PPACs) were installed within the superconducting in-flight separator, named BigRIPS, at the RIKEN Nishina Center for particle identification of RI beams. The MUSIC detector showed negligible charge-collection inefficiency from recombination of electrons and ions, up to a 99-kcps incidence rate for high-energy heavy ions. For the PPAC detectors, the electrical discharge durability for incident heavy ions was improved by changing the electrode material. Finally, we designed a single-crystal diamond detector with a very fast response time (pulse width <1 ns), which is under development for TOF measurements of high-energy heavy ions.

  8. Construction and Design of a full size sTGC prototype for the ATLAS New Small Wheel upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    For the forthcoming Phase-I upgrade to the LHC (2018/19), the first station of the ATLAS muon end-cap system, the Small Wheel, will need to be replaced. The New Small Wheel (NSW) will have to operate in a high background radiation region while reconstructing muon tracks with high precision as well as furnishing information for the Level-1 trigger. In particular, the precision reconstruction of tracks requires a spatial resolution of about 100 μm, and the Level-1 trigger track segments have to be reconstructed with an angular resolution of approximately 1 mrad. The NSW will have two chamber technologies: one primarily devoted to the Level-1 trigger function, the small-strip Thin Gap Chambers (sTGC), and one dedicated to precision tracking, the Micromegas detectors (MM). Each single sTGC plane of a quadruplet consists of an anode layer of 50 μm gold-plated tungsten wire sandwiched between two resistive cathode layers. Behind one of the resistive cathode layers, a PCB with precisely machined strips (hence the name sTGC) spaced every 3.2 mm provides a position resolution ranging from 70 to 150 μm, depending on the incident particle angle. Behind the second cathode, a PCB containing an arrangement of pads allows for a fast coincidence between successive sTGC layers to tag the passage of a track and to read out only the corresponding strips for triggering. To profit from the high accuracy of each sTGC plane for trigger purposes, the relative geometrical position of the planes has to be controlled to within about 40 μm in parallelism and, because of the range of incident angles, to within about 80 μm in the plane-to-plane distance, in order to achieve the overall angular resolution of 1 mrad. The needed accuracy in the position and parallelism of the strips is achieved by machining brass inserts together with the strip patterns into the cathode boards in a single step.
The inserts can then be used as external references on a granite table. Precision methods are used to maintain high accuracy when combining four single detector gaps, first into two doublets and then into a quadruplet. We will present results on the ongoing construction of full size (∼1 × 1 m) sTGC quadruplet prototypes before full construction starts in 2015.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Katherine L.; Judenhofer, Martin S.; Cherry, Simon R.

In preclinical single-photon emission computed tomography (SPECT) system development, the primary objective has been to improve spatial resolution by using novel parallel-hole or multi-pinhole collimator geometries. However, such high-resolution systems have relatively poor sensitivity (typically 0.01% to 0.1%). In contrast, a system that does not use collimators can achieve very high sensitivity. Here we present a high-sensitivity un-collimated detector single-photon imaging (UCD-SPI) system for the imaging of both small animals and plants. This scanner consists of two thin, closely spaced, pixelated scintillator detectors that use NaI(Tl), CsI(Na), or BGO. The performance of the system has been characterized by measuring sensitivity, spatial resolution, linearity, detection limits, and uniformity. With 99mTc (140 keV) at the center of the field of view (20 mm scintillator separation), the sensitivity was measured to be 31.8% using the NaI(Tl) detectors and 40.2% with CsI(Na). The best spatial resolution (FWHM when the image is formed as the geometric mean of the two detector heads, 20 mm scintillator separation) was 19.0 mm for NaI(Tl) and 11.9 mm for CsI(Na) at 140 keV, and 19.5 mm for BGO at 1116 keV, which is somewhat degraded compared with the cm-scale resolution obtained with only one detector head and a close source. The quantitative accuracy of the system's linearity is better than 2%, with detection down to activity levels of 100 nCi. Two in vivo animal studies (a renal scan using 99mTc MAG-3 and a thyroid scan with 123I) and one plant study (a 99mTcO4- xylem transport study) highlight the unique capabilities of this UCD-SPI system. From the renal scan, we observe approximately a thousand-fold increase in sensitivity compared with the Siemens Inveon SPECT/CT scanner. 
In conclusion, UCD-SPI is useful for many imaging tasks that do not require excellent spatial resolution, such as high-throughput screening applications, simple radiotracer uptake studies in tumor xenografts, dynamic studies where very good temporal resolution is critical, or in planta imaging of radioisotopes at low concentrations.
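
    The geometric-mean image formation mentioned above can be sketched in a few lines; the array contents and pixel values are illustrative, not data from the paper:

```python
import numpy as np

def geometric_mean_image(head1, head2):
    """Combine two opposing detector-head images pixel-wise.

    The geometric mean balances the depth-dependent response of the two
    planar views (illustrative sketch of the image-formation step).
    """
    head1 = np.asarray(head1, dtype=float)
    head2 = np.asarray(head2, dtype=float)
    return np.sqrt(head1 * head2)

# toy example: a point source seen more strongly by the nearer head
top = np.array([[0.0, 4.0], [0.0, 0.0]])
bottom = np.array([[0.0, 1.0], [0.0, 0.0]])
combined = geometric_mean_image(top, bottom)
# the source pixel becomes sqrt(4 * 1) = 2, all other pixels stay 0
```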

  10. Diviner lunar radiometer gridded brightness temperatures from geodesic binning of modeled fields of view

    NASA Astrophysics Data System (ADS)

    Sefton-Nash, E.; Williams, J.-P.; Greenhagen, B. T.; Aye, K.-M.; Paige, D. A.

    2017-12-01

An approach is presented to efficiently produce high quality gridded data records from the large, global point-based dataset returned by the Diviner Lunar Radiometer Experiment aboard NASA's Lunar Reconnaissance Orbiter. The need to minimize data volume and processing time in the production of science-ready map products is increasingly important with the growth in data volume of planetary datasets. Diviner makes on average >1400 observations per second of radiance that is reflected and emitted from the lunar surface, using 189 detectors divided into 9 spectral channels. Data management and processing bottlenecks are amplified by modeling every observation as a probability distribution function over the field of view, which can increase the required processing time by 2-3 orders of magnitude. Geometric corrections, such as projection of data points onto a digital elevation model, are numerically intensive, and it is therefore desirable to perform them only once. Our approach reduces bottlenecks through parallel binning and efficient storage of a pre-processed database of observations. Database construction is via subdivision of a geodesic icosahedral grid, with a spatial resolution that can be tailored to suit the field of view of the observing instrument. Global geodesic grids with high spatial resolution are normally impractically memory intensive. We therefore demonstrate a minimum-storage and highly parallel method to bin very large numbers of data points onto such a grid. A database of the pre-processed and binned points then makes production of mapped data products significantly faster than if unprocessed points were used. We explore quality controls in the production of gridded data records by conditional interpolation, allowed only where data density is sufficient. The resultant effects on the spatial continuity and uncertainty in maps of lunar brightness temperatures are illustrated. 
We identify four binning regimes based on trades between the spatial resolution of the grid, the size of the FOV and the on-target spacing of observations. Our approach may be applicable and beneficial for many existing and future point-based planetary datasets.
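
    The sparse, mergeable binning that makes a high-resolution global grid tractable can be illustrated as follows. A simple equal-angle grid stands in here for the subdivided icosahedral (geodesic) grid, and all names and values are illustrative; the key idea is that chunks binned independently (e.g., by parallel workers) can be merged cell by cell:

```python
import numpy as np

def bin_chunk(lats, lons, values, res_deg):
    """Accumulate per-cell (sum, count) for one chunk of observations.

    Sparse dict storage: only cells that actually receive data are kept,
    which is what keeps a high-resolution global grid in memory.
    """
    ncols = int(round(360.0 / res_deg))
    rows = np.floor((lats + 90.0) / res_deg).astype(int)
    cols = np.floor((lons % 360.0) / res_deg).astype(int)
    acc = {}
    for cell, v in zip(rows * ncols + cols, values):
        s, n = acc.get(cell, (0.0, 0))
        acc[cell] = (s + v, n + 1)
    return acc

def merge(a, b):
    """Combine partial accumulators produced by independent workers."""
    out = dict(a)
    for cell, (s, n) in b.items():
        s0, n0 = out.get(cell, (0.0, 0))
        out[cell] = (s0 + s, n0 + n)
    return out

# two chunks binned independently, then merged; per-cell means follow
c1 = bin_chunk(np.array([10.2]), np.array([45.1]), np.array([200.0]), 0.5)
c2 = bin_chunk(np.array([10.2]), np.array([45.3]), np.array([210.0]), 0.5)
total = merge(c1, c2)
means = {cell: s / n for cell, (s, n) in total.items()}
```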

  11. Synthesis and Evaluation of A High Precision 3D-Printed Ti6Al4V Compliant Parallel Manipulator

    NASA Astrophysics Data System (ADS)

    Pham, Minh Tuan; Teo, Tat Joo; Huat Yeo, Song; Wang, Pan; Nai, Mui Ling Sharon

    2017-12-01

A novel 3D-printed compliant parallel manipulator (CPM) with θX-θY-Z motions is presented in this paper. The CPM is synthesized using the beam-based method, a new structural optimization approach, to achieve optimized stiffness properties with targeted dynamic behavior. The CPM exhibits high non-actuating stiffness, with predicted stiffness ratios of about 3600 for translations and 570 for rotations, while the dynamic response is fast, with a targeted first resonant mode of 100 Hz. A prototype of the synthesized CPM was fabricated using electron beam melting (EBM) technology with Ti6Al4V material. Driven by three voice-coil (VC) motors, the CPM demonstrated a positioning resolution of 50 nm along the Z axis and an angular resolution of ~0.3″ about the X and Y axes; the positioning accuracy is also good, with measured values of ±25.2 nm and ±0.17″ for the translation and rotations, respectively. Experimental investigation also shows that this large-workspace CPM has a first resonant mode of 98 Hz and that the stiffness behavior matches the prediction, with a highest deviation of 11.2%. Most importantly, the full workspace of 10° × 10° × 7 mm of the proposed CPM can be achieved, demonstrating that 3D-printed compliant mechanisms can undergo large elastic deformation. The obtained results show that CPMs printed by EBM technology have predictable mechanical characteristics and are applicable in precision positioning systems.

  12. Update schemes of multi-velocity floor field cellular automaton for pedestrian dynamics

    NASA Astrophysics Data System (ADS)

    Luo, Lin; Fu, Zhijian; Cheng, Han; Yang, Lizhong

    2018-02-01

Modeling pedestrian movement is an interesting problem in both statistical physics and computational physics. Update schemes of cellular automaton (CA) models for pedestrian dynamics govern the schedule of pedestrian movement. Different update schemes usually make the models behave in different ways, which should be carefully recalibrated. In this paper, we therefore investigated the influence of four different update schemes, namely the parallel/synchronous scheme, the random scheme, the ordered-sequential scheme and the shuffled scheme, on pedestrian dynamics. A multi-velocity floor field cellular automaton (FFCA) considering the changes of pedestrians' moving properties along walking paths and the heterogeneity of pedestrians' walking abilities was used. For the parallel scheme only, collision detection and resolution must be considered, resulting in a great difference from the other update schemes. Under the parallel scheme, the evacuation time is enlarged and the difference in pedestrians' walking abilities is better reflected. In the face of a bottleneck, such as an exit, the parallel scheme leads to a longer congestion period and a more dispersive density distribution. The exit flow and the space-time distributions of density and velocity show significant discrepancies among the four update schemes when we simulate pedestrian flow with high desired velocity. Update schemes may have no influence on the simulated pedestrians' tendency to follow others, but the ordered-sequential and shuffled update schemes may enhance the effect of pedestrians' familiarity with environments.
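
    The difference between the parallel and sequential schedules, including the collision resolution that only the parallel scheme needs, can be sketched on a one-dimensional lattice with per-agent speeds (a toy stand-in for the multi-velocity FFCA; all details are illustrative):

```python
import random

def step_sequential(positions, speeds, length, rng):
    """Shuffled/ordered-sequential update: agents move one at a time,
    so two agents can never claim the same cell within one step."""
    order = list(range(len(positions)))
    rng.shuffle(order)
    occupied = set(positions)
    for i in order:
        target = min(positions[i] + speeds[i], length)
        if target not in occupied:
            occupied.discard(positions[i])
            positions[i] = target
            occupied.add(target)
    return positions

def step_parallel(positions, speeds, length, rng):
    """Parallel/synchronous update: all agents pick targets at once, so
    conflicts over one cell must be detected and resolved (a random
    winner moves, the losers stay) -- the extra step the abstract
    highlights.  Cells occupied at the start of the step stay blocked,
    a conservative simplification."""
    occupied = set(positions)
    claims = {}
    for i, p in enumerate(positions):
        target = min(p + speeds[i], length)
        if target not in occupied:
            claims.setdefault(target, []).append(i)
    for target, contenders in claims.items():
        winner = rng.choice(contenders)
        positions[winner] = target
    return positions

# two agents with different speeds both head for cell 2
rng = random.Random(1)
par = step_parallel([0, 1], [2, 1], 10, rng)
seq = step_sequential([0, 1], [2, 1], 10, rng)
# in both schemes exactly one agent reaches cell 2; under the parallel
# scheme it is the randomly chosen conflict winner
```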

  13. Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr; CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex; Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr

    2014-12-15

In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists in numerically solving the time-dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with a finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model presents moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constant) are taken into account by interpolations with respect to the velocity of the control rods. Parallelism across time is achieved by an adequate application of the parareal-in-time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-position control rod model, while the fine propagator is assumed to be a high-order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch–Maurer–Werner benchmark.
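
    The parareal predictor-corrector iteration described above can be sketched on a scalar linear test equation standing in for the diffusion model; the coarse propagator is one large implicit Euler step and the fine propagator many small explicit Euler steps (all parameters illustrative):

```python
import numpy as np

lam = -1.0          # toy linear problem y' = lam * y standing in for
T, N = 2.0, 8       # the diffusion operator; N coarse windows on [0, T]
dT = T / N

def coarse(y, dt):
    """One large implicit-Euler step (cheap, stable propagator G)."""
    return y / (1.0 - lam * dt)

def fine(y, dt, substeps=100):
    """Many small explicit-Euler steps (accurate propagator F)."""
    h = dt / substeps
    for _ in range(substeps):
        y = y + h * lam * y
    return y

# initial coarse sweep (the predictor)
y = np.zeros(N + 1)
y[0] = 1.0
for n in range(N):
    y[n + 1] = coarse(y[n], dT)

# parareal corrections: F and G act on the *previous* iterate, so the
# N fine solves per iteration are independent (parallel across time)
for k in range(5):
    f_prev = [fine(y[n], dT) for n in range(N)]
    g_prev = [coarse(y[n], dT) for n in range(N)]
    y_new = np.zeros(N + 1)
    y_new[0] = 1.0
    for n in range(N):
        y_new[n + 1] = coarse(y_new[n], dT) + f_prev[n] - g_prev[n]
    y = y_new

error = abs(y[-1] - np.exp(lam * T))  # approaches fine-solver accuracy
```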

  14. High-speed three-dimensional measurements with a fringe projection-based optical sensor

    NASA Astrophysics Data System (ADS)

    Bräuer-Burchardt, Christian; Breitbarth, Andreas; Kühmstedt, Peter; Notni, Gunther

    2014-11-01

An optical three-dimensional (3-D) sensor based on a fringe projection technique that acquires the surface geometry of small objects was developed for highly resolved and ultrafast measurements. It achieves a data acquisition rate of up to 60 high-resolution 3-D datasets per second. The high measurement velocity was achieved by systematic fringe code reduction and parallel data processing. The reduction of the length of the fringe image sequence was obtained by omitting the Gray code sequence, using the geometric restrictions of the measurement objects and the geometric constraints of the sensor arrangement. The sensor covers three different measurement fields between 20 mm × 20 mm and 40 mm × 40 mm, with a spatial resolution between 10 and 20 μm, respectively. In order to obtain a robust and fast recalibration of the sensor after a change of the measurement field, a calibration procedure based on single-shot analysis of a special test object was applied, which requires little effort and time. The sensor may be used, e.g., for quality inspection of conductor boards or plugs in real-time industrial applications.

  15. Parallel hyperspectral compressive sensing method on GPU

    NASA Astrophysics Data System (ADS)

    Bernabé, Sergio; Martín, Gabriel; Nascimento, José M. P.

    2015-10-01

Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. Because the bandwidth of the connection between the satellite/airborne platform and the ground station is limited, an onboard compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPUs) using the compute unified device architecture (CUDA). This method takes into account two main properties of hyperspectral datasets, namely the high correlation among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduce the number of measurements necessary to correctly reconstruct the original data. Experimental results conducted using synthetic and real hyperspectral datasets on two different GPU architectures from NVIDIA, GeForce GTX 590 and GeForce GTX TITAN, reveal that the use of GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times when compared with the processing time of HYCA running on one core of the Intel i7-2600 CPU (3.4 GHz) with 16 GB of memory.
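
    The role of the low number of endmembers in reducing the required measurements can be illustrated with a toy linear model. This sketch shows only the subspace principle, not the actual P-HYCA coded-aperture design; all dimensions and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
bands, endmembers, pixels, measurements = 50, 3, 100, 10

# hyperspectral pixels lie near a low-dimensional endmember subspace
E = rng.random((bands, endmembers))       # endmember signatures
A = rng.random((endmembers, pixels))      # abundances
X = E @ A                                 # bands x pixels data cube

# onboard: compress each pixel with a fixed random measurement matrix
Phi = rng.standard_normal((measurements, bands))
Y = Phi @ X                               # 10 numbers per pixel, not 50

# on the ground: with the subspace known, least squares recovers the
# abundances exactly in this noiseless toy setting, since Phi @ E is
# full column rank (measurements >= endmembers)
A_hat = np.linalg.lstsq(Phi @ E, Y, rcond=None)[0]
X_hat = E @ A_hat
residual = np.max(np.abs(X - X_hat))      # ~machine precision
```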

  16. Parallel and Efficient Sensitivity Analysis of Microscopy Image Segmentation Workflows in Hybrid Systems

    PubMed Central

    Barreiros, Willian; Teodoro, George; Kurc, Tahsin; Kong, Jun; Melo, Alba C. M. A.; Saltz, Joel

    2017-01-01

We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. An SA can be very computationally demanding because it requires re-processing the input dataset several times with different parameters to assess variations in output. In this work, we introduce strategies to efficiently speed up SA via runtime optimizations targeting distributed hybrid systems and reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual-socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. Cooperative execution using the CPUs and the Phi available in each node, with smart task assignment strategies, resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× on the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies. PMID:29081725

  17. Segmentation of remotely sensed data using parallel region growing

    NASA Technical Reports Server (NTRS)

    Tilton, J. C.; Cox, S. C.

    1983-01-01

The improved spatial resolution of the new earth resources satellites will increase the need for effective utilization of spatial information in machine processing of remotely sensed data. One promising technique is scene segmentation by region growing. Region growing can use spatial information in two ways: only spatially adjacent regions merge together, and merging criteria can be based on region-wide spatial features. A simple region-growing approach is described in which the similarity criterion is based on region mean and variance (a simple spatial feature). An effective way to implement region growing for remote sensing is as an iterative parallel process on a large parallel processor. A straightforward parallel pixel-based implementation of the algorithm is explored and its efficiency is compared with sequential pixel-based, sequential region-based, and parallel region-based implementations. Experimental results from an aircraft scanner data set are presented, as is a discussion of proposed improvements to the segmentation algorithm.
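
    The merge step of such a region-growing segmentation can be sketched as follows, using a mean-difference test as a simplified stand-in for the mean/variance similarity criterion (a sequential toy version; each merge test is local, which is what makes the parallel implementations discussed above possible):

```python
import numpy as np

def region_grow(image, threshold):
    """Iteratively merge spatially adjacent regions whose means differ
    by less than `threshold`.  Each pixel starts as its own region;
    merging repeats until no adjacent pair passes the test."""
    labels = np.arange(image.size).reshape(image.shape)
    changed = True
    while changed:
        changed = False
        for r in range(image.shape[0]):
            for c in range(image.shape[1]):
                # only right and down neighbors: spatial adjacency
                for nr, nc in ((r, c + 1), (r + 1, c)):
                    if nr >= image.shape[0] or nc >= image.shape[1]:
                        continue
                    a, b = labels[r, c], labels[nr, nc]
                    if a == b:
                        continue
                    if abs(image[labels == a].mean()
                           - image[labels == b].mean()) < threshold:
                        labels[labels == b] = a
                        changed = True
    return labels

img = np.array([[1.0, 1.1, 5.0],
                [0.9, 1.0, 5.2]])
labs = region_grow(img, 0.5)
# the dark 2x2 block and the bright column become two regions
```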

  18. Generalized parallel-perspective stereo mosaics from airborne video.

    PubMed

    Zhu, Zhigang; Hanson, Allen R; Riseman, Edward M

    2004-02-01

    In this paper, we present a new method for automatically and efficiently generating stereoscopic mosaics by seamless registration of images collected by a video camera mounted on an airborne platform. Using a parallel-perspective representation, a pair of geometrically registered stereo mosaics can be precisely constructed under quite general motion. A novel parallel ray interpolation for stereo mosaicing (PRISM) approach is proposed to make stereo mosaics seamless in the presence of obvious motion parallax and for rather arbitrary scenes. Parallel-perspective stereo mosaics generated with the PRISM method have better depth resolution than perspective stereo due to the adaptive baseline geometry. Moreover, unlike previous results showing that parallel-perspective stereo has a constant depth error, we conclude that the depth estimation error of stereo mosaics is in fact a linear function of the absolute depths of a scene. Experimental results on long video sequences are given.

  19. Three-dimensional anisotropy contrast periodically rotated overlapping parallel lines with enhanced reconstruction (3DAC PROPELLER) on a 3.0T system: a new modality for routine clinical neuroimaging.

    PubMed

    Nakada, Tsutomu; Matsuzawa, Hitoshi; Fujii, Yukihiko; Takahashi, Hitoshi; Nishizawa, Masatoyo; Kwee, Ingrid L

    2006-07-01

Clinical magnetic resonance imaging (MRI) has recently entered the "high-field" era, and systems equipped with 3.0-4.0T superconductive magnets are becoming the gold standard for diagnostic imaging. While a higher signal-to-noise ratio (S/N) is a definite advantage of higher-field systems, the stronger susceptibility effect remains a significant trade-off. To take advantage of a higher-field system in performing routine clinical imaging of higher anatomical resolution, we implemented a vector contrast imaging technique, three-dimensional anisotropy contrast (3DAC), at 3.0T with a PROPELLER (Periodically Rotated Overlapping Parallel Lines with Enhanced Reconstruction) sequence, a method capable of effectively eliminating undesired artifacts on rapid diffusion imaging sequences. One hundred subjects (20 normal volunteers and 80 volunteers with various central nervous system diseases) participated in the study. Anisotropic diffusion-weighted PROPELLER images were obtained on a General Electric (Waukesha, WI, USA) Signa 3.0T system for each axis, with a b-value of 1100 s/mm². Subsequently, 3DAC images were constructed using in-house software written in MATLAB (MathWorks, Natick, MA, USA). The vector contrast provides exquisite anatomical detail, illustrated by clear identification of all major tracts throughout the entire brain. 3DAC images provide better anatomical resolution for brainstem glioma than higher-resolution T2 reversed images. Degenerative processes of disease-specific tracts were clearly identified, as illustrated in cases of multiple system atrophy and Machado-Joseph disease. Anatomical images of significantly higher resolution than the current best standard, T2 reversed images, were successfully obtained. As a technique readily applicable in the routine clinical setting, 3DAC PROPELLER on a 3.0T system will be a powerful addition to diagnostic imaging.

  20. Status of astigmatism-corrected Czerny-Turner spectrometers

    NASA Astrophysics Data System (ADS)

    Li, Xinhang; Dong, Keyan; An, Yan; Wang, Zhenye

    2016-10-01

In order to analyze and design Czerny-Turner spectrometers with high resolution and high energy reception, various astigmatism-correction methods for the Czerny-Turner structure are reviewed. According to the location of the plane grating, the astigmatism-correction methods are divided into two categories: one with the plane grating in divergent illumination, the other with the plane grating in parallel illumination. For the different methods, the anastigmatic principles are analyzed, and the merits and demerits of the methods are summarized and evaluated. This summary and analysis lays the theoretical foundation for the design of broadband astigmatism-corrected Czerny-Turner spectrometers and provides a reference for further design work.

  1. A new chemical tool for absinthe producers, quantification of α/β-thujone and the bitter components in Artemisia absinthium.

    PubMed

    Bach, Benoit; Cleroux, Marilyn; Saillen, Mayra; Schönenberger, Patrik; Burgos, Stephane; Ducruet, Julien; Vallat, Armelle

    2016-12-15

The concentrations of α/β-thujone and the bitter components of Artemisia absinthium were quantified from alcoholic wormwood extracts during four phenological stages of their harvest period. A solid-phase micro-extraction method coupled to gas chromatography-mass spectrometry was used to determine the concentration of the two isomeric forms of thujone. In parallel, the combination of ultra-high-pressure liquid chromatography and high-resolution mass spectrometry allowed quantification of the compounds absinthin, artemisetin and dihydro-epi-deoxyarteannuin B. The present study aimed at helping absinthe producers to determine the best harvesting period. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Helium ion microscopy and ultra-high-resolution scanning electron microscopy analysis of membrane-extracted cells reveals novel characteristics of the cytoskeleton of Giardia intestinalis.

    PubMed

    Gadelha, Ana Paula Rocha; Benchimol, Marlene; de Souza, Wanderley

    2015-06-01

Giardia intestinalis presents a complex microtubular cytoskeleton formed by specialized structures, such as the adhesive disk, four pairs of flagella, the funis and the median body. The ultrastructural organization of the Giardia cytoskeleton has been analyzed using different microscopic techniques, including high-resolution scanning electron microscopy. Recent advances in scanning microscopy technology have opened a new venue for the characterization of cellular structures and include ultra-high-resolution scanning electron microscopy (UHRSEM) and helium ion microscopy (HIM). Here, we studied the organization of the cytoskeleton of G. intestinalis trophozoites using UHRSEM and HIM in membrane-extracted cells. The results revealed a number of new cytoskeletal elements associated with the lateral crest and the dorsal surface of the parasite. The fine structure of the banded collar was also observed. The marginal plates were seen linked to a network of filaments, which were continuous with filaments parallel to the main cell axis. Cytoplasmic filaments that support the internal structures were seen for the first time. Using an anti-actin antibody, we observed labeling of these filamentous structures. Taken together, these data reveal new surface characteristics of the cytoskeleton of G. intestinalis and may contribute to an improved understanding of the structural organization of trophozoites. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Effects of electrode size and spacing on sensory modalities in the phantom thumb perception area for the forearm amputees.

    PubMed

    Li, P; Chai, G H; Zhu, K H; Lan, N; Sui, X H

    2015-01-01

Tactile sensory feedback plays a key role in accomplishing dexterous manipulation of prosthetic hands for amputees, and non-invasive transcutaneous electrical nerve stimulation (TENS) of the phantom finger perception (PFP) area would be an effective way to realize sensory feedback clinically. In order to realize high-spatial-resolution tactile sensory feedback in the PFP region, we investigated the effects of electrode size and spacing on the tactile sensations, with a view to optimizing the surface electrode array configuration. Six forearm-amputated subjects were recruited for the psychophysical studies. As the diameter of the circular electrode increased from 3 mm to 12 mm, the threshold current intensity increased correspondingly under different sensory modalities. A smaller electrode could potentially lead to higher spatial resolution of sensation; however, the smaller the electrode, the smaller the number of sensory modalities. For a Φ-3 mm electrode, it was even difficult for the subject to perceive any sensory modality under normal stimulating current. In addition, the two-electrode discrimination distance (TEDD) in the phantom thumb perception area decreased with decreasing electrode size in the directions both parallel and perpendicular to the forearm, with no significant difference in TEDD between the two directions. The studies in this paper can guide the configuration optimization of the TENS electrode array for potential high-spatial-resolution sensory feedback.

  4. Mars-solar wind interaction: LatHyS, an improved parallel 3-D multispecies hybrid model

    NASA Astrophysics Data System (ADS)

    Modolo, Ronan; Hess, Sebastien; Mancini, Marco; Leblanc, Francois; Chaufray, Jean-Yves; Brain, David; Leclercq, Ludivine; Esteban-Hernández, Rosa; Chanteur, Gerard; Weill, Philippe; González-Galindo, Francisco; Forget, Francois; Yagi, Manabu; Mazelle, Christian

    2016-07-01

In order to better represent the Mars-solar wind interaction, we present an unprecedented model achieving a spatial resolution down to 50 km, a so-far-unexplored resolution for global kinetic models of the Martian ionized environment. Such resolution approaches the ionospheric plasma scale height. In practice, the model is derived from a first version described in Modolo et al. (2005). An important parallelization effort has been conducted and is presented here. A better description of the ionosphere was also implemented, including ionospheric chemistry, electrical conductivities, and a drag force modeling ion-neutral collisions in the ionosphere. This new version of the code, named LatHyS (Latmos Hybrid Simulation), is used here to characterize the impact of various spatial resolutions on simulation results. In addition, following a global model challenge effort, we present the results of simulation runs for three cases, which allow us to address the effect of the suprathermal corona and of solar EUV activity on the magnetospheric plasma boundaries and on the global escape. Simulation results showed that global patterns are relatively similar across the different spatial-resolution runs, but the finest-grid runs provide a better representation of the ionosphere and display more details of the planetary plasma dynamics. Simulation results suggest that a significant fraction of escaping O+ ions originates from below 1200 km altitude.

  5. Pushing the limits of high-resolution functional MRI using a simple high-density multi-element coil design.

    PubMed

    Petridou, N; Italiaander, M; van de Bank, B L; Siero, J C W; Luijten, P R; Klomp, D W J

    2013-01-01

Recent studies have shown that functional MRI (fMRI) can be sensitive to the laminar and columnar organization of the cortex based on differences in the spatial and temporal characteristics of the blood oxygenation level-dependent (BOLD) signal originating from the macrovasculature and the neuronal-specific microvasculature. Human fMRI studies at this scale of the cortical architecture, however, are very rare because the high spatial/temporal resolution required to explore these properties of the BOLD signal is limited by the signal-to-noise ratio. Here, we show that it is possible to detect BOLD signal changes at an isotropic spatial resolution as high as 0.55 mm at 7 T using a high-density multi-element surface coil with minimal electronics, which allows close proximity to the head. The coil comprises very small, 1 × 2 cm², elements arranged in four flexible modules of four elements each (16 channels) that can be positioned within 1 mm of the head. As a result of this proximity, tissue losses were five-fold greater than coil losses and sufficient to exclude preamplifier decoupling. When compared with a standard 16-channel head coil, the BOLD sensitivity was approximately 2.2-fold higher for a high spatial/temporal resolution (1 mm isotropic/0.4 s), multi-slice, echo planar acquisition, and approximately three- and six-fold higher for three-dimensional echo planar images acquired with isotropic resolutions of 0.7 and 0.55 mm, respectively. Improvements in parallel imaging performance (geometry factor) were up to around 1.5-fold with increasing acceleration factor, and improvements in fMRI detectability (temporal signal-to-noise ratio) were up to around four-fold depending on the distance to the coil. Although deeper-lying structures may not benefit from the design, most fMRI questions pertain to the neocortex, which lies within approximately 4 cm of the surface. 
These results suggest that the resolution of fMRI (at 7 T) can approximate levels that are closer to the spatial/temporal scale of the fundamental functional organization of the human cortex using a simple high-density coil design for high sensitivity. Copyright © 2012 John Wiley & Sons, Ltd.

  6. Effect of subaperture beamforming on phase coherence imaging.

    PubMed

    Hasegawa, Hideyuki; Kanai, Hiroshi

    2014-11-01

High-frame-rate echocardiography using unfocused transmit beams and parallel receive beamforming is a promising method for evaluation of cardiac function, such as imaging of the rapid propagation of heart-wall vibration resulting from electrical stimulation of the myocardium. In this technique, high temporal resolution is realized at the expense of spatial resolution and contrast. The phase coherence factor has been developed to improve spatial resolution and contrast in ultrasonography. It evaluates the variance in the phases of echo signals received by individual transducer elements after delay compensation, as in the conventional delay-and-sum beamforming process. However, the phase coherence factor also suppresses speckle echoes, because the phases of speckle echoes fluctuate as a result of interference of echoes. In the present study, the receiving aperture was divided into several subapertures, and conventional delay-and-sum beamforming was performed on each subaperture to suppress echoes from scatterers other than that at the focal point. After subaperture beamforming, the phase coherence factor was obtained from the beamformed RF signals of the respective subapertures. By means of this procedure, undesirable echoes, which can interfere with the echo from the focal point, are suppressed by subaperture beamforming, and the suppression of the phase coherence factor resulting from phase fluctuation caused by such interference can be avoided. The effect of subaperture beamforming in high-frame-rate echocardiography with the phase coherence factor was evaluated using a phantom. With subaperture beamforming, the average intensity of speckle echoes from a diffuse scattering medium was significantly higher (-39.9 dB) than that obtained without subaperture beamforming (-48.7 dB). As for spatial resolution, the width at half-maximum of the lateral echo amplitude profile obtained without the phase coherence factor was 1.06 mm. 
With the phase coherence factor, spatial resolution was improved significantly, and subaperture beamforming achieved a spatial resolution of 0.75 mm, better than the 0.78 mm obtained without subaperture beamforming.
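
    The phase coherence factor applied after subaperture beamforming can be sketched as follows, using the common definition based on the standard deviation of element phases (a hedged illustration, not necessarily the authors' exact formulation; all parameters are illustrative):

```python
import numpy as np

SIGMA0 = np.pi / np.sqrt(3.0)   # std of a uniform phase distribution

def phase_coherence_factor(signals, n_sub, gamma=1.0):
    """Coherence weight from the phase spread across the aperture.

    `signals`: complex per-element samples after delay compensation.
    Elements are first summed within `n_sub` subapertures (the step the
    abstract adds); the factor is then computed from the phases of the
    subaperture sums, so interference-driven phase fluctuation *within*
    a subaperture no longer suppresses speckle.
    """
    sub = np.asarray(signals).reshape(n_sub, -1).sum(axis=1)
    sigma_phi = np.std(np.angle(sub))
    return max(0.0, 1.0 - gamma * sigma_phi / SIGMA0)

# coherent echo from the focal point: identical phases -> factor = 1
elem = np.exp(1j * np.zeros(64))
pcf_focus = phase_coherence_factor(elem, n_sub=8)

# off-axis echo: linear phase ramp across elements -> factor < 1
ramp = np.exp(1j * np.linspace(-np.pi, np.pi, 64))
pcf_offaxis = phase_coherence_factor(ramp, n_sub=8)
```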

  7. VLSI-based video event triggering for image data compression

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  8. VLSI-based Video Event Triggering for Image Data Compression

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1994-01-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  9. Management of Transjugular Intrahepatic Portosystemic Shunt (TIPS)-associated Refractory Hepatic Encephalopathy by Shunt Reduction Using the Parallel Technique: Outcomes of a Retrospective Case Series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cookson, Daniel T., E-mail: danielthomascookson@yahoo.co.uk; Zaman, Zubayr; Gordon-Smith, James

    2011-02-15

    Purpose: To investigate the reproducibility and technical and clinical success of the parallel technique of transjugular intrahepatic portosystemic shunt (TIPS) reduction in the management of refractory hepatic encephalopathy (HE). Materials and Methods: A 10-mm-diameter self-expanding stent graft and a 5-6-mm-diameter balloon-expandable stent were placed in parallel inside the existing TIPS in 8 patients via a dual unilateral transjugular approach. Changes in portosystemic pressure gradient and HE grade were used as primary end points. Results: TIPS reduction was technically successful in all patients. Mean ± standard deviation portosystemic pressure gradient before and after shunt reduction was 4.9 ± 3.6 mmHg (range, 0-12 mmHg) and 10.5 ± 3.9 mmHg (range, 6-18 mmHg). Duration of follow-up was 137 ± 117.8 days (range, 18-326 days). Clinical improvement of HE occurred in 5 patients (62.5%) with resolution of HE in 4 patients (50%). Single episodes of recurrent gastrointestinal hemorrhage occurred in 3 patients (37.5%). These were self-limiting in 2 cases and successfully managed in 1 case by correction of coagulopathy and blood transfusion. Two of these patients (25%) died, one each of renal failure and hepatorenal failure. Conclusion: The parallel technique of TIPS reduction is reproducible and has a high technical success rate. A dual unilateral transjugular approach is advantageous when performing this procedure. The parallel technique allows repeat bidirectional TIPS adjustment and may be of significant clinical benefit in the management of refractory HE.

  10. TH-EF-BRA-11: Feasibility of Super-Resolution Time-Resolved 4DMRI for Multi-Breath Volumetric Motion Simulation in Radiotherapy Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, G; Zakian, K; Deasy, J

    Purpose: To develop a novel super-resolution time-resolved 4DMRI technique to evaluate multi-breath, irregular and complex organ motion without a respiratory surrogate for radiotherapy planning. Methods: The super-resolution time-resolved (TR) 4DMRI approach combines a series of low-resolution 3D cine MRI images acquired during free breathing (FB) with a high-resolution breath-hold (BH) 3DMRI via deformable image registration (DIR). Five volunteers participated in the study under an IRB-approved protocol. The 3D cine images with voxel size of 5×5×5 mm³ at two volumes per second (2 Hz) were acquired coronally using a T1 fast field echo sequence, half-scan (0.8) acceleration, and SENSE (3) parallel imaging. Phase-encoding was set in the lateral direction to minimize motion artifacts. The BH image with voxel size of 2×2×2 mm³ was acquired using the same sequence within 10 seconds. A demons-based DIR program was employed to produce super-resolution 2 Hz 4DMRI. Registration quality was visually assessed using difference images between TR 4DMRI and 3D cine and quantitatively assessed using average voxel correlation. The fidelity of the 3D cine images was assessed using a gel phantom and a 1D motion platform by comparing mobile and static images. Results: Owing to voxel intensity similarity using the same MRI scanning sequence, accurate DIR between FB and BH images is achieved. The voxel correlations between 3D cine and TR 4DMRI are greater than 0.92 in all cases and the difference images show minimal residual error with little systematic pattern. The 3D cine images of the mobile gel phantom preserve object geometry with minimal scanning artifacts. Conclusion: The super-resolution time-resolved 4DMRI technique has been achieved via DIR, providing a potential solution for multi-breath motion assessment. Accurate DIR mapping has been achieved to map high-resolution BH images to low-resolution FB images, producing 2 Hz volumetric high-resolution 4DMRI. Further validation and improvement are still required prior to clinical application. This study is in part supported by the NIH (U54CA137788/U54CA132378).
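
    The registration-quality metric used above, an average voxel correlation between the TR 4DMRI and 3D cine volumes, amounts to a Pearson correlation over co-registered voxels. A minimal sketch with a hypothetical helper name (the paper's exact averaging scheme may differ):

```python
import numpy as np

def voxel_correlation(vol_a, vol_b):
    """Pearson correlation of two co-registered image volumes over all
    voxels; a simple global agreement score for registration QA."""
    a = vol_a.astype(float).ravel()
    b = vol_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

    Values near 1 indicate close intensity agreement after registration; the abstract reports values above 0.92.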

  11. New photon-counting detectors for single-molecule fluorescence spectroscopy and imaging

    PubMed Central

    Michalet, X.; Colyer, R. A.; Scalia, G.; Weiss, S.; Siegmund, Oswald H. W.; Tremsin, Anton S.; Vallerga, John V.; Villa, F.; Guerrieri, F.; Rech, I.; Gulinatti, A.; Tisa, S.; Zappa, F.; Ghioni, M.; Cova, S.

    2013-01-01

    Solution-based single-molecule fluorescence spectroscopy is a powerful new experimental approach with applications in all fields of natural sciences. Two typical geometries can be used for these experiments: point-like and widefield excitation and detection. In point-like geometries, the basic concept is to excite and collect light from a very small volume (typically femtoliter) and work in a concentration regime resulting in rare burst-like events corresponding to the transit of a single molecule. Those events are accumulated over time to achieve proper statistical accuracy. Therefore, the advantage of extreme sensitivity is somewhat counterbalanced by a very long acquisition time. One way to speed up data acquisition is parallelization. Here we will discuss a general approach to address this issue, using a multispot excitation and detection geometry that can accommodate different types of novel highly-parallel detector arrays. We will illustrate the potential of this approach with fluorescence correlation spectroscopy (FCS) and single-molecule fluorescence measurements. In widefield geometries, the same issues of background reduction and single-molecule concentration apply, but the duration of the experiment is fixed by the time scale of the process studied and the survival time of the fluorescent probe. Temporal resolution, on the other hand, is limited by signal-to-noise and/or detector resolution, which calls for new detector concepts. We will briefly present our recent results in this domain. PMID:24729836

  12. New photon-counting detectors for single-molecule fluorescence spectroscopy and imaging.

    PubMed

    Michalet, X; Colyer, R A; Scalia, G; Weiss, S; Siegmund, Oswald H W; Tremsin, Anton S; Vallerga, John V; Villa, F; Guerrieri, F; Rech, I; Gulinatti, A; Tisa, S; Zappa, F; Ghioni, M; Cova, S

    2011-05-13

    Solution-based single-molecule fluorescence spectroscopy is a powerful new experimental approach with applications in all fields of natural sciences. Two typical geometries can be used for these experiments: point-like and widefield excitation and detection. In point-like geometries, the basic concept is to excite and collect light from a very small volume (typically femtoliter) and work in a concentration regime resulting in rare burst-like events corresponding to the transit of a single molecule. Those events are accumulated over time to achieve proper statistical accuracy. Therefore, the advantage of extreme sensitivity is somewhat counterbalanced by a very long acquisition time. One way to speed up data acquisition is parallelization. Here we will discuss a general approach to address this issue, using a multispot excitation and detection geometry that can accommodate different types of novel highly-parallel detector arrays. We will illustrate the potential of this approach with fluorescence correlation spectroscopy (FCS) and single-molecule fluorescence measurements. In widefield geometries, the same issues of background reduction and single-molecule concentration apply, but the duration of the experiment is fixed by the time scale of the process studied and the survival time of the fluorescent probe. Temporal resolution, on the other hand, is limited by signal-to-noise and/or detector resolution, which calls for new detector concepts. We will briefly present our recent results in this domain.

  13. Multivariate curve resolution based chromatographic peak alignment combined with parallel factor analysis to exploit second-order advantage in complex chromatographic measurements.

    PubMed

    Parastar, Hadi; Akvan, Nadia

    2014-03-13

    In the present contribution, a new combination of multivariate curve resolution-correlation optimized warping (MCR-COW) with trilinear parallel factor analysis (PARAFAC) is developed to exploit second-order advantage in complex chromatographic measurements. In MCR-COW, the complexity of the chromatographic data is reduced by arranging the data in a column-wise augmented matrix, analyzing using MCR bilinear model and aligning the resolved elution profiles using COW in a component-wise manner. The aligned chromatographic data is then decomposed using trilinear model of PARAFAC in order to exploit pure chromatographic and spectroscopic information. The performance of this strategy is evaluated using simulated and real high-performance liquid chromatography-diode array detection (HPLC-DAD) datasets. The obtained results showed that the MCR-COW can efficiently correct elution time shifts of target compounds that are completely overlapped by coeluted interferences in complex chromatographic data. In addition, the PARAFAC analysis of aligned chromatographic data has the advantage of unique decomposition of overlapped chromatographic peaks to identify and quantify the target compounds in the presence of interferences. Finally, to confirm the reliability of the proposed strategy, the performance of the MCR-COW-PARAFAC is compared with the frequently used methods of PARAFAC, COW-PARAFAC, multivariate curve resolution-alternating least squares (MCR-ALS), and MCR-COW-MCR. In general, in most of the cases the MCR-COW-PARAFAC showed an improvement in terms of lack of fit (LOF), relative error (RE) and spectral correlation coefficients in comparison to the PARAFAC, COW-PARAFAC, MCR-ALS and MCR-COW-MCR results. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Large scale cardiac modeling on the Blue Gene supercomputer.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J

    2008-01-01

    Multi-scale, multi-physical heart models have not yet been able to include a high degree of accuracy and resolution with respect to model detail and spatial resolution due to computational limitations of current systems. We propose a framework to compute large scale cardiac models. Decomposition of anatomical data in segments to be distributed on a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter which has to be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical data-set was given by both ventricles of the Visible Female data-set in a 0.2 mm resolution. Heterogeneous anisotropy was included in the computation. Model weights as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements; (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096 and 8192 computational nodes were obtained for 10 ms simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balances did not differ significantly across computational nodes even if the number of data elements distributed to each node differed greatly. Since the ORB algorithm did not take into account computational load due to communication cycles, the speedup is close to optimal for the computation time but not optimal overall due to the communication overhead. However, the simulation times were reduced from 87 minutes on 512 nodes to 11 minutes on 8192 nodes. This work demonstrates that it is possible to run simulations of the presented detailed cardiac model within hours for the simulation of a heart beat.
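
    The load-balancing idea above, recursive bisection of weighted elements (e.g. weight 10 for tissue, 1 for non-tissue), can be sketched in one dimension. This is a simplified 1-D stand-in for the 3-D ORB used in the paper:

```python
import numpy as np

def orb_partition(weights, n_parts):
    """Recursively bisect a 1-D sequence of element weights into n_parts
    contiguous chunks of roughly equal total weight. Each bisection picks
    the cut whose left side carries its proportional share of the load."""
    if n_parts == 1:
        return [weights]
    left = n_parts // 2
    target = weights.sum() * left / n_parts          # load the left half should carry
    cut = int(np.searchsorted(np.cumsum(weights), target, side='right'))
    return (orb_partition(weights[:cut], left) +
            orb_partition(weights[cut:], n_parts - left))
```

    With non-uniform weights, chunks differ in element count but carry similar load, mirroring the observation that node loads balanced even when element counts did not.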

  15. A semi-Lagrangian transport method for kinetic problems with application to dense-to-dilute polydisperse reacting spray flows

    NASA Astrophysics Data System (ADS)

    Doisneau, François; Arienti, Marco; Oefelein, Joseph C.

    2017-01-01

    For sprays, as described by a kinetic disperse phase model strongly coupled to the Navier-Stokes equations, the resolution strategy is constrained by accuracy objectives, robustness needs, and the computing architecture. In order to leverage the good properties of the Eulerian formalism, we introduce a deterministic particle-based numerical method to solve transport in physical space, which is simple to adapt to the many types of closures and moment systems. The method is inspired by the semi-Lagrangian schemes developed for gas dynamics. We show how semi-Lagrangian formulations are relevant for a disperse phase far from equilibrium, where particle-particle coupling barely influences the transport, i.e., when particle pressure is negligible. The particle behavior is then close to free streaming. The new method uses the assumption of parcel transport and avoids computing fluxes and their limiters, which makes it robust. It is a deterministic resolution method, so it requires no effort on statistical convergence, noise control, or post-processing. All couplings are performed on data in the form of Eulerian fields, which allows one to use efficient algorithms and to anticipate the computational load. This makes the method both accurate and efficient in the context of parallel computing. After a complete verification of the new transport method on various academic test cases, we demonstrate the overall strategy's ability to solve a strongly-coupled liquid jet with fine spatial resolution and apply it to high-fidelity Large Eddy Simulation of a dense spray flow. A fuel spray is simulated after atomization at Diesel engine combustion chamber conditions. The large, parallel, strongly coupled computation proves the efficiency of the method for dense, polydisperse, reacting spray flows.
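
    The parcel-transport idea, free-streaming parcels advanced along their characteristics with Eulerian fields rebuilt by deposition (no fluxes or limiters), can be illustrated with a toy 1-D step. Names and shapes here are illustrative, not the paper's scheme:

```python
import numpy as np

def semi_lagrangian_step(x, v, dt, edges):
    """Advance parcel positions along their characteristics, then
    re-deposit parcel counts onto an Eulerian 1-D grid.

    x, v: parcel positions and velocities; edges: grid cell edges.
    Returns the new positions and the deposited number density.
    """
    x_new = x + v * dt                             # free-streaming characteristics
    density, _ = np.histogram(x_new, bins=edges)   # Eulerian field update by deposition
    return x_new, density
```

    Because parcels are moved deterministically, no statistical noise enters the Eulerian fields, echoing the abstract's point about avoiding convergence and noise-control effort.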

  16. A semi-Lagrangian transport method for kinetic problems with application to dense-to-dilute polydisperse reacting spray flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doisneau, François, E-mail: fdoisne@sandia.gov; Arienti, Marco, E-mail: marient@sandia.gov; Oefelein, Joseph C., E-mail: oefelei@sandia.gov

    For sprays, as described by a kinetic disperse phase model strongly coupled to the Navier–Stokes equations, the resolution strategy is constrained by accuracy objectives, robustness needs, and the computing architecture. In order to leverage the good properties of the Eulerian formalism, we introduce a deterministic particle-based numerical method to solve transport in physical space, which is simple to adapt to the many types of closures and moment systems. The method is inspired by the semi-Lagrangian schemes developed for gas dynamics. We show how semi-Lagrangian formulations are relevant for a disperse phase far from equilibrium, where particle–particle coupling barely influences the transport, i.e., when particle pressure is negligible. The particle behavior is then close to free streaming. The new method uses the assumption of parcel transport and avoids computing fluxes and their limiters, which makes it robust. It is a deterministic resolution method, so it requires no effort on statistical convergence, noise control, or post-processing. All couplings are performed on data in the form of Eulerian fields, which allows one to use efficient algorithms and to anticipate the computational load. This makes the method both accurate and efficient in the context of parallel computing. After a complete verification of the new transport method on various academic test cases, we demonstrate the overall strategy's ability to solve a strongly-coupled liquid jet with fine spatial resolution and apply it to high-fidelity Large Eddy Simulation of a dense spray flow. A fuel spray is simulated after atomization at Diesel engine combustion chamber conditions. The large, parallel, strongly coupled computation proves the efficiency of the method for dense, polydisperse, reacting spray flows.

  17. Electromagnetic diagnostics of ECR-Ion Sources plasmas: optical/X-ray imaging and spectroscopy

    NASA Astrophysics Data System (ADS)

    Mascali, D.; Castro, G.; Altana, C.; Caliri, C.; Mazzaglia, M.; Romano, F. P.; Leone, F.; Musumarra, A.; Naselli, E.; Reitano, R.; Torrisi, G.; Celona, L.; Cosentino, L. G.; Giarrusso, M.; Gammino, S.

    2017-12-01

    Magnetoplasmas in ECR-Ion Sources are excited from gaseous elements or vapours by microwaves in the range 2.45-28 GHz via Electron Cyclotron Resonance. A B-minimum, magnetohydrodynamically stable configuration is used for trapping the plasma. The values of plasma density, temperature and confinement times are typically ne = 10^11-10^13 cm^-3, 0.1 eV

  18. First Keck Interferometer measurements in self-phase referencing mode: spatially resolving circum-stellar line emission of 48 Lib

    NASA Astrophysics Data System (ADS)

    Pott, J.-U.; Woillez, J.; Ragland, S.; Wizinowich, P. L.; Eisner, J. A.; Monnier, J. D.; Akeson, R. L.; Ghez, A. M.; Graham, J. R.; Hillenbrand, L. A.; Millan-Gabet, R.; Appleby, E.; Berkey, B.; Colavita, M. M.; Cooper, A.; Felizardo, C.; Herstein, J.; Hrynevych, M.; Medeiros, D.; Morrison, D.; Panteleeva, T.; Smith, B.; Summers, K.; Tsubota, K.; Tyau, C.; Wetherell, E.

    2010-07-01

    Recently, the Keck Interferometer was upgraded to perform self-phase-referencing (SPR) assisted K-band spectroscopy at R ~ 2000. This means combining a spectral resolution of 150 km/s with an angular resolution of 2.7 mas, while maintaining high sensitivity. This SPR mode operates two fringe trackers in parallel, and explores several infrastructural requirements for off-axis phase-referencing, as currently being implemented in the KI-ASTRA project. The technology of self-phase-referencing opens the way to very high spectral resolution in near-infrared interferometry. We present the scientific capabilities of the KI-SPR mode in detail, using the example of observations of the Be star 48 Lib. Several spectral lines of the circumstellar disk are resolved. We describe the first detection of Pfund lines in an interferometric spectrum of a Be star, in addition to Br γ. The differential phase signal can be used to (i) distinguish circumstellar line emission from the star, (ii) directly measure line asymmetries tracing an asymmetric gas density distribution, and (iii) reach a differential, astrometric precision beyond single-telescope limits sufficient for studying the radial disk structure. Our data support the existence of a radius-dependent disk density perturbation, typically used to explain slow variations of Be-disk hydrogen line profiles.

  19. Comparison of Cornea Module and DermaInspect for noninvasive imaging of ocular surface pathologies

    NASA Astrophysics Data System (ADS)

    Steven, Philipp; Müller, Maya; Koop, Norbert; Rose, Christian; Hüttmann, Gereon

    2009-11-01

    Minimally invasive imaging of ocular surface pathologies aims at securing clinical diagnosis without actual tissue probing. For this matter, confocal microscopy (Cornea Module) is in daily use in ophthalmic practice. Multiphoton microscopy is a new optical technique that enables high-resolution imaging and functional analysis of living tissues based on tissue autofluorescence. This study was set up to compare the potential of a multiphoton microscope (DermaInspect) to the Cornea Module. Ocular surface pathologies such as pterygia, papillomae, and nevi were investigated in vivo using the Cornea Module and imaged immediately after excision by DermaInspect. Two excitation wavelengths, fluorescence lifetime imaging and second-harmonic generation (SHG), were used to discriminate different tissue structures. Images were compared with the histopathological assessment of the samples. At wavelengths of 730 nm, multiphoton microscopy exclusively revealed cellular structures. Collagen fibrils were specifically demonstrated by second-harmonic generation. Measurements of fluorescent lifetimes enabled the highly specific detection of goblet cells, erythrocytes, and nevus-cell clusters. At the settings used, DermaInspect reaches higher resolutions than the Cornea Module and obtains additional structural information. The parallel detection of multiphoton excited autofluorescence and confocal imaging could expand the possibilities of minimally invasive investigation of the ocular surface toward functional analysis at higher resolutions.

  20. Modularized Parallel Neutron Instrument Simulation on the TeraGrid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Meili; Cobb, John W; Hagen, Mark E

    2007-01-01

    In order to build a bridge between the TeraGrid (TG), a national scale cyberinfrastructure resource, and neutron science, the Neutron Science TeraGrid Gateway (NSTG) is focused on introducing productive HPC usage to the neutron science community, primarily the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL). Monte Carlo simulations are used as a powerful tool for instrument design and optimization at SNS. One of the successful efforts of a collaboration team composed of NSTG HPC experts and SNS instrument scientists is the development of a software facility named PSoNI, Parallelizing Simulations of Neutron Instruments. Parallelizing the traditional serial instrument simulation on TeraGrid resources, PSoNI quickly computes full instrument simulations at sufficient statistical levels for instrument design. Upon successful commissioning of SNS, by the end of 2007 three out of five commissioned instruments in the SNS target station will be available for initial users. Advanced instrument study, proposal feasibility evaluation, and experiment planning are on the immediate schedule of SNS, which poses further requirements such as flexibility and high runtime efficiency for fast instrument simulation. PSoNI has been redesigned to meet the new challenges and a preliminary version has been developed on TeraGrid. This paper explores the motivation and goals of the new design, and the improved software structure. Further, it describes the realized new features as seen from MPI-parallelized McStas running high resolution design simulations of the SEQUOIA and BSS instruments at SNS. A discussion regarding future work, which targets fast simulation for automated experiment adjustment and comparison of models to data in analysis, is also presented.

  1. Advancing MODFLOW Applying the Derived Vector Space Method

    NASA Astrophysics Data System (ADS)

    Herrera, G. S.; Herrera, I.; Lemus-García, M.; Hernandez-Garcia, G. D.

    2015-12-01

    The most effective domain decomposition methods (DDM) are non-overlapping DDMs. Recently a new approach, the DVS-framework, based on an innovative discretization method that uses a non-overlapping system of nodes (the derived-nodes), was introduced and developed by I. Herrera et al. [1, 2]. Using the DVS-approach, a group of four algorithms, referred to as the 'DVS-algorithms', which fulfill the DDM-paradigm (i.e. the solution of global problems is obtained by resolution of local problems exclusively) has been derived. Such procedures are applicable to any boundary-value problem, or system of such equations, for which a standard discretization method is available, and software with a high degree of parallelization can then be constructed. In a parallel talk at this AGU Fall Meeting, Ismael Herrera will introduce the general DVS methodology. The application of the DVS-algorithms has been demonstrated in the solution of several boundary-value problems of interest in Geophysics. Numerical examples for a single equation, for the cases of symmetric, non-symmetric and indefinite problems, were demonstrated before [1, 2]. For these problems DVS-algorithms exhibited significantly improved numerical performance with respect to standard versions of DDM algorithms. In view of these results, our research group is applying the DVS method to a widely used simulator for the first time; here we present the advances of the application of this method to the parallelization of MODFLOW. Efficiency results for a group of tests will be presented. References: [1] I. Herrera, L. M. de la Cruz and A. Rosas-Medina. Non-overlapping discretization methods for partial differential equations. Numer Meth Part D E (2013). [2] I. Herrera and I. Contreras. An innovative tool for effectively applying highly parallelized software to problems of elasticity. Geofísica Internacional, 2015 (in press).

  2. Dual-dimensional microscopy: real-time in vivo three-dimensional observation method using high-resolution light-field microscopy and light-field display.

    PubMed

    Kim, Jonghyun; Moon, Seokil; Jeong, Youngmo; Jang, Changwon; Kim, Youngmin; Lee, Byoungho

    2018-06-01

    Here, we present dual-dimensional microscopy that captures both two-dimensional (2-D) and light-field images of an in-vivo sample simultaneously, synthesizes an upsampled light-field image in real time, and visualizes it with a computational light-field display system in real time. Compared with conventional light-field microscopy, the additional 2-D image greatly enhances the lateral resolution at the native object plane up to the diffraction limit and compensates for the image degradation at the native object plane. The whole process from capturing to displaying is done in real time with the parallel computation algorithm, which enables the observation of the sample's three-dimensional (3-D) movement and direct interaction with the in-vivo sample. We demonstrate a real-time 3-D interactive experiment with Caenorhabditis elegans. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  3. Precise measurements of droplet-droplet contact forces in quasi-2D emulsions

    NASA Astrophysics Data System (ADS)

    Lowensohn, Janna; Orellana, Carlos; Weeks, Eric

    2015-03-01

    We use microscopy to visualize a quasi-2D oil-in-water emulsion confined between two parallel slides. We then use the droplet shapes to infer the forces they exert on each other. To calibrate our force law, we set up an emulsion in a tilted sample chamber so that the droplets feel a known buoyant force. By correlating radius of the droplet and length of contacts with the buoyant forces, we validate our empirical force law. We improve upon prior work in our lab by using a high-resolution camera to image each droplet multiple times, thus providing sub-pixel resolution and reducing the noise. Our new technique identifies contact forces with only a 1% uncertainty, five times better than prior work. We demonstrate the utility of our technique by examining the normal modes of the droplet contact network in our samples.
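
    The noise-reduction claim above, imaging each droplet multiple times to gain sub-pixel precision, rests on frame averaging: uncorrelated noise shrinks roughly as 1/√N with N frames. A toy demonstration on synthetic data (not the authors' images):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.zeros((32, 32))
truth[10:20, 10:20] = 1.0                                  # idealized droplet image
frames = truth + rng.normal(0.0, 0.1, size=(25, 32, 32))   # 25 noisy exposures
avg = frames.mean(axis=0)                                  # frame-averaged image
single_err = (frames[0] - truth).std()                     # ~0.1 (one frame)
avg_err = (avg - truth).std()                              # ~0.1 / sqrt(25)
```

    Averaging 25 frames cuts the per-pixel noise by about a factor of five, consistent with the reported five-fold improvement in force uncertainty.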

  4. In vivo and ex vivo imaging with ultrahigh resolution full-field OCT

    NASA Astrophysics Data System (ADS)

    Grieve, Kate; Moneron, Gael; Schwartz, Wilfrid; Boccara, Albert C.; Dubois, Arnaud

    2005-08-01

    Imaging of in vivo and ex vivo biological samples using full-field optical coherence tomography is demonstrated. Three variations on the original full-field optical coherence tomography instrument are presented, and evaluated in terms of performance. The instruments are based on the Linnik interferometer illuminated by a white light source. Images in the en face orientation are obtained in real-time without scanning by using a two-dimensional parallel detector array. An isotropic resolution capability better than 1 μm is achieved thanks to the use of a broad spectrum source and high numerical aperture microscope objectives. Detection sensitivity up to 90 dB is demonstrated. Image acquisition times as short as 10 μs per en face image are possible. A variety of in vivo and ex vivo imaging applications is explored, particularly in the fields of embryology, ophthalmology and botany.

  5. Improved image reconstruction of low-resolution multichannel phase contrast angiography

    PubMed Central

    P. Krishnan, Akshara; Joy, Ajin; Paul, Joseph Suresh

    2016-01-01

    In low-resolution phase contrast magnetic resonance angiography, the maximum intensity projected channel images will be blurred with consequent loss of vascular details. The channel images are enhanced using a stabilized deblurring filter, applied to each channel prior to combining the individual channel images. The stabilized deblurring is obtained by the addition of a nonlocal regularization term to the reverse heat equation, referred to as nonlocally stabilized reverse diffusion filter. Unlike reverse diffusion filter, which is highly unstable and blows up noise, nonlocal stabilization enhances intensity projected parallel images uniformly. Application to multichannel vessel enhancement is illustrated using both volunteer data and simulated multichannel angiograms. Robustness of the filter applied to volunteer datasets is shown using statistically validated improvement in flow quantification. Improved performance in terms of preserving vascular structures and phased array reconstruction in both simulated and real data is demonstrated using structureness measure and contrast ratio. PMID:26835501
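
    The core idea, a reverse (negative-time) heat equation for deblurring stabilized by a regularization term, can be caricatured in one dimension. This toy version relaxes toward a local average rather than using the paper's nonlocal regularizer, so it only illustrates why a damping term tames the instability:

```python
import numpy as np

def stabilized_reverse_diffusion(u, n_steps=5, dt=0.1, lam=0.5):
    """Toy 1-D reverse-diffusion (sharpening) with a stabilizing
    relaxation toward a local average; periodic boundaries via roll."""
    for _ in range(n_steps):
        lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)            # discrete Laplacian
        smooth = (np.roll(u, 1) + u + np.roll(u, -1)) / 3.0     # local average
        u = u - dt * lap + lam * dt * (smooth - u)              # reverse heat + damping
    return u
```

    Run on a blurred step edge, the filter steepens the transition, which is the deblurring effect exploited for the projected channel images.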

  6. Misalignments calibration in small-animal PET scanners based on rotating planar detectors and parallel-beam geometry.

    PubMed

    Abella, M; Vicente, E; Rodríguez-Ruano, A; España, S; Lage, E; Desco, M; Udias, J M; Vaquero, J J

    2012-11-21

    Technological advances have improved the assembly process of PET detectors, resulting in quite small mechanical tolerances. However, in high-spatial-resolution systems, even submillimetric misalignments of the detectors may lead to a notable degradation of image resolution and artifacts. Therefore, the exact characterization of misalignments is critical for optimum reconstruction quality in such systems. This subject has been widely studied for CT and SPECT scanners based on cone beam geometry, but this is not the case for PET tomographs based on rotating planar detectors. The purpose of this work is to analyze misalignment effects in these systems and to propose a robust and easy-to-implement protocol for geometric characterization. The result of the proposed calibration method, which requires no more than a simple calibration phantom, can then be used to generate a correct 3D-sinogram from the acquired list mode data.

  7. Preparing for Exascale: Towards convection-permitting, global atmospheric simulations with the Model for Prediction Across Scales (MPAS)

    NASA Astrophysics Data System (ADS)

    Heinzeller, Dominikus; Duda, Michael G.; Kunstmann, Harald

    2017-04-01

    With strong financial and political support from national and international initiatives, exascale computing is projected for the end of this decade. Energy requirements and physical limitations imply the use of accelerators and scaling out to orders of magnitude more cores than today to achieve this milestone. In order to fully exploit the capabilities of these exascale computing systems, existing applications need to undergo significant development. The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components consisting of an atmospheric core, an ocean core, a land-ice core and a sea-ice core. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation, which address shortcomings of global models on regular grids and of limited area models nested in a forcing data set with respect to parallel scalability, numerical accuracy and physical consistency. Here, we present work towards the application of the atmospheric core (MPAS-A) on current and future high performance computing systems for problems at extreme scale. In particular, we address the issue of massively parallel I/O by extending the model to support the highly scalable SIONlib library. Using global uniform meshes with a convection-permitting resolution of 2-3 km, we demonstrate the ability of MPAS-A to scale out to half a million cores while maintaining a high parallel efficiency. We also demonstrate the potential benefit of a hybrid parallelisation of the code (MPI/OpenMP) on the latest generation of Intel's Many Integrated Core Architecture, the Intel Xeon Phi Knights Landing.
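
    The half-million-core result above is a strong-scaling claim, which is conventionally quantified as speedup relative to ideal linear scaling. The sketch below shows that standard bookkeeping; the timings and core counts are made up for illustration, not numbers from the paper.

```python
def strong_scaling(t_base, n_base, t_n, n):
    """Speedup and parallel efficiency of a run on n cores relative to a
    baseline run on n_base cores (strong scaling: fixed problem size)."""
    speedup = t_base / t_n   # how much faster the large run is
    ideal = n / n_base       # speedup under perfect linear scaling
    return speedup, speedup / ideal

# hypothetical wall-clock times for the same fixed-size problem
speedup, efficiency = strong_scaling(t_base=3600.0, n_base=1024, t_n=9.0, n=524288)
```

"High parallel efficiency" in the abstract means this ratio stays close to 1 even as `n` grows by orders of magnitude, which is where I/O bottlenecks (hence SIONlib) typically dominate.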

  8. Photonic crystal biosensor microplates with integrated fluid networks for high throughput applications in drug discovery

    NASA Astrophysics Data System (ADS)

    Choi, Charles J.; Chan, Leo L.; Pineda, Maria F.; Cunningham, Brian T.

    2007-09-01

    Assays used in pharmaceutical research require a system that can not only detect biochemical interactions with high sensitivity, but that can also perform many measurements in parallel while consuming low volumes of reagents. While nearly all label-free biosensor transducers to date have been interfaced with a flow channel, the liquid handling system is typically aligned and bonded to the transducer for supplying analytes to only a few sensors in parallel. In this presentation, we describe a fabrication approach for photonic crystal biosensors that utilizes nanoreplica molding to produce a network of sensors that are automatically self-aligned with a microfluidic network in a single process step. The sensor/fluid network is inexpensively produced on large surface areas upon flexible plastic substrates, allowing the device to be incorporated into standard format 96-well microplates. A simple flow scheme using hydrostatic pressure applied through a single control point enables immobilization of capture ligands upon a large number of sensors with 220 nL of reagent, and subsequent exposure of the sensors to test samples. A high resolution imaging detection instrument is capable of monitoring the binding within parallel channels at rates compatible with determining kinetic binding constants between the immobilized ligands and the analytes. The first implementation of this system is capable of monitoring the kinetic interactions of 11 flow channels at once, and a total of 88 channels within an integrated biosensor microplate in rapid succession. The system was initially tested to characterize the interaction between sets of proteins with known binding behavior.
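
    The kinetic binding constants mentioned above are conventionally extracted by fitting sensor traces to a 1:1 Langmuir interaction model. Below is a minimal sketch of the association-phase formula for such a model; the rate constants, concentration, and saturation response are hypothetical values, not measurements from this work.

```python
import math

def association_response(t, conc, ka, kd, r_max):
    """1:1 Langmuir binding, association phase:
    R(t) = R_eq * (1 - exp(-(ka*C + kd) * t)),
    with R_eq = R_max * ka*C / (ka*C + kd)."""
    k_obs = ka * conc + kd             # observed rate constant
    r_eq = r_max * ka * conc / k_obs   # equilibrium response at this concentration
    return r_eq * (1.0 - math.exp(-k_obs * t))

# hypothetical ligand-analyte pair: ka in 1/(M*s), kd in 1/s, t in s
r60 = association_response(t=60.0, conc=1e-7, ka=1e5, kd=1e-3, r_max=100.0)
```

Fitting `k_obs` at several analyte concentrations and regressing against concentration is the usual way instruments like the one described recover `ka` and `kd` from parallel channel reads.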

  9. Un-collimated single-photon imaging system for high-sensitivity small animal and plant imaging.

    PubMed

    Walker, Katherine L; Judenhofer, Martin S; Cherry, Simon R; Mitchell, Gregory S

    2015-01-07

    In preclinical single-photon emission computed tomography (SPECT) system development the primary objective has been to improve spatial resolution by using novel parallel-hole or multi-pinhole collimator geometries. However, such high-resolution systems have relatively poor sensitivity (typically 0.01-0.1%). In contrast, a system that does not use collimators can achieve very high-sensitivity. Here we present a high-sensitivity un-collimated detector single-photon imaging (UCD-SPI) system for the imaging of both small animals and plants. This scanner consists of two thin, closely spaced, pixelated scintillator detectors that use NaI(Tl), CsI(Na), or BGO. The performance of the system has been characterized by measuring sensitivity, spatial resolution, linearity, detection limits, and uniformity. With (99m)Tc (140 keV) at the center of the field of view (20 mm scintillator separation), the sensitivity was measured to be 31.8% using the NaI(Tl) detectors and 40.2% with CsI(Na). The best spatial resolution (FWHM when the image formed as the geometric mean of the two detector heads, 20 mm scintillator separation) was 19.0 mm for NaI(Tl) and 11.9 mm for CsI(Na) at 140 keV, and 19.5 mm for BGO at 1116 keV, which is somewhat degraded compared to the cm-scale resolution obtained with only one detector head and a close source. The quantitative accuracy of the system's linearity is better than 2% with detection down to activity levels of 100 nCi. Two in vivo animal studies (a renal scan using (99m)Tc MAG-3 and a thyroid scan with (123)I) and one plant study (a (99m)TcO4(-) xylem transport study) highlight the unique capabilities of this UCD-SPI system. From the renal scan, we observe approximately a one thousand-fold increase in sensitivity compared to the Siemens Inveon SPECT/CT scanner. 
UCD-SPI is useful for many imaging tasks that do not require excellent spatial resolution, such as high-throughput screening applications, simple radiotracer uptake studies in tumor xenografts, dynamic studies where very good temporal resolution is critical, or in planta imaging of radioisotopes at low concentrations.

  10. Un-collimated single-photon imaging system for high-sensitivity small animal and plant imaging

    DOE PAGES

    Walker, Katherine L.; Judenhofer, Martin S.; Cherry, Simon R.; ...

    2014-12-12

    In preclinical single-photon emission computed tomography (SPECT) system development the primary objective has been to improve spatial resolution by using novel parallel-hole or multi-pinhole collimator geometries. However, such high-resolution systems have relatively poor sensitivity (typically 0.01% to 0.1%). In contrast, a system that does not use collimators can achieve very high-sensitivity. Here we present a high-sensitivity un-collimated detector single-photon imaging (UCD-SPI) system for the imaging of both small animals and plants. This scanner consists of two thin, closely spaced, pixelated scintillator detectors that use NaI(Tl), CsI(Na), or BGO. The performance of the system has been characterized by measuring sensitivity, spatial resolution, linearity, detection limits, and uniformity. With 99mTc (140 keV) at the center of the field of view (20 mm scintillator separation), the sensitivity was measured to be 31.8% using the NaI(Tl) detectors and 40.2% with CsI(Na). The best spatial resolution (FWHM when the image formed as the geometric mean of the two detector heads, 20 mm scintillator separation) was 19.0 mm for NaI(Tl) and 11.9 mm for CsI(Na) at 140 keV, and 19.5 mm for BGO at 1116 keV, which is somewhat degraded compared to the cm-scale resolution obtained with only one detector head and a close source. The quantitative accuracy of the system’s linearity is better than 2% with detection down to activity levels of 100 nCi. Two in vivo animal studies (a renal scan using 99mTc MAG-3 and a thyroid scan with 123I) and one plant study (a 99mTcO4- xylem transport study) highlight the unique capabilities of this UCD-SPI system. From the renal scan, we observe approximately a one thousand-fold increase in sensitivity compared to the Siemens Inveon SPECT/CT scanner.
In conclusion, UCD-SPI is useful for many imaging tasks that do not require excellent spatial resolution, such as high-throughput screening applications, simple radiotracer uptake studies in tumor xenografts, dynamic studies where very good temporal resolution is critical, or in planta imaging of radioisotopes at low concentrations.

  11. Un-collimated single-photon imaging system for high-sensitivity small animal and plant imaging

    NASA Astrophysics Data System (ADS)

    Walker, Katherine L.; Judenhofer, Martin S.; Cherry, Simon R.; Mitchell, Gregory S.

    2015-01-01

    In preclinical single-photon emission computed tomography (SPECT) system development the primary objective has been to improve spatial resolution by using novel parallel-hole or multi-pinhole collimator geometries. However, such high-resolution systems have relatively poor sensitivity (typically 0.01-0.1%). In contrast, a system that does not use collimators can achieve very high-sensitivity. Here we present a high-sensitivity un-collimated detector single-photon imaging (UCD-SPI) system for the imaging of both small animals and plants. This scanner consists of two thin, closely spaced, pixelated scintillator detectors that use NaI(Tl), CsI(Na), or BGO. The performance of the system has been characterized by measuring sensitivity, spatial resolution, linearity, detection limits, and uniformity. With 99mTc (140 keV) at the center of the field of view (20 mm scintillator separation), the sensitivity was measured to be 31.8% using the NaI(Tl) detectors and 40.2% with CsI(Na). The best spatial resolution (FWHM when the image formed as the geometric mean of the two detector heads, 20 mm scintillator separation) was 19.0 mm for NaI(Tl) and 11.9 mm for CsI(Na) at 140 keV, and 19.5 mm for BGO at 1116 keV, which is somewhat degraded compared to the cm-scale resolution obtained with only one detector head and a close source. The quantitative accuracy of the system’s linearity is better than 2% with detection down to activity levels of 100 nCi. Two in vivo animal studies (a renal scan using 99mTc MAG-3 and a thyroid scan with 123I) and one plant study (a 99mTcO4- xylem transport study) highlight the unique capabilities of this UCD-SPI system. From the renal scan, we observe approximately a one thousand-fold increase in sensitivity compared to the Siemens Inveon SPECT/CT scanner. 
UCD-SPI is useful for many imaging tasks that do not require excellent spatial resolution, such as high-throughput screening applications, simple radiotracer uptake studies in tumor xenografts, dynamic studies where very good temporal resolution is critical, or in planta imaging of radioisotopes at low concentrations.

  12. Resolution Enhancement in PET Reconstruction Using Collimation

    NASA Astrophysics Data System (ADS)

    Metzler, Scott D.; Matej, Samuel; Karp, Joel S.

    2013-02-01

    Collimation can improve both the spatial resolution and sampling properties compared to the same scanner without collimation. Spatial resolution improves because each original crystal can be conceptually split into two (i.e., doubling the number of in-plane crystals) by masking half the crystal with a high-density attenuator (e.g., tungsten); this reduces coincidence efficiency by 4× since both crystals comprising the line of response (LOR) are masked, but yields 4× as many resolution-enhanced (RE) LORs. All the new RE LORs can be measured by scanning with the collimator in different configurations. In this simulation study, the collimator was assumed to be ideal, neither allowing gamma penetration nor truncating the field of view. Comparisons were made in 2D between an uncollimated small-animal system with 2-mm crystals that were assumed to be perfectly absorbing and the same system with collimation that narrowed the effective crystal size to 1 mm. Digital phantoms included a hot-rod and a single-hot-spot, both in a uniform background with activity ratio of 4:1. In addition to the collimated and uncollimated configurations, angular and spatial wobbling acquisitions of the 2-mm case were also simulated. Similarly, configurations with different combinations of the RE LORs were considered, including (i) all LORs; (ii) only those parallel to the 2-mm LORs; and (iii) only cross pairs that are not parallel to the 2-mm LORs. Lastly, quantitative studies were conducted for collimated and uncollimated data using contrast recovery coefficient and mean-squared error (MSE) as metrics. The reconstructions show that for most noise levels there is a substantial improvement in image quality (i.e., visual quality, resolution, and a reduction in artifacts) by using collimation, even when there are 4× fewer counts or, in some cases, when comparing with the noiseless uncollimated reconstruction.
By comparing various configurations of sampling, the results show that it is the matched combination of both the improved spatial resolution of each LOR and the increase in the number of LORs that yields improved reconstructions. Further, the quantitative studies show that for low-count scans, the collimated data give better MSE for small lesions and the uncollimated data give better MSE for larger lesions; for high-count studies, the collimated data yield better quantitative values for the entire range of lesion sizes that were evaluated.
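
    The crystal-splitting bookkeeping in the abstract can be checked with a few lines: splitting each crystal in two turns every original crystal pair into 2 × 2 = 4 resolution-enhanced LORs, at 1/4 the per-LOR coincidence efficiency. The detector count below is an arbitrary example, not the simulated system's geometry.

```python
def re_lor_bookkeeping(n_crystals):
    """Masking half of each crystal conceptually splits it in two, so each
    original line of response (an unordered crystal pair) becomes
    2 x 2 = 4 resolution-enhanced LORs, while per-LOR coincidence
    efficiency drops by 4x (half the area at each of the two endpoints)."""
    original_lors = n_crystals * (n_crystals - 1) // 2  # unordered pairs
    re_lors = 4 * original_lors                         # 2 sub-crystals per endpoint
    efficiency_factor = 0.5 * 0.5                       # half-masked at both ends
    return original_lors, re_lors, efficiency_factor

orig_lors, re_lors, eff = re_lor_bookkeeping(200)
```

The 4× efficiency loss and 4× LOR gain cancel in total counts, which is why the paper frames the benefit as finer sampling per count rather than more counts.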

  13. Seismic responses and controlling factors of Miocene deepwater gravity-flow deposits in Block A, Lower Congo Basin

    NASA Astrophysics Data System (ADS)

    Wang, Linlin; Wang, Zhenqi; Yu, Shui; Ngia, Ngong Roger

    2016-08-01

    The Miocene deepwater gravity-flow sedimentary system in Block A of the southwestern part of the Lower Congo Basin was identified and interpreted using high-resolution 3-D seismic, drilling and logging data to reveal its development characteristics and main controlling factors. Five types of deepwater gravity-flow sedimentary units have been identified in the Miocene section of Block A, including mass transport, deepwater channel, levee, abandoned channel and sedimentary lobe deposits. Each type of sedimentary unit has distinct external features, internal structures and lateral characteristics in seismic profiles. Mass transport deposits (MTDs) in particular correspond to chaotic low-amplitude reflections with abrupt reflection boundaries on both sides. The cross section of deepwater channel deposits in the seismic profile is U- or V-shaped. The channel deposits change in ascending order from low-amplitude, poor-continuity, chaotic filling reflections at the bottom, to high-amplitude, moderate to poor continuity, chaotic or sub-parallel reflections in the middle section, and to moderate to weak amplitude, good continuity, parallel or sub-parallel reflections in the upper section. The sedimentary lobes are laterally lobate, corresponding to high-amplitude, good-continuity, moundy reflection signatures in the seismic profile. Due to sediment flux, faults, and inherited terrain, few mass transport deposits occur in the northeastern part of the study area. The front of MTDs is mainly composed of channel-levee complex deposits, while abandoned-channel and lobe deposits usually develop in high-curvature channel sections and at channel terminals, respectively. The distribution of deepwater channel, levee, abandoned channel and sedimentary lobe deposits is predominantly controlled by relative sea level fluctuations and to a lesser extent by tectonism and inherited terrain.

  14. Laser Ray Tracing in a Parallel Arbitrary Lagrangian-Eulerian Adaptive Mesh Refinement Hydrocode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masters, N D; Kaiser, T B; Anderson, R W

    2009-09-28

    ALE-AMR is a new hydrocode that we are developing as a predictive modeling tool for debris and shrapnel formation in high-energy laser experiments. In this paper we present our approach to implementing laser ray-tracing in ALE-AMR. We present the equations of laser ray tracing, our approach to efficient traversal of the adaptive mesh hierarchy in which we propagate computational rays through a virtual composite mesh consisting of the finest resolution representation of the modeled space, and anticipate simulations that will be compared to experiments for code validation.

  15. Plexus structure imaging with thin slab MR neurography: rotating frames, fly-throughs, and composite projections

    NASA Astrophysics Data System (ADS)

    Raphael, David T.; McIntee, Diane; Tsuruda, Jay S.; Colletti, Patrick; Tatevossian, Raymond; Frazier, James

    2006-03-01

    We explored multiple image processing approaches by which to display the segmented adult brachial plexus in a three-dimensional manner. Magnetic resonance neurography (MRN) 1.5-Tesla scans with STIR sequences, which preferentially highlight nerves, were performed in adult volunteers to generate high-resolution raw images. Using multiple software programs, the raw MRN images were then manipulated so as to achieve segmentation of plexus neurovascular structures, which were incorporated into three different visualization schemes: rotating upper thoracic girdle skeletal frames, dynamic fly-throughs parallel to the clavicle, and thin slab volume-rendered composite projections.

  16. Fan-beam scanning laser optical computed tomography for large volume dosimetry

    NASA Astrophysics Data System (ADS)

    Dekker, K. H.; Battista, J. J.; Jordan, K. J.

    2017-05-01

    A prototype scanning-laser fan beam optical CT scanner is reported which is capable of high resolution, large volume dosimetry with reasonable scan time. An acylindrical, asymmetric aquarium design is presented which serves to 1) generate parallel-beam scan geometry, 2) focus light towards a small acceptance angle detector, and 3) avoid interference fringe-related artifacts. Preliminary experiments with uniform solution phantoms (11 and 15 cm diameter) and finger phantoms (13.5 mm diameter FEP tubing) demonstrate that the design allows accurate optical CT imaging, with optical CT measurements agreeing within 3% of independent Beer-Lambert law calculations.
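
    The "within 3% of independent Beer-Lambert law calculations" check above amounts to comparing a reconstructed attenuation coefficient against I = I0·exp(-μL). The sketch below shows that comparison with illustrative numbers; the phantom attenuation value and the "measured" coefficient are assumptions, not the paper's data.

```python
import math

def transmitted_intensity(i0, mu, path_cm):
    """Beer-Lambert law: I = I0 * exp(-mu * L)."""
    return i0 * math.exp(-mu * path_cm)

def relative_disagreement(measured_mu, reference_mu):
    """Relative difference between a reconstructed attenuation
    coefficient and the Beer-Lambert reference value."""
    return abs(measured_mu - reference_mu) / reference_mu

# illustrative numbers: an 11 cm uniform solution phantom with mu = 0.05 / cm
i_t = transmitted_intensity(i0=1.0, mu=0.05, path_cm=11.0)
rel = relative_disagreement(measured_mu=0.0512, reference_mu=0.05)
```

For a uniform phantom the reference μ can be obtained directly from two intensity readings, which is what makes it a useful independent check on the CT reconstruction.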

  17. GPU-BASED MONTE CARLO DUST RADIATIVE TRANSFER SCHEME APPLIED TO ACTIVE GALACTIC NUCLEI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heymann, Frank; Siebenmorgen, Ralf, E-mail: fheymann@pa.uky.edu

    2012-05-20

    A three-dimensional parallel Monte Carlo (MC) dust radiative transfer code is presented. To overcome the huge computing-time requirements of MC treatments, the computational power of vectorized hardware is used, utilizing either multi-core computer power or graphics processing units. The approach is a self-consistent way to solve the radiative transfer equation in arbitrary dust configurations. The code calculates the equilibrium temperatures of two populations of large grains and stochastically heated polycyclic aromatic hydrocarbons. Anisotropic scattering is treated by applying the Henyey-Greenstein phase function. The spectral energy distribution (SED) of the object is derived at low spatial resolution by a photon counting procedure and at high spatial resolution by a vectorized ray tracer. The latter allows computation of high signal-to-noise images of the objects at any frequency and arbitrary viewing angles. We test the robustness of our approach against other radiative transfer codes. The SED and dust temperatures of one- and two-dimensional benchmarks are reproduced at high precision. The parallelization capability of various MC algorithms is analyzed and included in our treatment. We utilize the Lucy algorithm for the optically thin case where the Poisson noise is high, the iteration-free Bjorkman and Wood method to reduce the calculation time, and the Fleck and Canfield diffusion approximation for extremely optically thick cells. The code is applied to model the appearance of active galactic nuclei (AGNs) at optical and infrared wavelengths. The AGN torus is clumpy and includes fluffy composite grains of various sizes made up of silicates and carbon. The dependence of the SED on the number of clumps in the torus and the viewing angle is studied. The appearance of the 10 μm silicate features in absorption or emission is discussed.
The SED of the radio-loud quasar 3C 249.1 is fit by the AGN model and a cirrus component to account for the far-infrared emission.
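
    The Henyey-Greenstein phase function used for anisotropic scattering has a closed-form inverse CDF, which is how MC transfer codes typically draw scattering angles. A minimal sketch of that standard sampling formula (not code from this work) follows; the asymmetry parameter g below is arbitrary.

```python
import random

def sample_hg_costheta(g, rng=random):
    """Draw cos(theta) from the Henyey-Greenstein phase function with
    asymmetry parameter g, using the standard inverse-CDF formula."""
    xi = rng.random()
    if abs(g) < 1e-6:                 # isotropic limit
        return 2.0 * xi - 1.0
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)

random.seed(12345)
g = 0.6                               # forward-scattering dust, illustrative value
samples = [sample_hg_costheta(g) for _ in range(200000)]
mean_cos = sum(samples) / len(samples)  # converges to g for HG scattering
```

The sample mean of cos(theta) converging to g is a quick sanity check, since g is by definition the mean scattering cosine of the HG distribution.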

  18. "What Should I Be When I Grow Up?" Helping Gifted Children Set Lifelong Goals

    ERIC Educational Resources Information Center

    Lindbom-Cho, Desiree R.

    2013-01-01

    The new year brings about one's desire to change and to improve one's self. These emotions quickly fade and turn into lofty resolutions that are not fulfilled. For parents of gifted children, many parallels can be made between making New Year's resolutions and setting more long-term goals related to their education and/or career.…

  19. Laser tweezer actuated microphotonic array devices for high resolution imaging and analysis in chip-based biosystems

    NASA Astrophysics Data System (ADS)

    Birkbeck, Aaron L.

    A new technology is developed that functionally integrates arrays of lasers and micro-optics into microfluidic systems for the purpose of imaging, analyzing, and manipulating objects and biological cells. In general, the devices and technologies emerging from this area either lack functionality through the reliance on mechanical systems or provide a serial-based, time-consuming approach. Compared to the current state of the art, our all-optical design methodology has several distinguishing features, such as parallelism, high efficiency, low power, auto-alignment, and high yield fabrication methods, which all contribute to minimizing the cost of the integration process. The potential use of vertical cavity surface emitting lasers (VCSELs) for the creation of two-dimensional arrays of laser optical tweezers that perform independently controlled, parallel capture, and transport of large numbers of individual objects and biological cells is investigated. One of the primary biological applications for which VCSEL array sourced laser optical tweezers are considered is the formation of engineered tissues through the manipulation and spatial arrangement of different types of cells in a co-culture. Creating devices that combine laser optical tweezers with select micro-optical components permits optical imaging and analysis functions to take place inside the microfluidic channel. One such device is a micro-optical spatial filter whose motion and alignment is controlled using a laser optical tweezer. Unlike conventional spatial filter systems, our device utilizes a refractive optical element that is directly incorporated onto the lithographically patterned spatial filter. This allows the micro-optical spatial filter to automatically align itself in three-dimensions to the focal point of the microscope objective, where it then filters out the higher frequency additive noise components present in the laser beam. 
As a means of performing high resolution imaging in the microfluidic channel, we developed a novel technique that integrates the capacity of a laser tweezer to optically trap and manipulate objects in three-dimensions with the resolution-enhanced imaging capabilities of a solid immersion lens (SIL). In our design, the SIL is a free-floating device whose imaging beam, motion control and alignment is provided by a laser optical tweezer, which allows the microfluidic SIL to image in areas that are inaccessible to traditional solid immersion microscopes.

  20. A massively parallel adaptive scheme for melt migration in geodynamics computations

    NASA Astrophysics Data System (ADS)

    Dannberg, Juliane; Heister, Timo; Grove, Ryan

    2016-04-01

    Melt generation and migration are important processes for the evolution of the Earth's interior and impact the global convection of the mantle. While they have been the subject of numerous investigations, the typical time and length-scales of melt transport are vastly different from global mantle convection, which determines where melt is generated. This makes it difficult to study mantle convection and melt migration in a unified framework. In addition, modelling magma dynamics poses the challenge of highly non-linear and spatially variable material properties, in particular the viscosity. We describe our extension of the community mantle convection code ASPECT that adds equations describing the behaviour of silicate melt percolating through and interacting with a viscously deforming host rock. We use the original compressible formulation of the McKenzie equations, augmented by an equation for the conservation of energy. This approach includes both melt migration and melt generation with the accompanying latent heat effects, and it incorporates the individual compressibilities of the solid and the fluid phase. For this, we derive an accurate and stable Finite Element scheme that can be combined with adaptive mesh refinement. This is particularly advantageous for this type of problem, as the resolution can be increased in mesh cells where melt is present and viscosity gradients are high, whereas a lower resolution is sufficient in regions without melt. Together with a high-performance, massively parallel implementation, this allows for high resolution, 3d, compressible, global mantle convection simulations coupled with melt migration. Furthermore, scalable iterative linear solvers are required to solve the large linear systems arising from the discretized system. 
Finally, we present benchmarks and scaling tests of our solver up to tens of thousands of cores, show the effectiveness of adaptive mesh refinement when applied to melt migration and compare the compressible and incompressible formulation. We then apply our software to large-scale 3d simulations of melting and melt transport in mantle plumes interacting with the lithosphere. Our model of magma dynamics provides a framework for modelling processes on different scales and investigating links between processes occurring in the deep mantle and melt generation and migration. The presented implementation is available online under an Open Source license together with an extensive documentation.
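
    The adaptive refinement strategy described above (refine where melt is present or viscosity varies steeply, coarsen elsewhere) can be sketched as a per-cell indicator. The 1-D cell layout, thresholds, and log-viscosity jump criterion below are illustrative assumptions, not ASPECT's actual refinement plugin.

```python
import math

def flag_cells_for_refinement(melt_fraction, viscosity,
                              melt_threshold=1e-3, grad_threshold=1.0):
    """Flag a 1-D row of cells for refinement when the cell contains melt
    or the log10-viscosity jump to a neighbor exceeds grad_threshold."""
    log_eta = [math.log10(v) for v in viscosity]
    flags = []
    for i, phi in enumerate(melt_fraction):
        jump = 0.0
        if i > 0:
            jump = max(jump, abs(log_eta[i] - log_eta[i - 1]))
        if i + 1 < len(log_eta):
            jump = max(jump, abs(log_eta[i + 1] - log_eta[i]))
        flags.append(phi > melt_threshold or jump > grad_threshold)
    return flags

# a melt pocket with a three-orders-of-magnitude viscosity drop:
# cells holding melt and their sharp-gradient neighbors get refined
flags = flag_cells_for_refinement(
    melt_fraction=[0.0, 0.0, 0.02, 0.05, 0.0],
    viscosity=[1e21, 1e21, 1e18, 1e18, 1e21],
)
```

Working in log10 of viscosity is the natural choice here because, as the abstract notes, the material properties vary by many orders of magnitude.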
