A 2D MTF approach to evaluate and guide dynamic imaging developments.
Chao, Tzu-Cheng; Chung, Hsiao-Wen; Hoge, W Scott; Madore, Bruno
2010-02-01
As the number and complexity of partially sampled dynamic imaging methods continue to increase, reliable strategies to evaluate performance may prove most useful. In the present work, an analytical framework to evaluate given reconstruction methods is presented. A perturbation algorithm allows the proposed evaluation scheme to perform robustly without requiring knowledge about the inner workings of the method being evaluated. A main output of the evaluation process consists of a two-dimensional modulation transfer function, an easy-to-interpret visual rendering of a method's ability to capture all combinations of spatial and temporal frequencies. Approaches to evaluate noise properties and artifact content at all spatial and temporal frequencies are also proposed. One fully sampled phantom and three fully sampled cardiac cine datasets were subsampled (R = 4 and 8) and reconstructed with the different methods tested here. A hybrid method, which combines the main advantageous features observed in our assessments, was proposed and tested in a cardiac cine application, with acceleration factors of 3.5 and 6.3 (skip factors of 4 and 8, respectively). This approach combines features from methods such as k-t sensitivity encoding, unaliasing by Fourier encoding the overlaps in the temporal dimension-sensitivity encoding, generalized autocalibrating partially parallel acquisition, sensitivity profiles from an array of coils for encoding and reconstruction in parallel, self, hybrid referencing with unaliasing by Fourier encoding the overlaps in the temporal dimension and generalized autocalibrating partially parallel acquisition, and generalized autocalibrating partially parallel acquisition-enhanced sensitivity maps for sensitivity encoding reconstructions.
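The perturbation idea behind the 2D MTF can be sketched in a few lines of numpy: feed the (black-box) reconstruction one pure spatio-temporal frequency at a time and record how much of it survives. The stand-in reconstruction below is a 3-frame sliding-window average (view sharing), chosen only as an illustrative black box; it is not the paper's evaluation framework, and all sizes are arbitrary.

```python
import numpy as np

def recon(frames):
    """Stand-in reconstruction: a 3-frame temporal sliding-window average
    (view sharing), used only to give the probe something to measure."""
    out = np.empty_like(frames)
    for t in range(len(frames)):
        lo, hi = max(0, t - 1), min(len(frames), t + 2)
        out[t] = frames[lo:hi].mean(axis=0)
    return out

def mtf_2d(recon, n_t=32, n_x=32):
    """Perturbation probe: inject one spatio-temporal frequency at a time
    and record the amplitude that survives reconstruction."""
    mtf = np.zeros((n_t, n_x))
    base = np.zeros((n_t, n_x), complex)   # operating point (zero here)
    t = np.arange(n_t)[:, None]
    x = np.arange(n_x)[None, :]
    for ft in range(n_t):
        for fx in range(n_x):
            probe = np.exp(2j * np.pi * (ft * t / n_t + fx * x / n_x))
            out = recon(base + probe)
            # project the output back onto the probe frequency
            mtf[ft, fx] = np.abs((out * probe.conj()).mean())
    return mtf

mtf = mtf_2d(recon, n_t=16, n_x=8)
```

As expected for temporal averaging, the map is near 1 at DC and attenuated at the temporal Nyquist frequency, which is exactly the kind of behavior the 2D MTF is designed to expose.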
Parallel Reconstruction Using Null Operations (PRUNO)
Zhang, Jian; Liu, Chunlei; Moseley, Michael E.
2011-01-01
A novel iterative k-space data-driven technique, namely Parallel Reconstruction Using Null Operations (PRUNO), is presented for parallel imaging reconstruction. In PRUNO, both data calibration and image reconstruction are formulated into linear algebra problems based on a generalized system model. An optimal data calibration strategy is demonstrated by using singular value decomposition (SVD), and an iterative conjugate-gradient approach is proposed to efficiently solve for missing k-space samples during reconstruction. With its generalized formulation and precise mathematical model, PRUNO reconstruction yields good accuracy, flexibility, and stability. Both computer simulation and in vivo studies have shown that PRUNO produces much better reconstruction quality than generalized autocalibrating partially parallel acquisition (GRAPPA), especially at high acceleration rates. With the aid of PRUNO reconstruction, highly accelerated parallel imaging can be performed with decent image quality. For example, we have done successful PRUNO reconstruction at a reduction factor of 6 (effective factor of 4.44) with 8 coils and only a few autocalibration signal (ACS) lines. PMID:21604290
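The SVD-based calibration step can be illustrated with a toy numpy sketch: build a block-Hankel calibration matrix from a fully sampled region and take the right singular vectors with near-zero singular values as nulling kernels. The 4-coil data here are synthesized as mixtures of 2 sources purely to mimic inter-coil redundancy; this shows the null-space idea only, not the full PRUNO reconstruction.

```python
import numpy as np

def calibration_matrix(acs, kw):
    """Stack every kw-by-kw multi-coil patch of the ACS region as a row."""
    nc, ny, nx = acs.shape
    rows = [acs[:, y:y + kw, x:x + kw].ravel()
            for y in range(ny - kw + 1)
            for x in range(nx - kw + 1)]
    return np.array(rows)

rng = np.random.default_rng(0)
# Toy ACS data: 4 "coils" that are linear mixtures of 2 underlying sources,
# mimicking the redundancy that gives the calibration matrix a null space
src = rng.standard_normal((2, 12, 12)) + 1j * rng.standard_normal((2, 12, 12))
mix = rng.standard_normal((4, 2))
acs = np.einsum('ck,kyx->cyx', mix, src)

A = calibration_matrix(acs, kw=5)          # shape (64, 100)
_, s, vh = np.linalg.svd(A, full_matrices=False)

# Nulling kernels: right singular vectors with (near-)zero singular values;
# PRUNO uses such kernels as linear "null operations" on k-space
null_kernels = vh[s < 1e-10 * s[0]]
```

Applying any of these kernels to consistent multi-coil k-space annihilates it, which is the constraint the iterative reconstruction enforces on the missing samples.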
Lingala, Sajan Goud; Zhu, Yinghua; Lim, Yongwan; Toutios, Asterios; Ji, Yunhua; Lo, Wei-Ching; Seiberlich, Nicole; Narayanan, Shrikanth; Nayak, Krishna S
2017-12-01
To evaluate the feasibility of through-time spiral generalized autocalibrating partial parallel acquisition (GRAPPA) for low-latency accelerated real-time MRI of speech. Through-time spiral GRAPPA (spiral GRAPPA), a fast linear reconstruction method, is applied to spiral (k-t) data acquired from an eight-channel custom upper-airway coil. Fully sampled data were retrospectively down-sampled to evaluate spiral GRAPPA at undersampling factors R = 2 to 6. Pseudo-golden-angle spiral acquisitions were used for prospective studies. Three subjects were imaged while performing a range of speech tasks that involved rapid articulator movements, including fluent speech and beat-boxing. Spiral GRAPPA was compared with view sharing and a parallel imaging and compressed sensing (PI-CS) method. Spiral GRAPPA captured spatiotemporal dynamics of vocal tract articulators at undersampling factors ≤4. Spiral GRAPPA at 18 ms/frame and 2.4 mm²/pixel outperformed view sharing in depicting rapidly moving articulators. Spiral GRAPPA and PI-CS provided equivalent temporal fidelity. Reconstruction latency per frame was 14 ms for view sharing and 116 ms for spiral GRAPPA using a single processor; spiral GRAPPA kept up with the MRI data rate of 18 ms/frame with eight processors. PI-CS required 17 minutes to reconstruct 5 seconds of dynamic data. Spiral GRAPPA enabled 4-fold accelerated real-time MRI of speech with low reconstruction latency. This approach is applicable to a wide range of speech RT-MRI experiments that benefit from real-time feedback while visualizing rapid articulator movement. Magn Reson Med 78:2275-2282, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
Simultaneous orthogonal plane imaging.
Mickevicius, Nikolai J; Paulson, Eric S
2017-11-01
Intrafraction motion can result in a smearing of planned external beam radiation therapy dose distributions, resulting in uncertainty in the dose actually deposited in tissue. The purpose of this paper is to present a pulse sequence capable of imaging a moving target at a high frame rate in two orthogonal planes simultaneously for MR-guided radiotherapy. By balancing the zeroth gradient moment on all axes, slices in two orthogonal planes may be spatially encoded simultaneously. The orthogonal slice groups may be acquired with equal or nonequal echo times. A Cartesian spoiled gradient echo simultaneous orthogonal plane imaging (SOPI) sequence was tested in phantom and in vivo. Multiplexed SOPI acquisitions were performed in which two parallel slices were imaged along two orthogonal axes simultaneously. An autocalibrating phase-constrained 2D-SENSE-GRAPPA (generalized autocalibrating partially parallel acquisition) algorithm was implemented to reconstruct the multiplexed data. SOPI images without intraslice motion artifacts were reconstructed at a maximum frame rate of 8.16 Hz. The 2D-SENSE-GRAPPA reconstruction separated the parallel slices aliased along each orthogonal axis. The high spatiotemporal resolution provided by SOPI has the potential to be beneficial for intrafraction motion management during MR-guided radiation therapy or other MRI-guided interventions. Magn Reson Med 78:1700-1710, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.
Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua
2018-03-01
To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation accelerated by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves on traditional generalized autocalibrating partially parallel acquisition (GRAPPA) methods in that its self-consistency formulation is better conditioned, making SPIRiT a strong candidate for k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and is then applied to 2D-navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with conventional coil compression, shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
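The coil-compression ingredient, shown here in its conventional SVD form rather than the paper's shot-coil variant, can be sketched as: stack the multi-channel data as a matrix, take an SVD, and keep the dominant virtual channels. The 8-channel data below are synthetic mixtures of 3 sources, and all sizes are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy multi-channel k-space: 8 channels that are noisy mixtures of 3 sources,
# mimicking the inter-coil redundancy that coil compression exploits
src = rng.standard_normal((3, 1024)) + 1j * rng.standard_normal((3, 1024))
mix = rng.standard_normal((8, 3))
noise = 0.01 * (rng.standard_normal((8, 1024)) + 1j * rng.standard_normal((8, 1024)))
data = mix @ src + noise

u, s, vt = np.linalg.svd(data, full_matrices=False)
n_virtual = 4
compressed = u[:, :n_virtual].conj().T @ data    # 4 virtual channels

# Fraction of signal energy retained by the virtual channels
energy_kept = float((s[:n_virtual] ** 2).sum() / (s ** 2).sum())
```

Because the channels are highly redundant, four virtual channels retain essentially all of the energy while halving the per-channel reconstruction cost.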
Jiang, Wenwen; Larson, Peder E Z; Lustig, Michael
2018-03-09
To correct gradient timing delays in non-Cartesian MRI while simultaneously recovering corruption-free auto-calibration data for parallel imaging, without additional calibration scans. The calibration matrix constructed from multi-channel k-space data should be inherently low-rank; this property is used to construct reconstruction kernels or sensitivity maps. Delays between the gradient hardware across different axes and the RF receive chain, which are relatively benign in Cartesian MRI (excluding EPI), lead to trajectory deviations and hence data inconsistencies for non-Cartesian trajectories. These in turn lead to higher rank and corrupted calibration information, which hampers the reconstruction. Here, a method named Simultaneous Auto-calibration and Gradient delays Estimation (SAGE) is proposed that estimates the actual k-space trajectory while simultaneously recovering the uncorrupted auto-calibration data. This is done by estimating the gradient delays that result in the lowest rank of the calibration matrix; the Gauss-Newton method is used to solve the non-linear problem. The method is validated in simulations using center-out radial, projection reconstruction, and spiral trajectories. Feasibility is demonstrated on phantom and in vivo scans with center-out radial and projection reconstruction trajectories. SAGE estimates gradient timing delays with high accuracy at signal-to-noise ratios as low as 5. The method effectively removes artifacts resulting from gradient timing delays and restores image quality in center-out radial, projection reconstruction, and spiral trajectories. The low-rank-based method introduced here simultaneously estimates gradient timing delays and provides accurate auto-calibration data for improved image quality, without any additional calibration scans. © 2018 International Society for Magnetic Resonance in Medicine.
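A 1-D toy version of the rank criterion, written in numpy: two interleaves of a smooth signal are shifted by ±δ (mimicking a gradient delay that affects opposite readout directions oppositely), and a grid search picks the candidate delay whose correction minimizes the energy beyond the Hankel rank a clean signal would have. Everything here (signal model, geometry, grid search) is a hypothetical stand-in for the actual non-Cartesian problem; SAGE itself uses Gauss-Newton rather than a grid search.

```python
import numpy as np

n = 128
t = np.arange(n)
# Smooth test signal: two cosines with integer cycles, so FFT-based
# fractional shifting of each interleave is exact
s = lambda u: np.cos(2 * np.pi * 3 * u / n) + 0.5 * np.cos(2 * np.pi * 7 * u / n)

delay = 0.4
z_even = s(t[0::2] + delay)   # interleave acquired with +delay timing error
z_odd = s(t[1::2] - delay)    # opposite-direction interleave: -delay

def frac_shift(x, tau):
    """Exact fractional delay of a periodic, band-limited sequence."""
    f = np.fft.fftfreq(len(x))
    return np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * f * tau)).real

def rank_cost(d, window=32, rank=4):
    """Energy beyond the Hankel rank (4) of a clean two-cosine signal."""
    z = np.empty(n)
    z[0::2] = frac_shift(z_even, d / 2)    # undo +d on the even interleave
    z[1::2] = frac_shift(z_odd, -d / 2)    # undo -d on the odd interleave
    H = np.array([z[i:i + window] for i in range(n - window + 1)])
    return np.linalg.svd(H, compute_uv=False)[rank:].sum()

grid = np.linspace(0.0, 1.0, 101)
d_best = grid[np.argmin([rank_cost(d) for d in grid])]
```

Only the true delay makes the two corrected interleaves consistent with a single smooth signal, so the rank-excess cost dips sharply at d = 0.4.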
Automatic Camera Orientation and Structure Recovery with Samantha
NASA Astrophysics Data System (ADS)
Gherardi, R.; Toldo, R.; Garro, V.; Fusiello, A.
2011-09-01
SAMANTHA is a software system capable of computing camera orientation and structure recovery from a sparse block of casual images without human intervention. It can process either calibrated or uncalibrated images; in the latter case an autocalibration routine is run. Pictures are organized into a hierarchical tree which has single images as leaves and partial reconstructions as internal nodes. The method proceeds bottom up until it reaches the root node, corresponding to the final result. This framework is one order of magnitude faster than sequential approaches, inherently parallel, and less sensitive to the error accumulation that causes drift. We have verified the quality of our reconstructions both qualitatively, by producing compelling point clouds, and quantitatively, by comparing them with laser scans serving as ground truth.
Nana, Roger; Hu, Xiaoping
2010-01-01
k-space-based reconstruction in parallel imaging depends on the reconstruction kernel setting, including its support. An optimal choice of the kernel depends on the calibration data, coil geometry and signal-to-noise ratio, as well as the criterion used. In this work, data consistency, imposed by the shift invariance requirement of the kernel, is introduced as a goodness measure of k-space-based reconstruction in parallel imaging and demonstrated. Data consistency error (DCE) is calculated as the sum of squared difference between the acquired signals and their estimates obtained based on the interpolation of the estimated missing data. A resemblance between DCE and the mean square error in the reconstructed image was found, demonstrating DCE's potential as a metric for comparing or choosing reconstructions. When used for selecting the kernel support for generalized autocalibrating partially parallel acquisition (GRAPPA) reconstruction and the set of frames for calibration as well as the kernel support in temporal GRAPPA reconstruction, DCE led to improved images over existing methods. Data consistency error is efficient to evaluate, robust for selecting reconstruction parameters and suitable for characterizing and optimizing k-space-based reconstruction in parallel imaging.
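A minimal 1-D numpy illustration of the DCE idea: calibrate a 2-tap interpolation kernel on an ACS block, fill in the missing samples, then re-estimate the acquired samples from the filled-in neighbors and sum the squared differences. The signal, kernel support, and R = 2 pattern are arbitrary stand-ins for a real GRAPPA setup, and the comparison against zero-filling is only meant to show that DCE ranks reconstructions the same way the true error does.

```python
import numpy as np

n = 64
t = np.arange(n)
# Smooth complex "k-space" line standing in for one coil channel
x = sum(a * np.exp(2j * np.pi * f * t / n)
        for a, f in [(1.0, 1), (0.7, 2), (0.4, 3)])

even = t[t % 2 == 0]            # acquired samples (R = 2)
odd = t[t % 2 == 1]             # missing samples
acs = np.arange(24, 40)         # fully sampled calibration block

# Calibrate a 2-tap kernel predicting a sample from its two neighbors
rows = np.array([[x[i - 1], x[i + 1]] for i in acs[1:-1]])
w, *_ = np.linalg.lstsq(rows, x[acs[1:-1]], rcond=None)

def fill_missing():
    xr = x.copy()
    for i in odd[1:-1]:
        xr[i] = w[0] * x[i - 1] + w[1] * x[i + 1]
    return xr

def dce(xr):
    """Re-estimate the *acquired* samples from reconstructed neighbors."""
    est = np.array([w[0] * xr[i - 1] + w[1] * xr[i + 1] for i in even[1:-1]])
    return float(np.sum(np.abs(est - x[even[1:-1]]) ** 2))

recon = fill_missing()
zero_fill = x.copy()
zero_fill[odd] = 0

dce_recon, dce_zero = dce(recon), dce(zero_fill)
mse_recon = float(np.sum(np.abs(recon[odd] - x[odd]) ** 2))
mse_zero = float(np.sum(np.abs(x[odd]) ** 2))
```

The calibrated reconstruction has both lower DCE and lower true error than zero-filling, mirroring the resemblance between DCE and image-domain error reported above.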
Measuring signal-to-noise ratio in partially parallel imaging MRI
Goerner, Frank L.; Clarke, Geoffrey D.
2011-01-01
Purpose: To assess five different methods of signal-to-noise ratio (SNR) measurement for partially parallel imaging (PPI) acquisitions. Methods: Measurements were performed on a spherical phantom and three volunteers using a multichannel head coil on a clinical 3T MRI system to produce echo planar, fast spin echo, gradient echo, and balanced steady state free precession image acquisitions. Two different PPI acquisitions, the generalized autocalibrating partially parallel acquisition algorithm and modified sensitivity encoding, with acceleration factors (R) of 2–4, were evaluated and compared to nonaccelerated acquisitions. Five standard SNR measurement techniques were investigated, and Bland–Altman analysis was used to determine agreement between the various SNR methods. The estimated g-factor values associated with each method of SNR calculation and PPI reconstruction method were also subjected to assessments that considered the effects on SNR due to reconstruction method, phase encoding direction, and R-value. Results: Only two SNR measurement methods produced g-factors in agreement with theoretical expectations (g ≥ 1). Bland–Altman tests demonstrated that these two methods also gave the most similar results relative to the other three measurements. R-value was the only factor of the three considered that showed a significant influence on SNR changes. Conclusions: Non-signal methods used in SNR evaluation do not produce results consistent with expectations in the investigated PPI protocols. Two of the methods studied provided the most accurate and useful results. Of these two, it is recommended that, when evaluating PPI protocols, the image subtraction method be used for SNR calculations, due to its relative accuracy and ease of implementation. PMID:21978049
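The recommended image-subtraction (difference) method is easy to sketch: acquire two nominally identical images, take the signal from their average and the noise from their difference, scaled by √2 because subtracting two independent noise fields doubles the variance. The image and noise values below are synthetic; σ is "known" only so the estimate can be checked.

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 100.0
sigma = 5.0                      # ground-truth noise std (for checking only)
shape = (64, 64)

# Two repeated acquisitions differing only in noise
img1 = true_signal + rng.normal(0.0, sigma, shape)
img2 = true_signal + rng.normal(0.0, sigma, shape)

roi = (slice(16, 48), slice(16, 48))
mean_signal = 0.5 * (img1[roi] + img2[roi]).mean()
# Difference image has twice the noise variance, hence the sqrt(2)
noise_std = (img1[roi] - img2[roi]).std(ddof=1) / np.sqrt(2)
snr = float(mean_signal / noise_std)
```

With σ = 5 and signal 100, the estimate lands near the true SNR of 20; unlike background-ROI ("non-signal") methods, this estimate is insensitive to the spatially varying noise that PPI reconstructions produce.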
Goerner, Frank L.; Duong, Timothy; Stafford, R. Jason; Clarke, Geoffrey D.
2013-01-01
Purpose: To investigate the utility of five different standard measurement methods for determining image uniformity for partially parallel imaging (PPI) acquisitions in terms of consistency across a variety of pulse sequences and reconstruction strategies. Methods: Images were produced with a phantom using a 12-channel head matrix coil in a 3T MRI system (TIM TRIO, Siemens Medical Solutions, Erlangen, Germany). Images produced using echo-planar, fast spin echo, gradient echo, and balanced steady state free precession pulse sequences were evaluated. Two different PPI reconstruction methods were investigated, the generalized autocalibrating partially parallel acquisition algorithm (GRAPPA) and modified sensitivity encoding (mSENSE), with acceleration factors (R) of 2, 3, and 4. Additionally, images were acquired with conventional two-dimensional Fourier imaging methods (R = 1). Five measurement methods of uniformity, recommended by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA), were considered: (1) an ACR method and (2) a NEMA method for calculating the peak deviation nonuniformity, (3) a modification of a NEMA method used to produce a gray scale uniformity map, (4) determination of the normalized absolute average deviation uniformity, and (5) a NEMA method that focused on 17 areas of the image to measure uniformity. Changes in uniformity as a function of reconstruction method at the same R-value were also investigated. Two-way analysis of variance (ANOVA) was used to determine whether R-value or reconstruction method had a greater influence on signal intensity uniformity measurements for partially parallel MRI. Results: Two of the methods studied had consistently negative slopes when signal intensity uniformity was plotted against R-value. The comparison of mSENSE against GRAPPA found no consistent difference between the two with regard to signal intensity uniformity.
The results of the two-way ANOVA suggest that R-value and pulse sequence type produce the largest influences on uniformity, while PPI reconstruction method had relatively little effect. Conclusions: Two of the methods of measuring signal intensity uniformity, described by the NEMA MRI standards, consistently indicated a decrease in uniformity with an increase in R-value. The other methods investigated did not demonstrate consistent results for evaluating signal uniformity in MR images obtained by partially parallel methods. However, because the spatial distribution of noise affects uniformity, it is recommended that additional uniformity quality metrics be investigated for partially parallel MR images. PMID:23927345
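The peak-deviation measure referenced above reduces to a single formula: percent integral uniformity U = 100 · (1 − (Smax − Smin)/(Smax + Smin)), computed over a large ROI. A direct transcription:

```python
import numpy as np

def peak_deviation_uniformity(roi):
    """Percent uniformity from the peak deviation of ROI signal:
    U = 100 * (1 - (Smax - Smin) / (Smax + Smin))."""
    smax, smin = float(np.max(roi)), float(np.min(roi))
    return 100.0 * (1.0 - (smax - smin) / (smax + smin))
```

As R grows and the noise becomes spatially non-stationary, Smax − Smin widens and U drops, which is the trend the two consistent methods tracked.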
3D hyperpolarized C-13 EPI with calibrationless parallel imaging
NASA Astrophysics Data System (ADS)
Gordon, Jeremy W.; Hansen, Rie B.; Shin, Peter J.; Feng, Yesu; Vigneron, Daniel B.; Larson, Peder E. Z.
2018-04-01
With the translation of metabolic MRI with hyperpolarized 13C agents into the clinic, imaging approaches will require large volumetric FOVs to support clinical applications. Parallel imaging techniques will be crucial to increasing volumetric scan coverage while minimizing RF requirements and temporal resolution. Calibrationless parallel imaging approaches are well-suited for this application because they eliminate the need to acquire coil profile maps or auto-calibration data. In this work, we explored the utility of a calibrationless parallel imaging method (SAKE) and corresponding sampling strategies to accelerate and undersample hyperpolarized 13C data using 3D blipped EPI acquisitions and multichannel receive coils, and demonstrated its application in a human study of [1-13C]pyruvate metabolism.
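The calibrationless idea rests on structured low-rank matrix completion. A 1-D, single-channel Cadzow-style sketch in numpy shows the mechanics: alternate a rank-truncated SVD of a Hankel matrix with re-insertion of the acquired samples. SAKE proper operates on multi-channel 2D k-space with a coil dimension and uses a principled rank choice; the sizes, rank, and sampling mask here are toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, window, rank = 64, 24, 2
t = np.arange(n)
# Ground truth: a rank-2 (two complex exponentials) "k-space" line
x_true = np.exp(2j * np.pi * 3 * t / n) + 0.8 * np.exp(2j * np.pi * 10 * t / n)

mask = rng.random(n) > 0.25          # ~75% of samples acquired
y = np.where(mask, x_true, 0)        # zero-filled starting point

def hankel(x):
    return np.array([x[i:i + window] for i in range(n - window + 1)])

def unhankel(H):
    """Average anti-diagonals back into a length-n sequence."""
    x = np.zeros(n, complex)
    cnt = np.zeros(n)
    for i in range(H.shape[0]):
        x[i:i + window] += H[i]
        cnt[i:i + window] += 1
    return x / cnt

x = y.copy()
for _ in range(200):
    u, s, vh = np.linalg.svd(hankel(x), full_matrices=False)
    H = (u[:, :rank] * s[:rank]) @ vh[:rank]   # enforce low rank
    x = unhankel(H)
    x[mask] = x_true[mask]                     # enforce data consistency
```

No coil maps or ACS data enter the loop: the low-rank structure alone fills in the missing samples, which is the property that makes the approach attractive for hyperpolarized acquisitions where calibration data cost signal.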
Ardekani, Siamak; Selva, Luis; Sayre, James; Sinha, Usha
2006-11-01
Single-shot echo-planar-based diffusion tensor imaging is prone to geometric and intensity distortions. Parallel imaging is a means of reducing these distortions while preserving spatial resolution. A quantitative comparison at 3 T of parallel imaging for diffusion tensor images (DTI) using k-space (generalized auto-calibrating partially parallel acquisitions; GRAPPA) and image domain (sensitivity encoding; SENSE) reconstructions at different acceleration factors, R, is reported here. Images were evaluated using 8 human subjects, with repeated scans for 2 subjects to estimate reproducibility. Mutual information (MI) was used to assess the global changes in geometric distortions. The effects of parallel imaging techniques on random noise and reconstruction artifacts were evaluated by placing 26 regions of interest and computing the standard deviation of the apparent diffusion coefficient and fractional anisotropy, along with the error of fitting the data to the diffusion model (residual error). The larger positive values of the mutual information index with increasing R confirmed the anticipated decrease in distortions. Further, the MI index of GRAPPA sequences for a given R was larger than that of the corresponding mSENSE images. The residual error was lowest in the images acquired without parallel imaging, and among the parallel reconstruction methods, the R = 2 acquisitions had the least error. The standard deviation, accuracy, and reproducibility of the apparent diffusion coefficient and fractional anisotropy in homogeneous tissue regions showed that GRAPPA acquired with R = 2 had the least systematic and random noise; significant differences from mSENSE at R = 2 were found only for the fractional anisotropy index. Evaluation of the current implementation of parallel reconstruction algorithms identified GRAPPA acquired with R = 2 as optimal for diffusion tensor imaging.
Fenchel, Michael; Nael, Kambiz; Deshpande, Vibhas S; Finn, J Paul; Kramer, Ulrich; Miller, Stephan; Ruehm, Stefan; Laub, Gerhard
2006-09-01
The aim of the present study was to assess the feasibility of renal magnetic resonance angiography at 3.0 T using a phased-array coil system with 32 coil elements. Specifically, high parallel imaging factors were used for increased spatial resolution and anatomic coverage of the whole abdomen. Signal-to-noise values and the g-factor distribution of the 32-element coil were examined in phantom studies for the magnetic resonance angiography (MRA) sequence. Eleven volunteers (6 men, median age of 30.0 years) were examined on a 3.0-T MR scanner (Magnetom Trio, Siemens Medical Solutions, Malvern, PA) using a 32-element phased-array coil (prototype from In vivo Corp.). Contrast-enhanced 3D-MRA (TR 2.95 milliseconds, TE 1.12 milliseconds, flip angle 25-30 degrees, bandwidth 650 Hz/pixel) was acquired with integrated generalized autocalibrating partially parallel acquisition (GRAPPA) in both the phase- and slice-encoding directions. Images were assessed by 2 independent observers with regard to image quality, noise, and presence of artifacts. Signal-to-noise levels of 22.2 +/- 22.0 and 57.9 +/- 49.0 were measured with (GRAPPA×6) and without parallel imaging, respectively. The mean g-factor of the 32-element coil for GRAPPA with an acceleration of 3 and 2 in the phase-encoding and slice-encoding directions, respectively, was 1.61. High image quality was found in 9 of 11 volunteers (2.6 +/- 0.8), with good overall interobserver agreement (κ = 0.87). Relatively low image quality with higher noise levels was encountered in 2 volunteers. MRA at 3.0 T using a 32-element phased-array coil is feasible in healthy volunteers. High diagnostic image quality and extended anatomic coverage could be achieved with application of high parallel imaging factors.
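The SNR figures above follow the standard parallel-imaging relation SNR_accel = SNR_full / (g · √R), so an apparent g-factor can be backed out from a pair of measured SNRs:

```python
import math

def g_factor(snr_full, snr_accel, R):
    """Apparent geometry factor from SNR_accel = SNR_full / (g * sqrt(R))."""
    return snr_full / (snr_accel * math.sqrt(R))
```

For instance, the phantom values reported above (57.9 without and 22.2 with 6-fold GRAPPA) give an apparent g of about 1.06; pixelwise g-factor maps, like the mean of 1.61 quoted for the coil, are computed differently but follow the same relation.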
Autocalibration method for non-stationary CT bias correction.
Vegas-Sánchez-Ferrero, Gonzalo; Ledesma-Carbayo, Maria J; Washko, George R; Estépar, Raúl San José
2018-02-01
Computed tomography (CT) is a widely used imaging modality for screening and diagnosis. However, the deleterious effects of radiation exposure inherent in CT imaging require the development of image reconstruction methods which can reduce exposure levels. The development of iterative reconstruction techniques is now enabling the acquisition of low-dose CT images whose quality is comparable to that of CT images acquired with much higher radiation dosages. However, the characterization and calibration of the CT signal due to changes in dosage and reconstruction approaches is crucial to provide clinically relevant data. Although CT scanners are calibrated as part of the imaging workflow, the calibration is limited to select global reference values and does not consider other inherent factors of the acquisition that depend on the subject scanned (e.g. photon starvation, partial volume effect, beam hardening) and result in a non-stationary noise response. In this work, we analyze the effect of reconstruction biases caused by non-stationary noise and propose an autocalibration methodology to compensate for it. Our contributions are: 1) the derivation of a functional relationship between observed bias and non-stationary noise, 2) a robust and accurate method to estimate the local variance, and 3) an autocalibration methodology that does not necessarily rely on a calibration phantom, attenuates the bias caused by noise, and removes the systematic bias observed in devices from different vendors. The validation of the proposed methodology was performed with a physical phantom and clinical CT scans acquired with different configurations (kernels, doses, algorithms including iterative reconstruction). The results confirmed the suitability of the proposed methods for removing the intra-device and inter-device reconstruction biases. Copyright © 2017 Elsevier B.V. All rights reserved.
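Contribution 2, estimating a spatially varying (non-stationary) noise variance, can be sketched with a plain box filter via var ≈ E[x²] − E[x]², computed with local means. The authors' estimator is more robust than this baseline; the toy noise field below simply jumps in σ halfway across the image to show that the estimate tracks non-stationarity.

```python
import numpy as np

def local_mean(img, k):
    """Separable box-filter local mean ('same' edge handling)."""
    ker = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, ker, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, ker, mode='same'), 0, out)

def local_variance(img, k=7):
    m = local_mean(img, k)
    return np.clip(local_mean(img * img, k) - m * m, 0.0, None)

rng = np.random.default_rng(0)
sigma = np.where(np.arange(256) < 128, 2.0, 6.0)   # noise std jumps mid-image
img = rng.normal(0.0, 1.0, (256, 256)) * sigma      # broadcast along columns
v = local_variance(img)
```

The left half of the variance map sits near 4 and the right half near 36, the kind of spatial map the bias-compensation step then consumes.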
Rivera, José; Carrillo, Mariano; Chacón, Mario; Herrera, Gilberto; Bojorquez, Gilberto
2007-01-01
The development of smart sensors involves the design of reconfigurable systems capable of working with different input sensors. Reconfigurable systems ideally should spend the least possible amount of time on their calibration. An autocalibration algorithm for intelligent sensors should be able to fix major problems such as offset, variation of gain, and lack of linearity as accurately as possible. This paper describes a new autocalibration methodology for nonlinear intelligent sensors based on artificial neural networks (ANN). The methodology involves analysis of several network topologies and training algorithms. The proposed method was compared against the piecewise and polynomial linearization methods. Method comparison was carried out using different numbers of calibration points and several nonlinearity levels of the input signal. This paper also shows that the proposed method turned out to have better overall accuracy than the other two methods. Besides the experimental results and analysis of the complete study, the paper describes the implementation of the ANN in a microcontroller unit (MCU). In order to illustrate the method's capability to build autocalibrated and reconfigurable systems, a temperature measurement system was designed and tested. The proposed method is an improvement over classic autocalibration methodologies, because it impacts the design process of intelligent sensors, autocalibration methodologies, and their associated factors, like time and cost.
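For context, the polynomial linearization baseline the ANN is compared against fits an inverse map from raw sensor output back to the physical quantity, using only the calibration points. A toy numpy sketch with a hypothetical nonlinear temperature sensor (offset, gain error, and a mild quadratic term — all values invented for illustration):

```python
import numpy as np

def sensor(T):
    """Hypothetical nonlinear sensor: offset, gain error, mild quadratic."""
    return 0.8 * T + 0.002 * T ** 2 + 5.0

T_cal = np.linspace(0.0, 100.0, 9)      # nine calibration points
v_cal = sensor(T_cal)

# Inverse-map calibration: polynomial from raw output back to temperature
calibrate = np.poly1d(np.polyfit(v_cal, T_cal, deg=3))

# Residual error after calibration, over the working range
T_test = np.linspace(5.0, 95.0, 50)
err = float(np.max(np.abs(calibrate(sensor(T_test)) - T_test)))
```

With nine calibration points the cubic inverse map removes the offset, gain, and most of the nonlinearity at once; the paper's point is that an ANN does better still when the nonlinearity is stronger or the calibration points fewer.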
Tsai, Shang-Yueh; Otazo, Ricardo; Posse, Stefan; Lin, Yi-Ru; Chung, Hsiao-Wen; Wald, Lawrence L; Wiggins, Graham C; Lin, Fa-Hsuan
2008-05-01
Parallel imaging has been demonstrated to reduce the encoding time of MR spectroscopic imaging (MRSI). Here we investigate up to 5-fold acceleration of 2D proton echo planar spectroscopic imaging (PEPSI) at 3T using generalized autocalibrating partial parallel acquisition (GRAPPA) with a 32-channel coil array, 1.5 cm³ voxel size, TE/TR of 15/2000 ms, and 2.1 Hz spectral resolution. Compared to an 8-channel array, the smaller RF coil elements in this 32-channel array provided a 3.1-fold and 2.8-fold increase in signal-to-noise ratio (SNR) in the peripheral region and the central region, respectively, as well as more spatially modulated information. Comparison of sensitivity encoding (SENSE) and GRAPPA reconstruction using an 8-channel array showed that both methods yielded similar quantitative metabolite measures (P > 0.1). Concentration values of N-acetyl-aspartate (NAA), total creatine (tCr), choline (Cho), myo-inositol (mI), and the sum of glutamate and glutamine (Glx) for both methods were consistent with previous studies. Using the 32-channel array coil, the mean Cramer-Rao lower bounds (CRLB) were less than 8% for NAA, tCr, and Cho and less than 15% for mI and Glx at 2-fold acceleration. At 4-fold acceleration the mean CRLB for NAA, tCr, and Cho was less than 11%. In conclusion, the use of a 32-channel coil array and GRAPPA reconstruction can significantly reduce the measurement time for mapping brain metabolites. (c) 2008 Wiley-Liss, Inc.
Parallel PWMs Based Fully Digital Transmitter with Wide Carrier Frequency Range
Zhou, Bo; Zhang, Kun; Zhou, Wenbiao; Zhang, Yanjun; Liu, Dake
2013-01-01
The carrier-frequency (CF) and intermediate-frequency (IF) pulse-width modulators (PWMs) based on delay lines are proposed, in which baseband signals are conveyed by both the positions and the pulse widths or densities of the carrier clock. By combining IF-PWM and precorrected CF-PWM, a fully digital transmitter with unit-delay autocalibration is implemented in 180 nm CMOS for high reconfigurability. The proposed architecture achieves a wide CF range of 2 MHz–1 GHz, high power efficiency of 70%, and low error vector magnitude (EVM) of 3%, with a 20 dB improvement in spectral purity compared with existing designs. PMID:24223503
SPIRiT: Iterative Self-consistent Parallel Imaging Reconstruction from Arbitrary k-Space
Lustig, Michael; Pauly, John M.
2010-01-01
A new approach to autocalibrating, coil-by-coil parallel imaging reconstruction is presented. It is a generalized reconstruction framework based on self-consistency. The reconstruction problem is formulated as an optimization that yields the solution most consistent with the calibration and acquisition data. The approach is general and can accurately reconstruct images from arbitrary k-space sampling patterns. The formulation can flexibly incorporate additional image priors such as off-resonance correction and regularization terms that appear in compressed sensing. Several iterative strategies to solve the posed reconstruction problem in both the image and k-space domains are presented. These are based on projection onto convex sets (POCS) and conjugate gradient (CG) algorithms. Phantom and in-vivo studies demonstrate efficient reconstructions from undersampled Cartesian and spiral trajectories. Reconstructions that include off-resonance correction and nonlinear ℓ1-wavelet regularization are also demonstrated. PMID:20665790
Three-dimensional through-time radial GRAPPA for renal MR angiography.
Wright, Katherine L; Lee, Gregory R; Ehses, Philipp; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole
2014-10-01
To achieve high temporal and spatial resolution for contrast-enhanced time-resolved MR angiography exams (trMRAs), fast imaging techniques such as non-Cartesian parallel imaging must be used. In this study, the three-dimensional (3D) through-time radial generalized autocalibrating partially parallel acquisition (GRAPPA) method is used to reconstruct highly accelerated stack-of-stars data for time-resolved renal MRAs. Through-time radial GRAPPA has recently been introduced as a method for non-Cartesian GRAPPA weight calibration, and a similar concept can also be used in 3D acquisitions. By combining different sources of calibration information, acquisition time can be reduced. Here, different GRAPPA weight calibration schemes are explored in simulation, and the results are applied to reconstruct undersampled stack-of-stars data. Simulations demonstrate that an accurate and efficient approach to 3D calibration is to combine a small number of central partitions with as many temporal repetitions as exam time permits. These findings were used to reconstruct renal trMRA data with an in-plane acceleration factor as high as 12.6 with respect to the Nyquist sampling criterion, where the lowest root mean squared error value of 16.4% was achieved when using a calibration scheme with 8 partitions, 16 repetitions, and a 4 projection × 8 read point segment size. 3D through-time radial GRAPPA can be used to successfully reconstruct highly accelerated non-Cartesian data. By using in-plane radial undersampling, a trMRA can be acquired with a temporal footprint of less than 4 s/frame at a spatial resolution of approximately 1.5 mm × 1.5 mm × 3 mm. © 2014 Wiley Periodicals, Inc.
Seyed Moosavi, Seyed Mohsen; Moaveni, Bijan; Moshiri, Behzad; Arvan, Mohammad Reza
2018-02-27
The present study designed skewed redundant accelerometers for a Measurement While Drilling (MWD) tool and carried out auto-calibration, fault diagnosis, and isolation of the accelerometers in this tool. The optimal structure, comprising four accelerometers, was selected and designed precisely in accordance with the physical shape of the existing MWD tool. The new four-accelerometer structure was designed, implemented, and installed on the current system, replacing the conventional orthogonal structure. Auto-calibration was performed on the skewed redundant accelerometers and on all combinations of three accelerometers. Consequently, the biases, scale factors, and misalignment factors of the accelerometers were successfully estimated. By injecting faults into the sensors of the new optimal skewed redundant structure, the fault was detected using the proposed FDI method and the faulty sensor was diagnosed and isolated. The results indicate that the system can continue to operate with at least three correct sensors.
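The detection half of such an FDI scheme can be sketched with a parity check: with four skewed sensing axes and three unknown acceleration components there is one parity relation, a vector V spanning the left null space of the geometry matrix H, and V·m is insensitive to the true acceleration but responds to a sensor fault. The geometry below is a hypothetical stand-in, not the paper's optimized structure.

```python
import numpy as np

# Hypothetical 4-axis skewed geometry: rows are unit sensing directions
H = np.vstack([np.eye(3), np.ones(3) / np.sqrt(3)])

# Parity vector: unit vector spanning the left null space of H (V @ H == 0)
u, s, vt = np.linalg.svd(H)
V = u[:, 3]

rng = np.random.default_rng(0)
a_true = np.array([0.1, -9.8, 0.3])          # true specific force
m = H @ a_true + rng.normal(0.0, 1e-3, 4)    # healthy measurements

m_faulty = m.copy()
m_faulty[2] += 0.5                           # inject a bias fault on sensor 2

def parity_residual(meas):
    """|V . m| stays near zero for healthy sensors, whatever the motion."""
    return float(abs(V @ meas))
```

With a threshold a little above the noise level, the healthy vector passes and the faulty one is flagged; isolating which sensor failed then falls to the three-sensor subset checks described in the abstract.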
Autocalibrating motion-corrected wave-encoding for highly accelerated free-breathing abdominal MRI.
Chen, Feiyu; Zhang, Tao; Cheng, Joseph Y; Shi, Xinwei; Pauly, John M; Vasanawala, Shreyas S
2017-11-01
To develop a motion-robust wave-encoding technique for highly accelerated free-breathing abdominal MRI. A comprehensive 3D wave-encoding-based method was developed to enable fast free-breathing abdominal imaging: (a) auto-calibration for wave-encoding was designed to avoid an extra scan for coil sensitivity measurement; (b) intrinsic butterfly navigators were used to track respiratory motion; (c) variable-density sampling was included to enable compressed sensing; (d) golden-angle radial-Cartesian hybrid view-ordering was incorporated to improve motion robustness; and (e) localized rigid motion correction was combined with parallel imaging compressed sensing reconstruction to reconstruct the highly accelerated wave-encoded datasets. The proposed method was tested on six subjects, and image quality was compared with a standard accelerated Cartesian acquisition both with and without respiratory triggering. Inverse gradient entropy and normalized gradient squared metrics were calculated, and paired t-tests were used to assess whether image quality improved. For respiratory-triggered scans, wave-encoding significantly reduced residual aliasing and blurring compared with the standard Cartesian acquisition (P < 0.05). For non-respiratory-triggered scans, the proposed method yielded significantly better motion correction compared with the standard motion-corrected Cartesian acquisition (P < 0.01). The proposed methods can reduce motion artifacts and improve the overall image quality of highly accelerated free-breathing abdominal MRI. Magn Reson Med 78:1757-1766, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Kaufmann, Tobias; Völker, Stefan; Gunesch, Laura; Kübler, Andrea
2012-01-01
Brain-computer interfaces (BCI) based on event-related potentials (ERP) allow for selection of characters from a visually presented character-matrix and thus provide a communication channel for users with neurodegenerative disease. Although they have been a topic of research for more than 20 years and have repeatedly been proven a reliable communication method, BCIs are almost exclusively used in experimental settings, handled by qualified experts. This study investigates whether ERP-BCIs can be handled independently by laymen without expert support, which is indispensable for establishing BCIs in end-users' daily life situations. Furthermore, we compared the classic character-by-character text entry against a predictive text entry (PTE) that directly incorporates predictive text into the character-matrix. N = 19 BCI novices handled a user-centered ERP-BCI application on their own without expert support. The software individually adjusted classifier weights and control parameters in the background, invisible to the user (auto-calibration). All participants were able to operate the software on their own and to twice correctly spell a sentence with the auto-calibrated classifier (once with PTE, once without). Our PTE increased spelling speed and, importantly, did not reduce accuracy. In sum, this study demonstrates the feasibility of auto-calibrating ERP-BCI use, independently by laymen, and the strong benefit of integrating predictive text directly into the character-matrix.
Duan, Jizhong; Liu, Yu; Jing, Peiguang
2018-02-01
Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging, which can be formulated as a regularized SPIRiT problem. The Projection Onto Convex Sets (POCS) method has been used to solve the formulated regularized SPIRiT problem; however, the quality of the reconstructed image still needs to be improved. Although methods such as NonLinear Conjugate Gradients (NLCG) can achieve higher spatial resolution, they demand very complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with the JTV and JL1 regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms, the latter solved by our proposed split Bregman based denoising algorithm, and adopts the Barzilai and Borwein method to update the step size. Experiments on two in vivo datasets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for datasets with 8 channels. In particular, it is 2 times faster than ADMM for the dataset with 32 channels. Copyright © 2017 Elsevier Inc. All rights reserved.
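The Barzilai-Borwein (BB) step-size rule mentioned above chooses the step from the last two iterates and gradients. A generic sketch on a toy quadratic, standing in for the smooth (gradient) subproblem; the SPIRiT objective itself is not reproduced:

```python
import numpy as np

# Toy strictly convex quadratic f(x) = 0.5 x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b

x_prev = np.zeros(2)
g_prev = grad(x_prev)
x = np.array([1.0, 1.0])
for _ in range(30):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:     # converged: stop before s, y vanish
        break
    s, y = x - x_prev, g - g_prev     # BB difference vectors
    alpha = (s @ s) / (s @ y)         # BB1 ("long") step size
    x_prev, g_prev = x, g
    x = x - alpha * g                 # plain gradient step with the BB step
print(np.round(x, 6))                 # ≈ A^{-1} b = [0.2, 0.4]
```

The appeal of the BB rule in this setting is that it gives near-Newton convergence speed at the cost of one extra inner product per iteration, with no line search.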
Prototype of an auto-calibrating, context-aware, hybrid brain-computer interface.
Faller, J; Torrellas, S; Miralles, F; Holzner, C; Kapeller, C; Guger, C; Bund, J; Müller-Putz, G R; Scherer, R
2012-01-01
We present the prototype of a context-aware framework that allows users to control smart home devices and to access internet services via a hybrid BCI system consisting of an auto-calibrating sensorimotor rhythm (SMR) based BCI and another assistive device (Integra Mouse mouth joystick). While there is extensive literature describing the merit of hybrid BCIs, auto-calibrating and co-adaptive ERD BCI training paradigms, specialized BCI user interfaces, context-awareness, and smart home control, there is, up to now, no system that includes all these concepts in one integrated, easy-to-use framework that can truly benefit individuals with severe functional disabilities by increasing independence and social inclusion. Here we integrate all these technologies in a prototype framework that does not require expert knowledge or excess time for calibration. In a first pilot study, 3 healthy volunteers successfully operated the system using input signals from an ERD BCI and an Integra Mouse, reaching average positive predictive values (PPV) of 72 and 98%, respectively. Based on what we learned here, we plan to improve the system for a test with a larger number of healthy volunteers so we can soon bring the system to benefit individuals with severe functional disability.
Dynamic 2D self-phase-map Nyquist ghost correction for simultaneous multi-slice echo planar imaging.
Yarach, Uten; Tung, Yi-Hang; Setsompop, Kawin; In, Myung-Ho; Chatnuntawech, Itthi; Yakupov, Renat; Godenschweger, Frank; Speck, Oliver
2018-02-09
To develop a reconstruction pipeline that intrinsically accounts for both simultaneous multislice echo planar imaging (SMS-EPI) reconstruction and dynamic slice-specific Nyquist ghost correction in time-series data. After 1D slice-group average phase correction, the separate-polarity (i.e., even and odd echo) SMS-EPI data were unaliased by slice GeneRalized Autocalibrating Partial Parallel Acquisition (GRAPPA). The slice-unaliased even and odd echoes were then jointly reconstructed using a model-based framework, extended for SMS-EPI reconstruction, that estimates a 2D self-phase map, corrects dynamic slice-specific phase errors, and combines data from all coils and echoes to obtain the final images. The percentage ghost-to-signal ratios (%GSR) and their temporal variations for a multiband factor of 3 with in-plane acceleration Ry = 2 and a field of view/4 shift in a human brain, obtained with the proposed dynamic 2D and the standard 1D phase corrections, were 1.37 ± 0.11 and 2.66 ± 0.16, respectively. Even with a large regularization parameter λ applied in the proposed reconstruction, the smoothing effect in fMRI activation maps was comparable to a very small Gaussian kernel of size 1 × 1 × 1 mm³. The proposed reconstruction pipeline reduced slice-specific phase errors in SMS-EPI, resulting in a reduction of the GSR. It is applicable to functional MRI studies because the smoothing effect caused by the regularization parameter selection can be kept minimal in a blood-oxygen-level-dependent activation map. © 2018 International Society for Magnetic Resonance in Medicine.
Holdsworth, Samantha J; Aksoy, Murat; Newbould, Rexford D; Yeom, Kristen; Van, Anh T; Ooi, Melvyn B; Barnes, Patrick D; Bammer, Roland; Skare, Stefan
2012-10-01
To develop and implement a clinical DTI technique suitable for the pediatric setting that retrospectively corrects for large motion without the need for rescanning and/or reacquisition strategies, and to deliver high-quality DTI images (both in the presence and absence of large motion) using procedures that reduce image noise and artifacts. We implemented an in-house-built generalized autocalibrating partially parallel acquisitions (GRAPPA)-accelerated diffusion tensor (DT) echo-planar imaging (EPI) sequence at 1.5T and 3T on 1600 patients between 1 month and 18 years old. To reconstruct the data, we developed fully automated, tailored reconstruction software that selects the best GRAPPA and ghost calibration weights; performs 3D rigid-body realignment with importance weighting; and employs phase correction and complex averaging to lower Rician noise and reduce phase artifacts. For select cases we investigated the use of an additional volume rejection criterion and b-matrix correction for large motion. The DTI image reconstruction procedures developed here were extremely robust in correcting for motion, failing on only three subjects, while providing the radiologists high-quality data for routine evaluation. This work suggests that, apart from the rare instance of continuous motion throughout the scan, high-quality DTI brain data can be acquired using our proposed integrated sequence and reconstruction, which takes a retrospective approach to motion correction. In addition, we demonstrate a substantial improvement in overall image quality by combining phase correction with complex averaging, which reduces the Rician noise that biases noisy data. Copyright © 2012 Wiley Periodicals, Inc.
Coil Compression for Accelerated Imaging with Cartesian Sampling
Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael
2012-01-01
MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High-quality compression of in-vivo 3D data from a 32-channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
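The per-location building block of such schemes is an SVD-based projection onto a few virtual coils. A minimal sketch with synthetic data; the paper's per-location compression plus alignment step is not reproduced here:

```python
import numpy as np

def compress_coils(data, n_virtual):
    """SVD coil compression: project multi-channel samples onto the dominant
    right singular vectors (the virtual coils). The paper's method additionally
    compresses separately at each fully sampled spatial location and then
    aligns the virtual-coil bases; only the basic building block is shown."""
    _, _, vh = np.linalg.svd(data, full_matrices=False)
    return data @ vh[:n_virtual].conj().T      # (n_samples, n_virtual)

# Hypothetical data: 8 channels that are noisy mixtures of 2 underlying sources
rng = np.random.default_rng(1)
sources = rng.standard_normal((500, 2))
mixing = rng.standard_normal((2, 8))
data = sources @ mixing + 0.01 * rng.standard_normal((500, 8))

compressed = compress_coils(data, n_virtual=2)
print(compressed.shape)                        # → (500, 2)
```

Because the underlying data are (nearly) rank-2, the two virtual coils retain almost all of the signal energy while quartering the channel count.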
Smart System for Bicarbonate Control in Irrigation for Hydroponic Precision Farming
Cambra, Carlos; Sendra, Sandra; Lloret, Jaime; Lacuesta, Raquel
2018-01-01
Improving sustainability in agriculture is nowadays an important challenge. The automation of irrigation processes via low-cost sensors can help spread technological advances in a sector strongly influenced by economic costs. This article presents an auto-calibrated pH sensor able to detect and adjust imbalances in the pH levels of the nutrient solution used in hydroponic agriculture. The sensor is composed of a pH probe and a set of micropumps that sequentially pour both the liquid solutions used to maintain the sensor calibration and the water samples from the channels that contain the nutrient solution. To implement our architecture, we use an auto-calibrated pH sensor connected to a wireless node. Several nodes compose the wireless sensor network (WSN) that controls our greenhouse. The sensors periodically measure the pH level of each hydroponic support and send the information to a database (DB), which stores and analyzes the data to notify farmers about the measurements. The data can then be accessed through a user-friendly, web-based interface reachable over the Internet from desktop or mobile devices. This paper also shows the design and test bench for both the auto-calibrated pH sensor and the wireless network to check their correct operation. PMID:29693611
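A probe of this kind is typically recalibrated with a two-point buffer measurement; a minimal sketch, assuming pH 4.0 and 7.0 reference buffers and illustrative probe voltages (not the paper's firmware):

```python
def two_point_calibration(v_buffer4, v_buffer7):
    """Two-point pH calibration sketch: probe voltages measured in two dosed
    reference buffers (assumed pH 4.0 and 7.0) fix the slope and offset of
    the voltage-to-pH line. All values are illustrative."""
    slope = (7.0 - 4.0) / (v_buffer7 - v_buffer4)   # pH units per mV
    offset = 7.0 - slope * v_buffer7
    return lambda v_mv: slope * v_mv + offset

# Hypothetical readings: +177.5 mV at pH 4, 0 mV at pH 7 (close to the ideal
# Nernstian slope of ~59.2 mV per pH unit at 25 °C)
to_ph = two_point_calibration(177.5, 0.0)
print(round(to_ph(59.16), 2))   # → 6.0
```

Re-running this calibration each time the micropumps dose the buffers is what keeps the probe's drifting slope and offset from corrupting the nutrient-solution readings.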
NASA Astrophysics Data System (ADS)
Luo, Liancong; Hamilton, David; Lan, Jia; McBride, Chris; Trolle, Dennis
2018-03-01
Automated calibration of complex deterministic water quality models with a large number of biogeochemical parameters can reduce time-consuming iterative simulations involving empirical judgements of model fit. We undertook autocalibration of the one-dimensional hydrodynamic-ecological lake model DYRESM-CAEDYM, using a Monte Carlo sampling (MCS) method, in order to test the applicability of this procedure for shallow, polymictic Lake Rotorua (New Zealand). The calibration procedure involved independently minimizing the root-mean-square error (RMSE) and maximizing the Pearson correlation coefficient (r) and Nash-Sutcliffe efficiency coefficient (Nr) for comparisons of model state variables against measured data. An assigned number of parameter permutations was used for 10 000 simulation iterations. The "optimal" temperature calibration produced a RMSE of 0.54 °C, Nr value of 0.99, and r value of 0.98 through the whole water column based on comparisons with 540 observed water temperatures collected between 13 July 2007 and 13 January 2009. The modeled bottom dissolved oxygen concentration (20.5 m below surface) was compared with 467 available observations. The calculated RMSE of the simulations compared with the measurements was 1.78 mg L-1, the Nr value was 0.75, and the r value was 0.87. The autocalibrated model was further tested on an independent data set by simulating bottom-water hypoxia events from 15 January 2009 to 8 June 2011 (875 days). This verification produced an accurate simulation of five hypoxic events corresponding to DO < 2 mg L-1 during the summers of 2009-2011. The RMSE was 2.07 mg L-1, the Nr value 0.62, and the r value 0.81, based on the available data set of 738 days. The autocalibration software for DYRESM-CAEDYM developed here is substantially less time-consuming and more efficient in parameter optimization than traditional manual calibration, which has been the standard tool practiced for similar complex water quality models.
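The three goodness-of-fit criteria used above (RMSE, Nash-Sutcliffe Nr, Pearson r) can be computed directly from paired observed/simulated series; the sample values below are hypothetical:

```python
import numpy as np

def fit_metrics(obs, sim):
    """RMSE, Nash-Sutcliffe efficiency (Nr), and Pearson correlation (r)
    for paired observed/simulated values."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    nr = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    r = np.corrcoef(obs, sim)[0, 1]
    return rmse, nr, r

# Hypothetical water temperature series (°C) for illustration
obs = [10.0, 12.0, 15.0, 18.0, 16.0]
sim = [10.5, 11.5, 15.5, 17.5, 16.5]
rmse, nr, r = fit_metrics(obs, sim)
print(round(rmse, 2), round(nr, 2), round(r, 2))   # → 0.5 0.97 0.99
```

Note that the three criteria can disagree: a simulation with the right shape but a constant offset scores well on r while RMSE and Nr penalize it, which is why the study tracks all three.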
Wu, Yiping; Liu, Shuguang; Li, Zhengpeng; Dahal, Devendra; Young, Claudia J.; Schmidt, Gail L.; Liu, Jinxun; Davis, Brian; Sohl, Terry L.; Werner, Jeremy M.; Oeding, Jennifer
2014-01-01
Process-oriented ecological models are frequently used for predicting potential impacts of global changes such as climate and land-cover changes, which can be useful for policy making. It is critical but challenging to automatically derive optimal parameter values at different scales, especially at the regional scale, and to validate model performance. In this study, we developed an automatic calibration (auto-calibration) function for a well-established biogeochemical model, the General Ensemble Biogeochemical Modeling System (GEMS)-Erosion Deposition Carbon Model (EDCM), using data assimilation techniques: the Shuffled Complex Evolution algorithm and a model-inversion R package, the Flexible Modeling Environment (FME). The new functionality can support multi-parameter and multi-objective auto-calibration of EDCM at both the pixel and regional levels. We also developed a post-processing procedure for GEMS that provides options to save the pixel-based or aggregated county-land cover specific parameter values for subsequent simulations. In our case study, we successfully applied the updated model (EDCM-Auto) to a single crop pixel with a corn–wheat rotation and to a large ecological region (Level II), the Central USA Plains. The evaluation results indicate that EDCM-Auto is applicable at multiple scales and is capable of handling land cover changes (e.g., crop rotations). The model also performs well in capturing the spatial pattern of grain yield production for crops and net primary production (NPP) for other ecosystems across the region, a good example of implementing calibration and validation of ecological models with readily available survey data (grain yield) and remote sensing data (NPP) at regional and national levels. The developed platform for auto-calibration can readily be expanded to incorporate other model inversion algorithms and R packages, and can also be applied to other ecological models.
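The calibration objective (minimize model-observation error over bounded parameters) can be illustrated with a naive random search on a toy two-parameter model; this stands in for, and is far simpler than, the Shuffled Complex Evolution algorithm used with FME:

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_model(params, forcing):
    """Stand-in for EDCM: a fake two-parameter linear response; the real
    biogeochemical model and its inversion are far richer than this."""
    a, b = params
    return a * forcing + b

def calibrate(obs, forcing, bounds, n_iter=500):
    """Naive random-search calibration minimizing RMSE, illustrating only the
    objective being optimized, not the Shuffled Complex Evolution algorithm."""
    best, best_err = None, np.inf
    lo, hi = np.array(bounds)[:, 0], np.array(bounds)[:, 1]
    for _ in range(n_iter):
        cand = lo + (hi - lo) * rng.random(2)     # uniform draw in the box
        err = np.sqrt(np.mean((toy_model(cand, forcing) - obs) ** 2))
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

forcing = np.linspace(0, 1, 50)
obs = 2.0 * forcing + 0.5                 # "truth" generated with a=2, b=0.5
params, err = calibrate(obs, forcing, bounds=[(0, 5), (0, 2)])
print(err < 0.2)                          # random search lands near the truth
```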
Soyer, Philippe; Lagadec, Matthieu; Sirol, Marc; Dray, Xavier; Duchat, Florent; Vignaud, Alexandre; Fargeaudou, Yann; Placé, Vinciane; Gault, Valérie; Hamzi, Lounis; Pocard, Marc; Boudiaf, Mourad
2010-02-11
Our objective was to determine the diagnostic accuracy of a free-breathing diffusion-weighted single-shot echo-planar magnetic resonance imaging (FBDW-SSEPI) technique with parallel imaging and a high diffusion factor value (b = 1000 s/mm²) in the detection of primary rectal adenocarcinomas. Thirty-one patients (14 M and 17 F; mean age 67 years) with histopathologically proven primary rectal adenocarcinomas and 31 patients without rectal malignancies (14 M and 17 F; mean age 63.6 years) were examined with FBDW-SSEPI (repetition time (TR)/echo time (TE) 3900/91 ms, gradient strength 45 mT/m, acquisition time 2 min) at 1.5 T using generalized autocalibrating partially parallel acquisitions (GRAPPA, acceleration factor 2) and a b value of 1000 s/mm². Apparent diffusion coefficients (ADCs) of rectal adenocarcinomas and normal rectal wall were measured. FBDW-SSEPI images were evaluated for tumour detection by 2 readers. Sensitivity, specificity, accuracy and Youden score for rectal adenocarcinoma detection were calculated with their 95% confidence intervals (CI) for ADC value measurement and visual image analysis. Rectal adenocarcinomas had significantly lower ADCs (mean 1.036 × 10⁻³ ± 0.107 × 10⁻³ mm²/s; median 1.015 × 10⁻³ mm²/s; range (0.827-1.239) × 10⁻³ mm²/s) compared with the rectal wall of control subjects (mean 1.387 × 10⁻³ ± 0.106 × 10⁻³ mm²/s; median 1.385 × 10⁻³ mm²/s; range (1.176-1.612) × 10⁻³ mm²/s) (p < 0.0001). Using a threshold value ≤ 1.240 × 10⁻³ mm²/s, all rectal adenocarcinomas were correctly categorized and 100% sensitivity (31/31; 95% CI 95-100%), 94% specificity (31/33; 95% CI 88-100%), 97% accuracy (60/62; 95% CI 92-100%) and a Youden index of 0.94 were obtained for the diagnosis of rectal adenocarcinoma. 
FBDW-SSEPI image analysis allowed depiction of all rectal adenocarcinomas but resulted in 2 false-positive findings, yielding 100% sensitivity (31/31; 95% CI 95-100%), 94% specificity (31/33; 95% CI 88-100%), 97% accuracy (60/62; 95% CI 92-100%) and Youden index 0.94 for the diagnosis of primary rectal adenocarcinoma. We can conclude that FBDW-SSEPI using parallel imaging and high b value may be helpful in the detection of primary rectal adenocarcinomas.
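The reported diagnostic statistics follow directly from the 2×2 counts. The abstract's denominators are slightly inconsistent (31 controls enrolled vs. a 31/33 specificity fraction); the sketch below assumes 31 controls with 2 false positives, which reproduces the stated accuracy of 60/62:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, accuracy, and Youden index from 2x2 counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fn + tn + fp)
    youden = sens + spec - 1.0
    return sens, spec, acc, youden

# All 31 tumours fell below the 1.240e-3 mm^2/s ADC cut-off (no false
# negatives); 2 of the 31 controls were falsely positive (assumed counts)
sens, spec, acc, youden = diagnostic_metrics(tp=31, fn=0, tn=29, fp=2)
print(round(sens, 2), round(spec, 2), round(acc, 2), round(youden, 2))
# → 1.0 0.94 0.97 0.94
```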
Riffel, Philipp; Michaely, Henrik J; Morelli, John N; Paul, Dominik; Kannengiesser, Stephan; Schoenberg, Stefan O; Haneder, Stefan
2015-04-01
The purpose of this study was to evaluate the feasibility and technical quality of a zoomed three-dimensional (3D) turbo spin-echo (TSE) sampling perfection with application optimized contrasts using different flip-angle evolutions (SPACE) sequence of the lumbar spine. In this prospective feasibility study, nine volunteers underwent a 3-T magnetic resonance examination of the lumbar spine including 1) a conventional 3D T2-weighted (T2w) SPACE sequence with generalized autocalibrating partially parallel acquisition technique acceleration factor 2 and 2) a zoomed 3D T2w SPACE sequence with a reduced field of view (reduction factor 2). Images were evaluated with regard to image sharpness, signal homogeneity, and the presence of artifacts by two experienced radiologists. For quantitative analysis, signal-to-noise ratio (SNR) values were calculated. Image sharpness of anatomic structures was statistically significantly greater with zoomed SPACE (P < .0001), whereas signal homogeneity was statistically significantly greater with conventional SPACE (cSPACE; P = .0003). There were no statistically significant differences in the extent of artifacts. Acquisition times were 8:20 minutes for cSPACE and 6:30 minutes for zoomed SPACE. Readers 1 and 2 selected zoomed SPACE as the preferred sequence in five of nine cases; in two of nine cases, both sequences were rated as equally preferred by both readers. SNR values were statistically significantly greater with cSPACE. In comparison to the cSPACE sequence, zoomed SPACE imaging of the lumbar spine provides sharper images in conjunction with a 25% reduction in acquisition time. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
Simultaneous multi-slice combined with PROPELLER.
Norbeck, Ola; Avventi, Enrico; Engström, Mathias; Rydén, Henric; Skare, Stefan
2018-08-01
Simultaneous multi-slice (SMS) imaging is an advantageous method for accelerating MRI scans, allowing reduced scan time, increased slice coverage, or high temporal resolution with limited image quality penalties. In this work we combine the advantages of SMS acceleration with the motion correction and artifact reduction capabilities of the PROPELLER technique. A PROPELLER sequence was developed with support for CAIPIRINHA and phase-optimized multiband radio frequency pulses. To minimize the time spent on acquiring calibration data, both in-plane generalized autocalibrating partial parallel acquisition (GRAPPA) and slice-GRAPPA weights for all PROPELLER blade angles were calibrated on a single fully sampled PROPELLER blade volume. Therefore, the proposed acquisition included a single fully sampled blade volume, with the remaining blades accelerated in both the phase and slice encoding directions without additional autocalibrating signal lines. Comparison to 3D RARE was performed, as well as a demonstration of 3D motion correction performance on the SMS PROPELLER data. We show that PROPELLER acquisitions can be efficiently accelerated with SMS using a short embedded calibration. The potential in combining these two techniques was demonstrated with a high-quality 1.0 × 1.0 × 1.0 mm³ resolution T2-weighted volume, free from banding artifacts and capable of 3D retrospective motion correction, with higher effective resolution compared to 3D RARE. With the combination of SMS acceleration and PROPELLER imaging, thin-sliced, reformattable T2-weighted image volumes with 3D retrospective motion correction capabilities can be rapidly acquired with low sensitivity to flow and head motion. Magn Reson Med 80:496-506, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Kamesh Iyer, Srikant; Tasdizen, Tolga; Burgon, Nathan; Kholmovski, Eugene; Marrouche, Nassir; Adluru, Ganesh; DiBella, Edward
2016-09-01
Current late gadolinium enhancement (LGE) imaging of left atrial (LA) scar or fibrosis is relatively slow and requires 5-15 min to acquire an undersampled (R = 1.7) 3D navigated dataset. The GeneRalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) based parallel imaging method is the current clinical standard for accelerating 3D LGE imaging of the LA and permits an acceleration factor of ~R = 1.7. Two compressed sensing (CS) methods have been developed to achieve higher acceleration factors: a patch-based collaborative filtering technique tested with acceleration factor R ~ 3, and a technique that uses a 3D radial stack-of-stars acquisition pattern (R ~ 1.8) with a 3D total variation constraint. The long reconstruction time of these CS methods makes them unwieldy to use, especially the patch-based collaborative filtering technique. In addition, the effect of CS techniques on the quantification of the percentage of scar/fibrosis is not known. We sought to develop a practical compressed sensing method for imaging the LA at high acceleration factors. In order to develop a clinically viable method with a short reconstruction time, a Split Bregman (SB) reconstruction method with 3D total variation (TV) constraints was developed and implemented. The method was tested on 8 atrial fibrillation patients (4 pre-ablation and 4 post-ablation datasets). Blur metric, normalized mean squared error, and peak signal-to-noise ratio were used as metrics to analyze the quality of the reconstructed images. Quantification of the extent of LGE was performed on the undersampled images and compared with the fully sampled images. Quantification of scar from post-ablation datasets and of fibrosis from pre-ablation datasets showed that acceleration factors up to R ~ 3.5 gave good 3D LGE images of the LA wall, using a 3D TV constraint and constrained SB methods. This corresponds to reducing the scan time by half compared to currently used GRAPPA methods. 
Reconstruction of 3D LGE images using the SB method was over 20 times faster than standard gradient descent methods. Copyright © 2016 Elsevier Inc. All rights reserved.
Bollache, Emilie; Barker, Alex J; Dolan, Ryan Scott; Carr, James C; van Ooij, Pim; Ahmadian, Rouzbeh; Powell, Alex; Collins, Jeremy D; Geiger, Julia; Markl, Michael
2018-01-01
To assess the performance of highly accelerated free-breathing aortic four-dimensional (4D) flow MRI acquired in under 2 minutes compared to conventional respiratory-gated 4D flow. Eight k-t accelerated nongated 4D flow MRI protocols (parallel MRI with extended and averaged generalized autocalibrating partially parallel acquisition kernels [PEAK GRAPPA], R = 5, TRes = 67.2 ms) using four ky-kz Cartesian sampling patterns (linear, center-out, out-center-out, random) and two spatial resolutions (SRes1 = 3.5 × 2.3 × 2.6 mm³, SRes2 = 4.5 × 2.3 × 2.6 mm³) were compared in vitro (aortic coarctation flow phantom) and in 10 healthy volunteers to conventional 4D flow (16 mm navigator acceptance window; R = 2; TRes = 39.2 ms; SRes = 3.2 × 2.3 × 2.4 mm³). The best k-t accelerated approach was further assessed in 10 patients with aortic disease. The k-t accelerated in vitro aortic peak flow (Qmax), net flow (Qnet), and peak velocity (Vmax) were lower than the conventional 4D flow indices by ≤4.7%, ≤11%, and ≤22%, respectively. In vivo, k-t accelerated acquisitions were significantly shorter but showed a trend toward lower image quality compared to conventional 4D flow. Hemodynamic indices for linear and out-center-out k-space sampling were in agreement with conventional 4D flow (Qmax ≤ 13%, Qnet ≤ 13%, Vmax ≤ 17%, P > 0.05). Aortic 4D flow MRI in under 2 minutes is feasible with moderate underestimation of flow indices. Differences between k-space sampling patterns suggest an opportunity to mitigate image artifacts by an optimal trade-off between scan time, acceleration, and k-space sampling. Magn Reson Med 79:195-207, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
Pandit, Prachi; Rivoire, Julien; King, Kevin; Li, Xiaojuan
2016-03-01
Quantitative T1ρ imaging is beneficial for early detection of osteoarthritis but has seen limited clinical use due to long scan times. In this study, we evaluated the feasibility of accelerated T1ρ mapping for knee cartilage quantification using a combination of compressed sensing (CS) and data-driven parallel imaging (ARC, Autocalibrating Reconstruction for Cartesian sampling). A sequential combination of ARC and CS, both during data acquisition and reconstruction, was used to accelerate the acquisition of T1ρ maps. Phantom, ex vivo (porcine knee), and in vivo (human knee) imaging was performed on a GE 3T MR750 scanner. T1ρ quantification after CS-accelerated acquisition was compared with non-CS-accelerated acquisition for various cartilage compartments. Accelerating image acquisition using CS did not introduce major deviations in quantification. The coefficient of variation for the root mean squared error increased with increasing acceleration, but for in vivo measurements it stayed under 5% for a net acceleration factor of up to 2, where the acquisition was 25% faster than the reference (ARC only). To the best of our knowledge, this is the first implementation of CS for in vivo T1ρ quantification. These early results show that this technique holds great promise in making quantitative imaging techniques more accessible for clinical applications. © 2015 Wiley Periodicals, Inc.
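The CS ingredient of the pipeline above can be illustrated with a minimal iterative soft-thresholding (ISTA) reconstruction of a sparse signal from randomly undersampled Fourier samples. This is only a 1-D sketch with made-up sizes, sparsity pattern, and regularization weight; the paper's method is 3-D and combined with ARC parallel imaging, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
x_true = np.zeros(n)
x_true[[10, 40, 90]] = [1.0, -0.8, 0.5]     # sparse 1-D "image" (assumed)

mask = rng.random(n) < 0.5                  # ~2x random undersampling
y = np.fft.fft(x_true)[mask]                # acquired k-space samples

def soft(v, t):
    """Complex soft-thresholding, the proximal map of the l1 norm."""
    return np.exp(1j * np.angle(v)) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n, dtype=complex)
for _ in range(200):                        # ISTA iterations
    full = np.zeros(n, dtype=complex)
    full[mask] = y - np.fft.fft(x)[mask]    # zero-filled k-space residual
    x = soft(x + np.fft.ifft(full), 0.01)   # gradient step + shrinkage

rel_err = float(np.linalg.norm(x.real - x_true) / np.linalg.norm(x_true))
```

The shrinkage threshold trades residual bias in the recovered amplitudes against suppression of aliasing from the random undersampling.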
Auto-calibration of GF-1 WFV images using flat terrain
NASA Astrophysics Data System (ADS)
Zhang, Guo; Xu, Kai; Huang, Wenchao
2017-12-01
Four wide field view (WFV) cameras with 16-m multispectral medium resolution and a combined swath of 800 km are onboard the Gaofen-1 (GF-1) satellite, which can increase the revisit frequency to less than 4 days and enable large-scale land monitoring. The detection and elimination of WFV camera distortions is key for subsequent applications. Due to the wide swath of WFV images, geometric calibration using either conventional methods based on the ground control field (GCF) or GCF-independent methods is problematic. This is predominantly because current GCFs in China fail to cover the whole WFV image, and most GCF-independent methods are used in close-range photogrammetry or computer vision. This study proposes an auto-calibration method using flat terrain to detect nonlinear distortions of GF-1 WFV images. First, a classic geometric calibration model is built for the GF-1 WFV camera, and at least two images with an overlap area that cover flat terrain are collected; the elevation residuals between the real elevation and that calculated by forward intersection are then used to solve for the nonlinear distortion parameters of WFV images. Experiments demonstrate that the orientation accuracy of the proposed method, evaluated using GCF check points (CPs), is within 0.6 pixel, and residual errors manifest as random errors. Validation using Google Earth CPs further proves the effectiveness of auto-calibration, and the whole scene is undistorted compared to when no calibration parameters are used. The orientation accuracy of the proposed method and the GCF method is compared; the maximum difference is approximately 0.3 pixel, and the factors behind this discrepancy are analyzed. Generally, this method can effectively compensate for distortions in the GF-1 WFV camera.
Faller, Josef; Scherer, Reinhold; Friedrich, Elisabeth V. C.; Costa, Ursula; Opisso, Eloy; Medina, Josep; Müller-Putz, Gernot R.
2014-01-01
Individuals with severe motor impairment can use event-related desynchronization (ERD) based BCIs as assistive technology. Auto-calibrating and adaptive ERD-based BCIs that users control with motor imagery tasks (“SMR-AdBCI”) have proven effective for healthy users. We aim to find an improved configuration of such an adaptive ERD-based BCI for individuals with severe motor impairment as a result of spinal cord injury (SCI) or stroke. We hypothesized that an adaptive ERD-based BCI that automatically selects a user-specific class combination from motor-related and non-motor-related mental tasks during initial auto-calibration (“Auto-AdBCI”) could allow for higher control performance than a conventional SMR-AdBCI. To answer this question we performed offline analyses on two sessions (21 data sets total) of cue-guided, five-class electroencephalography (EEG) data recorded from individuals with SCI or stroke. On data from the twelve individuals in Session 1, we first identified three bipolar derivations for the SMR-AdBCI. In a similar way, we determined three bipolar derivations and four mental tasks for the Auto-AdBCI. We then simulated both the SMR-AdBCI and the Auto-AdBCI configurations on the unseen data from the nine participants in Session 2 and compared the results. On the unseen data of Session 2 from individuals with SCI or stroke, we found that automatically selecting a user-specific class combination from motor-related and non-motor-related mental tasks during initial auto-calibration (Auto-AdBCI) significantly (p < 0.01) improved classification performance compared to an adaptive ERD-based BCI that only used motor imagery tasks (SMR-AdBCI; average accuracy of 75.7 vs. 66.3%). PMID:25368546
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreibmann, E; Shu, H; Cordova, J
Purpose: We report on an automated segmentation algorithm for defining radiation therapy target volumes using spectroscopic MR images (sMRI) acquired at a nominal voxel resolution of 100 microliters. Methods: Whole-brain sMRI combining 3D echo-planar spectroscopic imaging, generalized auto-calibrating partially parallel acquisitions, and elliptical k-space encoding was conducted on a 3T MRI scanner with a 32-channel head coil array. Metabolite maps generated include choline (Cho), creatine (Cr), and N-acetylaspartate (NAA), as well as Cho/NAA, Cho/Cr, and NAA/Cr ratio maps. Automated segmentation was achieved by concomitantly considering sMRI metabolite maps with standard contrast-enhanced (CE) imaging in a pipeline that first uses the water signal for skull stripping. Subsequently, an initial blob of the tumor region is identified by searching for regions of FLAIR abnormalities that also display reduced NAA activity, using a mean ratio correlation and morphological filters. These regions are used as the starting point for a geodesic level-set refinement that adapts the initial blob to the fine details specific to each metabolite. Results: Accuracy of the segmentation model was tested on a cohort of 12 patients with sMRI datasets acquired pre-, mid-, and post-treatment, providing a broad range of enhancement patterns. Compared to classical imaging, where heterogeneity in tumor appearance and shape across patients posed a greater challenge to the algorithm, regions of abnormal activity were easily detected in the sMRI metabolite maps when combining the detail available in the standard imaging with the local enhancement produced by the metabolites. Results can be imported into treatment planning, leading in general to an increase in the target volumes (GTV60) when using sMRI+CE MRI compared to the standard CE MRI alone.
Conclusion: Integration of automated segmentation of sMRI metabolite maps into planning is feasible and will likely streamline acceptance of this new acquisition modality in clinical practice.
Doesch, Christina; Papavassiliu, Theano; Michaely, Henrik J; Attenberger, Ulrike I; Glielmi, Christopher; Süselbeck, Tim; Fink, Christian; Borggrefe, Martin; Schoenberg, Stefan O
2013-09-01
The purpose of this study was to compare automated, motion-corrected, color-encoded (AMC) perfusion maps with qualitative visual analysis of adenosine stress cardiovascular magnetic resonance imaging for detection of flow-limiting stenoses. Myocardial perfusion measurements applying the standard adenosine stress imaging protocol and a saturation-recovery temporal generalized autocalibrating partially parallel acquisition (t-GRAPPA) turbo fast low angle shot (Turbo FLASH) magnetic resonance imaging sequence were performed in 25 patients using a 3.0-T MAGNETOM Skyra (Siemens Healthcare Sector, Erlangen, Germany). Perfusion studies were analyzed using AMC perfusion maps and qualitative visual analysis. Angiographically detected coronary artery (CA) stenoses greater than 75% or 50% or more with a myocardial perfusion reserve index less than 1.5 were considered as hemodynamically relevant. Diagnostic performance and time requirement for both methods were compared. Interobserver and intraobserver reliability were also assessed. A total of 29 CA stenoses were included in the analysis. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy for detection of ischemia on a per-patient basis were comparable using the AMC perfusion maps compared to visual analysis. On a per-CA territory basis, the attribution of an ischemia to the respective vessel was facilitated using the AMC perfusion maps. Interobserver and intraobserver reliability were better for the AMC perfusion maps (concordance correlation coefficient, 0.94 and 0.93, respectively) compared to visual analysis (concordance correlation coefficient, 0.73 and 0.79, respectively). In addition, in comparison to visual analysis, the AMC perfusion maps were able to significantly reduce analysis time from 7.7 (3.1) to 3.2 (1.9) minutes (P < 0.0001). The AMC perfusion maps yielded a diagnostic performance on a per-patient and on a per-CA territory basis comparable with the visual analysis. 
Furthermore, this approach demonstrated higher interobserver and intraobserver reliability as well as a better time efficiency when compared to visual analysis.
Rocket measurement of auroral partial parallel distribution functions
NASA Astrophysics Data System (ADS)
Lin, C.-A.
1980-01-01
The auroral partial parallel distribution functions are obtained by using the observed energy spectra of electrons. The experiment package was launched by a Nike-Tomahawk rocket from Poker Flat, Alaska, over a bright auroral band and covered an altitude range of up to 180 km. Calculated partial distribution functions are presented with emphasis on their slopes. The implications of the slopes are discussed. It should be pointed out that the slope of the partial parallel distribution function obtained from one energy spectrum will be changed by superposing another energy spectrum on it.
USDA-ARS?s Scientific Manuscript database
The progressive improvement of computer science and development of auto-calibration techniques means that calibration of simulation models is no longer a major challenge for watershed planning and management. Modelers now increasingly focus on challenges such as improved representation of watershed...
Digital tomosynthesis mammography using a parallel maximum-likelihood reconstruction method
NASA Astrophysics Data System (ADS)
Wu, Tao; Zhang, Juemin; Moore, Richard; Rafferty, Elizabeth; Kopans, Daniel; Meleis, Waleed; Kaeli, David
2004-05-01
A parallel reconstruction method, based on an iterative maximum likelihood (ML) algorithm, is developed to provide fast reconstruction for digital tomosynthesis mammography. Tomosynthesis mammography acquires 11 low-dose projections of a breast by moving an x-ray tube over a 50° angular range. In parallel reconstruction, each projection is divided into multiple segments along the chest-to-nipple direction. Using the 11 projections, segments located at the same distance from the chest wall are combined to compute a partial reconstruction of the total breast volume. The shape of the partial reconstruction forms a thin slab, angled toward the x-ray source at the 0° projection angle. The reconstruction of the total breast volume is obtained by merging the partial reconstructions. The overlap region between neighboring partial reconstructions and neighboring projection segments is utilized to compensate for the incomplete data at the boundary locations present in the partial reconstructions. A serial execution of the reconstruction is compared to a parallel implementation, using clinical data. The serial code was run on a PC with a single Pentium IV 2.2 GHz CPU. The parallel implementation was developed using MPI and run on a 64-node Linux cluster using 800 MHz Itanium CPUs. The serial reconstruction for a medium-sized breast (5 cm thickness, 11 cm chest-to-nipple distance) takes 115 minutes, while a parallel implementation takes only 3.5 minutes. The reconstruction time for a larger breast using a serial implementation takes 187 minutes, while a parallel implementation takes 6.5 minutes. No significant differences were observed between the reconstructions produced by the serial and parallel implementations.
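The iteration being parallelized above is the classic multiplicative MLEM update. A toy, single-process sketch on an arbitrary small system matrix (no projection segmentation or MPI, and all sizes are stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((20, 5)) + 0.1            # toy projection matrix (all positive)
x_true = np.array([1.0, 2.0, 0.5, 3.0, 1.5])
y = A @ x_true                           # noiseless projection data

x = np.ones(5)                           # nonnegative starting image
sens = A.T @ np.ones(20)                 # sensitivity image (column sums)
for _ in range(2000):
    # MLEM: x <- x * A^T(y / Ax) / A^T 1; preserves nonnegativity.
    x *= (A.T @ (y / (A @ x))) / sens

rel_err = float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

In the paper's scheme this update runs independently on projection segments, with overlap regions reconciling the slab boundaries.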
NASA Technical Reports Server (NTRS)
Toomarian, N.; Fijany, A.; Barhen, J.
1993-01-01
Evolutionary partial differential equations are usually solved by discretization in time and space, and by applying a marching-in-time procedure to data and algorithms potentially parallelized in the spatial domain.
NASA Astrophysics Data System (ADS)
Kistenev, Yury V.; Borisov, Alexey V.; Kuzmin, Dmitry A.; Bulanova, Anna A.
2016-08-01
The technique of exhaled breath sampling is discussed. A procedure for wavelength auto-calibration is proposed and tested. A comparison of the experimental data with the model absorption spectra of 5% CO2 is conducted. The classification results for three study groups, obtained using support vector machine and principal component analysis methods, are presented.
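The abstract does not spell out the auto-calibration procedure; one standard approach is to cross-correlate the measured spectrum against a reference line profile and take the best-aligning channel shift. The line positions and widths below are illustrative assumptions only.

```python
import numpy as np

# Assumed reference profile: three Gaussian absorption lines at made-up
# channel positions (hypothetical stand-ins for known CO2 line positions).
chan = np.arange(200)
ref = sum(np.exp(-0.5 * ((chan - c) / 2.0) ** 2) for c in (50, 120, 160))

true_shift = 7
measured = np.roll(ref, true_shift)        # simulated miscalibrated spectrum

# Cross-correlate over candidate shifts and pick the maximum.
shifts = np.arange(-20, 21)
xc = np.array([measured @ np.roll(ref, s) for s in shifts])
est_shift = int(shifts[np.argmax(xc)])     # recovered calibration offset
```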
Polarization Imaging Apparatus with Auto-Calibration
NASA Technical Reports Server (NTRS)
Zou, Yingyin Kevin (Inventor); Zhao, Hongzhi (Inventor); Chen, Qiushui (Inventor)
2013-01-01
A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set, a first variable phase retarder (VPR) with its optical axis aligned at 22.5 deg, a second variable phase retarder with its optical axis aligned at 45 deg, a linear polarizer, an imaging sensor for sensing the intensity images of the sample, a controller and a computer. The two variable phase retarders were controlled independently by a computer through a controller unit which generates a sequence of voltages to control the phase retardations of the first and second variable phase retarders. An auto-calibration procedure was incorporated into the polarization imaging apparatus to correct the misalignment of the first and second VPRs, as well as the half-wave voltage of the VPRs. A set of four intensity images, I(sub 0), I(sub 1), I(sub 2) and I(sub 3), of the sample were captured by the imaging sensor when the phase retardations of the VPRs were set at (0,0), (pi,0), (pi,pi) and (pi/2,pi), respectively. Then four Stokes components of a Stokes image, S(sub 0), S(sub 1), S(sub 2) and S(sub 3), were calculated using the four intensity images.
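Recovering the four Stokes images from the four intensity images amounts to inverting a 4x4 instrument matrix per pixel. The matrix below is an arbitrary invertible placeholder, not the one implied by the 22.5/45 deg VPR geometry (which the apparatus determines through calibration).

```python
import numpy as np

# Hypothetical instrument matrix mapping the Stokes vector S to the four
# intensities I measured at the four retardation settings: I = M @ S.
M = np.array([
    [0.5,  0.5, 0.0, 0.0],
    [0.5, -0.5, 0.0, 0.0],
    [0.5,  0.0, 0.5, 0.0],
    [0.5,  0.0, 0.0, 0.5],
])

def stokes_image(I):
    """I: (4, H, W) intensity stack -> (4, H, W) Stokes images S0..S3."""
    return np.linalg.solve(M, I.reshape(4, -1)).reshape(I.shape)

# Round trip on a random Stokes image.
rng = np.random.default_rng(5)
S = rng.normal(size=(4, 8, 8))
I = np.tensordot(M, S, axes=1)             # simulated intensity images
S_rec = stokes_image(I)
```

In practice the auto-calibration step refines M itself (VPR alignment and half-wave voltage) before this inversion is applied.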
Hangel, Gilbert; Strasser, Bernhard; Považan, Michal; Gruber, Stephan; Chmelík, Marek; Gajdošík, Martin; Trattnig, Siegfried
2015-01-01
This work presents a new approach for high‐resolution MRSI of the brain at 7 T in clinically feasible measurement times. Two major problems of MRSI are the long scan times for large matrix sizes and the possible spectral contamination by the transcranial lipid signal. We propose a combination of free induction decay (FID)‐MRSI with a short acquisition delay and acceleration via in‐plane two‐dimensional generalised autocalibrating partially parallel acquisition (2D‐GRAPPA) with adiabatic double inversion recovery (IR)‐based lipid suppression to allow robust high‐resolution MRSI. We performed Bloch simulations to evaluate the magnetisation pathways of lipids and metabolites, and compared the results with phantom measurements. Acceleration factors in the range 2–25 were tested in a phantom. Five volunteers were scanned to verify the value of our MRSI method in vivo. GRAPPA artefacts that cause fold‐in of transcranial lipids were suppressed via double IR, with a non‐selective symmetric frequency sweep. The use of long, low‐power inversion pulses (100 ms) reduced specific absorption rate requirements. The symmetric frequency sweep over both pulses provided good lipid suppression (>90%), in addition to a reduced loss in metabolite signal‐to‐noise ratio (SNR), compared with conventional IR suppression (52–70%). The metabolic mapping over the whole brain slice was not limited to a rectangular region of interest. 2D‐GRAPPA provided acceleration up to a factor of nine for in vivo FID‐MRSI without a substantial increase in g‐factors (<1.1). A 64 × 64 matrix can be acquired with a common repetition time of ~1.3 s in only 8 min without lipid artefacts caused by acceleration. Overall, we present a fast and robust MRSI method, using combined double IR fat suppression and 2D‐GRAPPA acceleration, which may be used in (pre)clinical studies of the brain at 7 T. © 2015 The Authors. NMR in Biomedicine published by John Wiley & Sons Ltd. PMID:26370781
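Several records here rely on GRAPPA-style kernel calibration. A minimal numpy sketch of the mechanics (fit a shift-invariant kernel on the fully sampled ACS, then synthesize the skipped lines): the two synthetic "coils" below have single-harmonic sensitivities, a deliberate assumption that makes the kernel relation exact, whereas real coil profiles only satisfy it approximately. All sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
f = rng.normal(size=N) + 1j * rng.normal(size=N)   # object k-space (1-D)

# Coil k-space as a 2-tap convolution of f (single-harmonic sensitivities).
a = np.array([1.0, 0.8])
b = np.array([0.5j, -0.3])
s = a[:, None] * f[None, :] + b[:, None] * np.roll(f, 1)[None, :]  # (2, N)

acs = np.arange(24, 40)                # fully sampled autocalibration region
acs_set = set(acs.tolist())
targets_k = acs[acs % 2 == 1]          # "missing-parity" lines inside the ACS

def sources(k):
    """Stack the acquired (even) neighbours k-1 and k+1 across both coils."""
    return np.concatenate([s[:, k - 1], s[:, k + 1]], axis=0)  # (4, len(k))

# Calibrate one kernel per coil by least squares on the ACS ...
A = sources(targets_k).T                                       # (8, 4)
W = np.linalg.lstsq(A, s[:, targets_k].T, rcond=None)[0]       # (4, 2)

# ... then synthesize the skipped (odd) lines outside the ACS.
miss = np.array([k for k in range(1, N - 1)
                 if k % 2 == 1 and k not in acs_set])
est = sources(miss).T @ W                                      # (n_miss, 2)
err = float(np.abs(est.T - s[:, miss]).max())
```

Real GRAPPA implementations use 2-D (or k-t) kernels over several neighbours and many coils; the least-squares calibration step is the same.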
NASA Technical Reports Server (NTRS)
Nguyen, Howard; Willacy, Karen; Allen, Mark
2012-01-01
KINETICS is a coupled dynamics and chemistry atmosphere model that is data intensive and computationally demanding. The potential performance gain from using a supercomputer motivates the adaptation from a serial version to a parallelized one. Although the initial parallelization had been done, bottlenecks caused by an abundance of communication calls between processors led to an unfavorable drop in performance. Before starting on the parallel optimization process, a partial overhaul was required because a large emphasis was placed on streamlining the code for user convenience and revising the program to accommodate the new supercomputers at Caltech and JPL. After the first round of optimizations, the partial runtime was reduced by a factor of 23; however, performance gains are dependent on the size of the data, the number of processors requested, and the computer used.
NASA Astrophysics Data System (ADS)
Jackson-Blake, L.
2014-12-01
Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, but even in well-studied catchments, streams are often only sampled at a fortnightly or monthly frequency. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by one process-based catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the MCMC-DREAM algorithm. Using daily rather than fortnightly data resulted in improved simulation of the magnitude of peak TDP concentrations, in turn resulting in improved model performance statistics. Marginal posteriors were better constrained by the higher frequency data, resulting in a large reduction in parameter-related uncertainty in simulated TDP (the 95% credible interval decreased from 26 to 6 μg/l). The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, leading to the recommendation that parameters should not be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. 
Secondary study aims were to highlight the subjective elements involved in auto-calibration and suggest practical improvements that could make models such as INCA-P more suited to auto-calibration and uncertainty analyses. Two key improvements include model simplification, so that all model parameters can be included in an analysis of this kind, and better documenting of recommended ranges for each parameter, to help in choosing sensible priors.
NASA Astrophysics Data System (ADS)
Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie
2016-05-01
This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and the partial separability (PS) model. In data acquisition, k-space data are moderately randomly undersampled at the central k-space navigator locations, but highly undersampled in the outer k-space for each temporal frame. In reconstruction, the navigator data are reconstructed from the undersampled data using structured low-rank matrix completion. After all the unacquired navigator data are estimated, the partially separable model is used to obtain partial k-t data. The parallel imaging method is then used to reconstruct the entire dynamic image series from the highly undersampled data. The proposed method has been shown to achieve high-quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, where the conventional PS method fails.
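The low-rank completion step can be illustrated on a toy Casorati matrix (navigator locations x time frames). The sketch below uses plain hard-thresholded SVD iterations with data consistency, a simpler unstructured variant of the structured low-rank method in the paper; sizes, rank, and sampling fraction are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
# Rank-3 toy Casorati matrix standing in for partially separable dynamic data.
U = rng.normal(size=(40, 3))
V = rng.normal(size=(3, 60))
X = U @ V

mask = rng.random(X.shape) < 0.5          # observed (sampled) entries
Y = np.where(mask, X, 0.0)                # zero-filled initialization

Z = Y.copy()
for _ in range(500):
    u, sv, vt = np.linalg.svd(Z, full_matrices=False)
    sv[3:] = 0.0                          # project onto rank-3 matrices
    Z = (u * sv) @ vt
    Z[mask] = X[mask]                     # re-impose the acquired data

rel_err = float(np.linalg.norm(Z - X) / np.linalg.norm(X))
```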
Combined Dynamic Contrast Enhanced Liver MRI and MRA Using Interleaved Variable Density Sampling
Rahimi, Mahdi Salmani; Korosec, Frank R.; Wang, Kang; Holmes, James H.; Motosugi, Utaroh; Bannas, Peter; Reeder, Scott B.
2014-01-01
Purpose To develop and evaluate a method for volumetric contrast-enhanced MR imaging of the liver, with high spatial and temporal resolutions, for combined dynamic imaging and MR angiography using a single injection of contrast. Methods An interleaved variable density (IVD) undersampling pattern was implemented in combination with a real-time-triggered, time-resolved, dual-echo 3D spoiled gradient echo sequence. Parallel imaging autocalibration lines were acquired only once during the first time-frame. Imaging was performed in ten subjects with focal nodular hyperplasia (FNH) and compared with their clinical MRI. The angiographic phase of the proposed method was compared to a dedicated MR angiogram acquired during a second injection of contrast. Results A total of 21 FNH, 3 cavernous hemangiomas, and 109 arterial segments were visualized in 10 subjects. The temporally-resolved images depicted the characteristic arterial enhancement pattern of the lesions with a 4 s update rate. Images were graded as having significantly higher quality compared to the clinical MRI. Angiograms produced from the IVD method provided non-inferior diagnostic assessment compared to the dedicated MRA. Conclusion Using an undersampled IVD imaging method, we have demonstrated the feasibility of obtaining high spatial and temporal resolution dynamic contrast-enhanced imaging and simultaneous MRA of the liver. PMID:24639130
Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P.
2016-01-01
Purpose Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. Theory and Methods The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly-accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely-used calibrationless uniformly-undersampled trajectories. Results Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high-acceleration rates relative to existing state-of-the-art reconstruction approaches. Conclusion The SENSE-LORAKS framework provides promising new opportunities for highly-accelerated MRI. PMID:27037836
Automated response matching for organic scintillation detector arrays
NASA Astrophysics Data System (ADS)
Aspinall, M. D.; Joyce, M. J.; Cave, F. D.; Plenteda, R.; Tomanin, A.
2017-07-01
This paper identifies a digitizer technology with unique features that facilitates feedback control for the realization of a software-based technique for automatically calibrating detector responses. Three such auto-calibration techniques have been developed and are described, along with an explanation of the main configuration settings and potential pitfalls. Automating this process increases repeatability, simplifies user operation, and enables remote and periodic system calibration where consistency across detectors' responses is critical.
Algorithm for automatic analysis of electro-oculographic data
2013-01-01
Background Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. Methods The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and manually scored blinks. Results The algorithm achieved 93% detection sensitivity for blinks with 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was larger than 89%, and for vertical saccades larger than 82%. The duration and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. Conclusion The developed algorithm enables reliable analysis of EOG data recorded both during EEG and as a separate metrics. PMID:24160372
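The auto-calibration idea (thresholds derived from features of the recorded signal itself rather than fixed by the user) can be sketched as below. The synthetic trace, the median + k·MAD rule, and every constant are illustrative assumptions, not the paper's actual estimator.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 250                                   # sampling rate, Hz (assumed)
n = 10 * fs
sig = rng.normal(0.0, 5.0, size=n)         # microvolt-scale baseline noise
for onset in (2 * fs, 5 * fs, 8 * fs):     # three step-like "saccades"
    sig[onset:] += 120.0

# Auto-calibrated amplitude threshold from a robust noise estimate of the
# sample-to-sample differences (median + k * scaled MAD).
vel = np.abs(np.diff(sig))
mad = np.median(np.abs(vel - np.median(vel)))
thresh = float(np.median(vel) + 10.0 * 1.4826 * mad)

above = vel > thresh
events = int(np.sum(above[1:] & ~above[:-1]) + above[0])   # rising edges
```

Because the threshold scales with the measured noise level, the same detector works unmodified across recordings with different amplitudes.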
Online pH measurement technique in seawater desalination
NASA Astrophysics Data System (ADS)
Wang, Haibo; Wu, Kaihua; Hu, Shaopeng
2009-11-01
pH measurement is essential in seawater desalination, where the glass electrode is the main pH sensor. Because the internal impedance of a glass electrode is high and the sensor signal is easily disturbed, a signal processing circuit with high input impedance was designed. Because of the high salinity of seawater and the characteristics of the glass electrode, ultrasonic cleaning technology was used to clean the pH sensor online. Temperature compensation was also designed to reduce the measurement error caused by variations in environmental temperature. Additionally, the potential drift of the pH sensor was analyzed and an automatic calibration method was proposed. To monitor pH variations online during seawater desalination, three operating modes were designed: online monitoring, ultrasonic cleaning, and auto-calibration. The current pH is measured and displayed in online monitoring mode, the pH sensor is cleaned in ultrasonic cleaning mode, and the sensor is calibrated in auto-calibration mode. Experimental results showed that the measurement technique meets the technical requirements for desalination, and that the glass electrode could be promptly cleaned online, greatly lengthening its service life.
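The temperature compensation mentioned above follows from the Nernst equation: a glass electrode's mV-per-pH slope scales linearly with absolute temperature, so the voltage-to-pH conversion should use the actual sensor temperature rather than the 25 degC calibration value. A minimal sketch, assuming an ideal Nernstian electrode referenced to a pH 7 buffer (`e0_mv` and `ph0` are illustrative parameters):

```python
import math

# Gas constant (J/mol/K) and Faraday constant (C/mol).
R, F = 8.314, 96485.0

def ph_from_voltage(e_mv, temp_c, e0_mv=0.0, ph0=7.0):
    """Convert glass-electrode voltage (mV, relative to the buffer point
    e0_mv at pH ph0) into pH, compensating the Nernst slope for the
    actual sensor temperature instead of assuming 25 degC."""
    t_kelvin = temp_c + 273.15
    slope = 1000.0 * R * t_kelvin * math.log(10) / F  # mV per pH unit
    return ph0 + (e0_mv - e_mv) / slope
```

At 25 degC the slope evaluates to the familiar 59.16 mV per pH unit, rising to about 62.1 mV at 40 degC, which is the error the compensation removes.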
New Multi-objective Uncertainty-based Algorithm for Water Resource Models' Calibration
NASA Astrophysics Data System (ADS)
Keshavarz, Kasra; Alizadeh, Hossein
2017-04-01
Water resource models are powerful tools to support the water management decision-making process and are developed to deal with a broad range of issues, including land use and climate change impact analysis, water allocation, systems design and operation, and waste load control and allocation. These models are divided into two categories, simulation and optimization models, whose calibration has been widely addressed in the literature. Efforts in recent decades have produced two main categories of auto-calibration methods: uncertainty-based algorithms such as GLUE, MCMC, and PEST, and optimization-based algorithms, including single-objective optimization such as SCE-UA and multi-objective optimization such as MOCOM-UA and MOSCEM-UA. Although algorithms that combine the capabilities of both types, such as SUFI-2, have been developed, this paper proposes a new auto-calibration algorithm that both finds optimal parameter values with respect to multiple objectives, like optimization-based algorithms, and provides interval estimations of parameters, like uncertainty-based algorithms. The algorithm is developed to improve the quality of SUFI-2 results. Based on a single objective, e.g., NSE or RMSE, SUFI-2 proposes a routine to find the best point and interval estimation of parameters and the corresponding prediction intervals (95PPU) of the time series of interest. To assess the goodness of calibration, final results are presented using two uncertainty measures: the p-factor, quantifying the percentage of observations covered by the 95PPU, and the r-factor, quantifying the degree of uncertainty. The analyst then has to select the point and interval estimation of parameters that are non-dominated with respect to both uncertainty measures.
Based on the described properties of SUFI-2, two important questions arise, and answering them motivated this research: Given that in SUFI-2 the final selection is based on the two measures, or objectives, while SUFI-2 contains no multi-objective optimization mechanism, are the final estimations Pareto-optimal? And can systematic methods be applied to select the final estimations? To address these questions, a new auto-calibration algorithm was proposed in which the uncertainty measures were treated as two objectives, and non-dominated interval estimations of parameters were found by coupling Monte Carlo simulation with Multi-Objective Particle Swarm Optimization. Both the proposed algorithm and SUFI-2 were applied to calibrate parameters of a water resources planning model of the Helleh river basin, Iran. The model is a comprehensive water quantity-quality model developed in previous research using WEAP software to analyze the impacts of different water resources management strategies, including dam construction, increasing cultivation area, adoption of more efficient irrigation technologies, and changing crop patterns. Comparing the Pareto frontier obtained by the proposed auto-calibration algorithm with SUFI-2 results revealed that the new algorithm yields a better and continuous Pareto frontier, although it is more computationally expensive. Finally, Nash and Kalai-Smorodinsky bargaining methods were used to choose a compromise interval estimation from the Pareto frontier.
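The two uncertainty measures can be computed directly from a 95PPU band. A minimal sketch following the definitions given above, with the r-factor normalized by the standard deviation of the observations (the usual SUFI-2 convention, assumed here):

```python
import numpy as np

def pr_factors(obs, lower, upper):
    """SUFI-2 style uncertainty measures for a 95PPU band:
    p-factor = fraction of observations falling inside the band,
    r-factor = mean band width divided by the std of the observations."""
    obs, lower, upper = map(np.asarray, (obs, lower, upper))
    p_factor = np.mean((obs >= lower) & (obs <= upper))
    r_factor = np.mean(upper - lower) / np.std(obs)
    return p_factor, r_factor
```

A wide band trivially raises the p-factor but also inflates the r-factor, which is exactly the trade-off that makes the two measures a natural pair of objectives.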
Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P
2017-03-01
Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely used calibrationless uniformly undersampled trajectories. Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high-acceleration rates relative to existing state-of-the-art reconstruction approaches. The SENSE-LORAKS framework provides promising new opportunities for highly accelerated MRI. Magn Reson Med 77:1021-1035, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
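For readers unfamiliar with the classical SENSE model that SENSE-LORAKS builds on, pixel-wise unaliasing for uniform undersampling can be sketched as a small least-squares problem per folded pixel. This illustrates only the standard SENSE step, not the LORAKS low-rank modeling; the array shapes and folding convention are illustrative assumptions:

```python
import numpy as np

def sense_unalias(aliased, sens, R=2):
    """Toy pixel-wise SENSE unaliasing for uniform undersampling.
    aliased: (ncoils, ny // R, nx) folded coil images
    sens:    (ncoils, ny, nx) coil sensitivity maps
    Solves a small least-squares system for each group of folded pixels."""
    ncoils, ny, nx = sens.shape
    nyr = ny // R
    out = np.zeros((ny, nx), dtype=complex)
    for y in range(nyr):
        rows = [y + r * nyr for r in range(R)]  # true pixels folded onto y
        for x in range(nx):
            S = sens[:, rows, x]                # (ncoils, R) encoding matrix
            m = aliased[:, y, x]                # measured folded values
            v, *_ = np.linalg.lstsq(S, m, rcond=None)
            out[rows, x] = v
    return out
```

With distinct coil sensitivities the per-pixel system is well conditioned and the folded image is recovered exactly in this noiseless toy setting.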
2011-02-01
Methods of Measurement: All subjects were instrumented with 3 Nonin pulse oximeter sensors (Nonin Medical, Plymouth, MN; OEM III module, 16-bit data...ring finger of the left hand. Unlike standard pulse oximeters that have autocalibration capability, the Nonin pulse oximeter did not alter the raw...stroke volume and are therefore presented as percentage change from baseline levels. The PPG and Spo2 values from the Nonin pulse oximeter sensors were
Solving Partial Differential Equations in a data-driven multiprocessor environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaudiot, J.L.; Lin, C.M.; Hosseiniyar, M.
1988-12-31
Partial differential equations can be found in a host of engineering and scientific problems. The emergence of new parallel architectures has spurred research in the definition of parallel PDE solvers. Concurrently, highly programmable systems such as data-flow architectures have been proposed for the exploitation of large-scale parallelism. The implementation of some partial differential equation solvers (such as the Jacobi method) on a tagged-token data-flow graph is demonstrated here. Asynchronous methods (chaotic relaxation) are studied, and new scheduling approaches (the Token No-Labeling scheme) are introduced in order to support the implementation of asynchronous methods in a data-driven environment. New high-level data-flow language program constructs are introduced in order to handle chaotic operations. Finally, the performance of the program graphs is demonstrated by a deterministic simulation of a message-passing data-flow multiprocessor. An analysis of the overhead in the data-flow graphs is undertaken to demonstrate the limits of parallel operations in data-flow PDE program graphs.
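The Jacobi method mentioned above updates every interior grid point from the previous iterate only, which is why it maps so naturally onto data-driven execution: all point updates within a sweep are independent. A minimal serial sketch for the 2D Laplace equation with fixed boundary values:

```python
import numpy as np

def jacobi_laplace(u, n_iter=500):
    """Jacobi relaxation for the 2D Laplace equation. Boundary values are
    held fixed; each interior point becomes the average of its four
    neighbours. The right-hand side is evaluated entirely from the
    previous iterate, so every point update in a sweep is independent."""
    u = u.copy()
    for _ in range(n_iter):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
    return u
```

With boundary data taken from a linear ramp, the iteration converges to that same ramp, the exact harmonic solution.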
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gropp, W.D.; Keyes, D.E.
1988-03-01
The authors discuss the parallel implementation of preconditioned conjugate gradient (PCG)-based domain decomposition techniques for self-adjoint elliptic partial differential equations in two dimensions on several architectures. The complexity of these methods is described on a variety of message-passing parallel computers as a function of the size of the problem, number of processors and relative communication speeds of the processors. They show that communication startups are very important, and that even the small amount of global communication in these methods can significantly reduce the performance of many message-passing architectures.
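As a reminder of the kernel inside these PCG-based methods, a plain (unpreconditioned) conjugate gradient sketch is given below. Note the two inner products per iteration: on a message-passing machine each one is a global reduction, which is exactly the kind of global communication whose startup cost the authors identify as significant.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Plain conjugate gradient for a symmetric positive-definite system.
    Each iteration needs one matrix-vector product and two inner products;
    the inner products are the global reductions on a parallel machine."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

For an n x n SPD system, CG terminates in at most n iterations in exact arithmetic; the 2 x 2 example below converges in two.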
Phased array ghost elimination.
Kellman, Peter; McVeigh, Elliot R
2006-05-01
Parallel imaging may be applied to cancel ghosts caused by a variety of distortion mechanisms, including distortions such as off-resonance or local flow, which are space variant. Phased array combining coefficients may be calculated that null ghost artifacts at known locations based on a constrained optimization, which optimizes SNR subject to the nulling constraint. The resultant phased array ghost elimination (PAGE) technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The phased array method for ghost elimination may result in greater flexibility in designing acquisition strategies. For example, in multi-shot EPI applications ghosts are typically mitigated by the use of an interleaved phase encode acquisition order. An alternative strategy is to use a sequential, non-interleaved phase encode order and cancel the resultant ghosts using PAGE parallel imaging. Cancellation of ghosts by means of phased array processing makes sequential, non-interleaved phase encode acquisition order practical, and permits a reduction in repetition time, TR, by eliminating the need for echo-shifting. Sequential, non-interleaved phase encode order has benefits of reduced distortion due to off-resonance, in-plane flow and EPI delay misalignment. Furthermore, the use of EPI with PAGE has inherent fat-water separation and has been used to provide off-resonance correction using a technique referred to as lipid elimination with an echo-shifting N/2-ghost acquisition (LEENA), and may be further generalized using the multi-point Dixon method. Other applications of PAGE include cancelling ghosts which arise due to amplitude or phase variation during the approach to steady state. Parallel imaging requires estimates of the complex coil sensitivities.
In vivo estimates may be derived by temporally varying the phase encode ordering to obtain a full k-space dataset in a scheme similar to the autocalibrating TSENSE method. This scheme is a generalization of the UNFOLD method used for removing aliasing in undersampled acquisitions. The more general scheme may be used to modulate each EPI ghost image to a separate temporal frequency as described in this paper. Copyright (c) 2006 John Wiley & Sons, Ltd.
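The constrained combining described above (optimize SNR subject to a ghost-nulling constraint) reduces to a small linear algebra problem per pixel. A hypothetical sketch: `b_signal` and `b_ghost` stand in for the coil response vectors at the true and ghost pixel locations, and an identity noise covariance is assumed unless one is supplied.

```python
import numpy as np

def page_weights(b_signal, b_ghost, noise_cov=None):
    """Minimum-noise phased-array combining weights that pass the desired
    pixel with unit gain while nulling the ghost: minimize w^H C w subject
    to B^H w = [1, 0]^T, where B = [b_signal, b_ghost]. The closed form is
    w = C^-1 B (B^H C^-1 B)^-1 e1."""
    B = np.column_stack([b_signal, b_ghost])
    C = np.eye(len(b_signal)) if noise_cov is None else np.asarray(noise_cov)
    Ci = np.linalg.inv(C)
    W = Ci @ B @ np.linalg.inv(B.conj().T @ Ci @ B)
    return W[:, 0]
```

Applying the weights to a mixture of signal and ghost then returns the signal component untouched while the ghost contribution vanishes.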
InGaAs/InP SPAD photon-counting module with auto-calibrated gate-width generation and remote control
NASA Astrophysics Data System (ADS)
Tosi, Alberto; Ruggeri, Alessandro; Bahgat Shehata, Andrea; Della Frera, Adriano; Scarcella, Carmelo; Tisa, Simone; Giudice, Andrea
2013-01-01
We present a photon-counting module based on an InGaAs/InP SPAD (Single-Photon Avalanche Diode) for detecting single photons up to 1.7 μm. The module exploits a novel architecture for generating and calibrating the gate width, along with other functions (such as module supervision, and counting and processing of detected photons). The gate width, i.e., the time interval when the SPAD is ON, is user-programmable in the range from 500 ps to 1.5 μs by means of two different delay generation methods implemented with an FPGA (Field-Programmable Gate Array). In order to compensate for chip-to-chip delay variation, an auto-calibration circuit picks out a combination of delays to best match the selected gate width. The InGaAs/InP module accepts asynchronous and aperiodic signals and introduces very low timing jitter. Moreover, the photon-counting module provides other new features, such as a microprocessor for system supervision, a touch-screen for local user interface, and an Ethernet link for smart remote control. Thanks to the fully programmable and configurable architecture, the overall instrument provides high system flexibility and can easily match the requirements of the many different applications requiring single-photon-level sensitivity in the near infrared with very low photon timing jitter.
Radial k-t SPIRiT: autocalibrated parallel imaging for generalized phase-contrast MRI.
Santelli, Claudio; Schaeffter, Tobias; Kozerke, Sebastian
2014-11-01
To extend SPIRiT to additionally exploit temporal correlations for highly accelerated generalized phase-contrast MRI, and to compare the performance of the proposed radial k-t SPIRiT method with frame-by-frame SPIRiT and radial k-t GRAPPA reconstruction for velocity and turbulence mapping in the aortic arch. Free-breathing, navigator-gated two-dimensional radial cine imaging with three-directional multi-point velocity encoding was implemented, and fully sampled data were obtained in the aortic arch of healthy volunteers. Velocities were encoded with three different first gradient moments per axis to permit quantification of mean velocity and turbulent kinetic energy. Velocity and turbulent kinetic energy maps from up to 14-fold undersampled data were compared for k-t SPIRiT, frame-by-frame SPIRiT, and k-t GRAPPA relative to the fully sampled reference. Using k-t SPIRiT, improvements in magnitude and velocity reconstruction accuracy were found. Temporally resolved magnitude profiles revealed a reduction in spatial blurring with k-t SPIRiT compared with frame-by-frame SPIRiT and k-t GRAPPA for all velocity encodings, leading to improved estimates of turbulent kinetic energy. k-t SPIRiT offers improved reconstruction accuracy at high radial undersampling factors and hence facilitates the use of generalized phase-contrast MRI in routine practice. Copyright © 2013 Wiley Periodicals, Inc.
Parallel architectures for iterative methods on adaptive, block structured grids
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1983-01-01
A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism, but this parallelism can be difficult to exploit, particularly on complex problems. One approach to extracting this parallelism is the use of special-purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains, and an adaptive elliptic algorithm which maps effectively onto it is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions; all local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.
Tao, Shengzhen; Trzasko, Joshua D; Shu, Yunhong; Weavers, Paul T; Huston, John; Gray, Erin M; Bernstein, Matt A
2016-06-01
To describe how integrated gradient nonlinearity (GNL) correction can be used within noniterative partial Fourier (homodyne) and parallel (SENSE and GRAPPA) MR image reconstruction strategies, and demonstrate that performing GNL correction during, rather than after, these routines mitigates the image blurring and resolution loss caused by postreconstruction image domain based GNL correction. Starting from partial Fourier and parallel magnetic resonance imaging signal models that explicitly account for GNL, noniterative image reconstruction strategies for each accelerated acquisition technique are derived under the same core mathematical assumptions as their standard counterparts. A series of phantom and in vivo experiments on retrospectively undersampled data were performed to investigate the spatial resolution benefit of integrated GNL correction over conventional postreconstruction correction. Phantom and in vivo results demonstrate that the integrated GNL correction reduces the image blurring introduced by the conventional GNL correction, while still correcting GNL-induced coarse-scale geometrical distortion. Images generated from undersampled data using the proposed integrated GNL strategies offer superior depiction of fine image detail, for example, phantom resolution inserts and anatomical tissue boundaries. Noniterative partial Fourier and parallel imaging reconstruction methods with integrated GNL correction reduce the resolution loss that occurs during conventional postreconstruction GNL correction while preserving the computational efficiency of standard reconstruction techniques. Magn Reson Med 75:2534-2544, 2016. © 2015 Wiley Periodicals, Inc.
Efficient Implementation of Multigrid Solvers on Message-Passing Parallel Systems
NASA Technical Reports Server (NTRS)
Lou, John
1994-01-01
We discuss our implementation strategies for finite difference multigrid partial differential equation (PDE) solvers on message-passing systems. Our target parallel architectures are Intel parallel computers: the Delta and Paragon systems.
Iterative algorithms for large sparse linear systems on parallel computers
NASA Technical Reports Server (NTRS)
Adams, L. M.
1982-01-01
Algorithms for assembling in parallel the sparse system of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering are developed. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed and results of this model for the algorithms are given.
NASA Astrophysics Data System (ADS)
Tobin, K. J.; Bennett, M. E.
2017-12-01
Over the last decade, autocalibration routines have become commonplace in watershed modeling. This approach is most often used to simulate streamflow at a basin's outlet. In alpine settings, spring and early-summer snowmelt is by far the dominant signal in the system, so a modeled watershed can easily underperform during other times of the year, a tendency noted in many prior studies. In this work, the Soil and Water Assessment Tool (SWAT) model was autocalibrated with the SUFI-2 routine. Two mountainous watersheds, from Idaho and Utah, were examined. The basins were calibrated on a monthly basis against satellite-derived evapotranspiration (ET) from the MODIS 16A2 product; the gridded MODIS product is ideally suited to deriving an estimate of ET on a subbasin basis. Soil moisture data were derived by extrapolation from in situ sites of the SNOwpack TELemetry (SNOTEL) network; previous work has indicated that in situ soil moisture can be used to derive an estimate at a significant distance (>30 km) from the in situ site. Optimized ET and soil moisture parameter values were then applied to streamflow simulations. Preliminary results indicate improved streamflow performance during both the calibration (2005-2011) and validation (2012-2014) periods. Streamflow performance was assessed not only with standard objective metrics (bias and Nash-Sutcliffe coefficients) but also in terms of baseflow accuracy, demonstrating the utility of this approach in improving watershed modeling fidelity outside the main snowmelt season.
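The standard objective metrics mentioned above can be sketched in a few lines; the Nash-Sutcliffe efficiency and percent bias below use their common textbook definitions (the exact variants used in the study are not specified here):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of model error variance
    to the variance of the observations. 1 is a perfect fit; 0 means the
    model is no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(obs, sim):
    """Percent bias: total simulated-minus-observed volume error,
    as a percentage of the total observed volume."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)
```

By construction, a simulation equal to the observed record scores NSE = 1, while a constant simulation at the observed mean scores NSE = 0.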
Gooding, Thomas Michael; McCarthy, Patrick Joseph
2010-03-02
A data collector for a massively parallel computer system obtains call-return stack traceback data for multiple nodes by retrieving partial call-return stack traceback data from each node, grouping the nodes in subsets according to the partial traceback data, and obtaining further call-return stack traceback data from a representative node or nodes of each subset. Preferably, the partial data is a respective instruction address from each node, nodes having identical instruction address being grouped together in the same subset. Preferably, a single node of each subset is chosen and full stack traceback data is retrieved from the call-return stack within the chosen node.
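The grouping step described above can be sketched as follows, assuming the preferred embodiment in which the partial data is a single instruction address per node; the node IDs and addresses are illustrative:

```python
from collections import defaultdict

def group_nodes_by_address(partial_tracebacks):
    """Group compute nodes by their partial traceback datum (one
    instruction address per node) and choose one representative node per
    group; the expensive full call-return stack retrieval then happens
    only once per group instead of once per node."""
    groups = defaultdict(list)
    for node_id, address in partial_tracebacks.items():
        groups[address].append(node_id)
    representatives = {addr: nodes[0] for addr, nodes in groups.items()}
    return representatives, dict(groups)
```

On a massively parallel machine most nodes tend to be stopped at a handful of distinct addresses, so the number of full-stack retrievals collapses from the node count to the group count.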
NASA Astrophysics Data System (ADS)
Dinç, Erdal; Ertekin, Zehra Ceren; Büker, Eda
2017-09-01
In this study, excitation-emission matrix datasets with strongly overlapping bands were processed using four different chemometric calibration algorithms, consisting of parallel factor analysis, Tucker3, three-way partial least squares, and unfolded partial least squares, for the simultaneous quantitative estimation of valsartan and amlodipine besylate in tablets. No preliminary separation step was used before applying the parallel factor analysis, Tucker3, three-way partial least squares, and unfolded partial least squares approaches to the analysis of the related drug substances in the samples. A three-way excitation-emission matrix data array was obtained by concatenating the excitation-emission matrices of the calibration set, the validation set, and commercial tablet samples. This data array was used to build the parallel factor analysis, Tucker3, three-way partial least squares, and unfolded partial least squares calibrations and to predict the amounts of valsartan and amlodipine besylate in the samples. For all the methods, calibration and prediction of valsartan and amlodipine besylate were performed in the working concentration range of 0.25-4.50 μg/mL. The validity and the performance of all the proposed methods were checked using the validation parameters. From the analysis results, it was concluded that the described two-way and three-way algorithmic methods are very useful for the simultaneous quantitative resolution and routine analysis of the related drug substances in marketed samples.
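The "unfolded" variant differs from the genuinely three-way methods only in how it arranges the data: the samples x excitation x emission cube is flattened into a two-way matrix before an ordinary PLS regression is applied. A sketch of that unfolding step, with illustrative dimensions (not those of the study):

```python
import numpy as np

# Simulated three-way EEM data cube: samples x excitation x emission.
# Dimensions here are illustrative placeholders.
n_samples, n_ex, n_em = 20, 31, 41
eem = np.random.rand(n_samples, n_ex, n_em)

# Unfolding for "unfolded PLS": each sample's excitation-emission matrix
# becomes one long row vector, giving an ordinary two-way data matrix.
X_unfolded = eem.reshape(n_samples, n_ex * n_em)
```

The operation is lossless: reshaping any row back to (n_ex, n_em) recovers that sample's original excitation-emission matrix, which is why unfolded PLS sees the same information as the three-way methods, just without the trilinear structure.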
S-191 sensor performance evaluation
NASA Technical Reports Server (NTRS)
Hughes, C. L.
1975-01-01
A final analysis was performed on the Skylab S-191 spectrometer data received from missions SL-2, SL-3, and SL-4. The repeatability and accuracy of the S-191 spectroradiometric internal calibration was determined by correlation to the output obtained from well-defined external targets. These included targets on the moon and earth as well as deep space. In addition, the accuracy of the S-191 short wavelength autocalibration was flight checked by correlation of the earth resources experimental package S-191 outputs and the Backup Unit S-191 outputs after viewing selected targets on the moon.
Preliminary flight prototype silver ion monitoring system
NASA Technical Reports Server (NTRS)
Brady, J.
1974-01-01
The design, fabrication, and testing of a preliminary flight prototype silver ion monitoring system based on potentiometric principles and utilizing a solid-state silver sulfide electrode paired with a pressurized double-junction reference electrode housing a replaceable electrolyte reservoir is described. The design provides automatic electronic calibration utilizing saturated silver bromide solution as a silver ion standard. The problem of loss of silver ion from recirculating fluid, its cause, and corrective procedures are reported. The instability of the silver sulfide electrode is discussed as well as difficulties met in implementing the autocalibration procedure.
International Space Station Columbus Payload SoLACES Degradation Assessment
NASA Technical Reports Server (NTRS)
Hartman, William A.; Schmidl, William D.; Mikatarian, Ron; Soares, Carlos; Schmidtke, Gerhard; Erhardt, Christian
2016-01-01
SOLAR is a European Space Agency (ESA) payload deployed on the International Space Station (ISS) and located on the Columbus Laboratory. It is located on the Columbus External Payload Facility in a zenith location. The objective of the SOLAR payload is to study the Sun. The SOLAR payload consists of three instruments that allow for measurement of virtually the entire electromagnetic spectrum (17 nm to 2900 nm). The three payload instruments are SOVIM (SOlar Variable and Irradiance Monitor), SOLSPEC (SOLar SPECctral Irradiance measurements), and SolACES (SOLar Auto-Calibrating Extreme UV/UV Spectrophotometers).
Fast Time and Space Parallel Algorithms for Solution of Parabolic Partial Differential Equations
NASA Technical Reports Server (NTRS)
Fijany, Amir
1993-01-01
In this paper, fast time- and space-parallel algorithms for the solution of linear parabolic PDEs are developed. It is shown that the seemingly strictly serial iterations of the time-stepping procedure for the solution of the problem can be completely decoupled.
Modulated heat pulse propagation and partial transport barriers in chaotic magnetic fields
del-Castillo-Negrete, Diego; Blazevski, Daniel
2016-04-01
Direct numerical simulations of the time-dependent parallel heat transport equation modeling heat pulses driven by power modulation in 3-dimensional chaotic magnetic fields are presented. The numerical method is based on the Fourier formulation of a Lagrangian-Green's function method that provides an accurate and efficient technique for the solution of the parallel heat transport equation in the presence of harmonic power modulation. The numerical results presented provide conclusive evidence that even in the absence of magnetic flux surfaces, chaotic magnetic field configurations with intermediate levels of stochasticity exhibit transport barriers to modulated heat pulse propagation. In particular, high-order islands and remnants of destroyed flux surfaces (Cantori) act as partial barriers that slow down or even stop the propagation of heat waves at places where the magnetic field connection length exhibits a strong gradient. The key parameter is $\gamma=\sqrt{\omega/(2\chi_\parallel)}$, which determines the length scale, $1/\gamma$, of the heat wave penetration along the magnetic field line. For large perturbation frequencies, $\omega \gg 1$, or small parallel thermal conductivities, $\chi_\parallel \ll 1$, parallel heat transport is strongly damped and the magnetic field partial barriers act as robust barriers where the heat wave amplitude vanishes and its phase speed slows down to a halt. On the other hand, in the limit of small $\gamma$, parallel heat transport is largely unimpeded, global transport is observed, and the radial amplitude and phase speed of the heat wave remain finite. Results on modulated heat pulse propagation in fully stochastic fields and across magnetic islands are also presented.
In qualitative agreement with recent experiments in LHD and DIII-D, it is shown that the elliptic (O) and hyperbolic (X) points of magnetic islands have a direct impact on the spatio-temporal dependence of the amplitude and the time delay of modulated heat pulses.
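The role of the key parameter can be made concrete with a small helper, assuming consistent (dimensionless) units for the modulation frequency and parallel conductivity:

```python
import math

def heat_wave_penetration_length(omega, chi_par):
    """Parallel penetration length 1/gamma of a modulated heat pulse,
    with gamma = sqrt(omega / (2 * chi_parallel)) as in the text: faster
    modulation or weaker parallel conduction gives shorter penetration."""
    gamma = math.sqrt(omega / (2.0 * chi_par))
    return 1.0 / gamma
```

Quadrupling the modulation frequency halves the penetration length, which is the damping mechanism that lets the partial barriers stop high-frequency heat waves.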
Some fast elliptic solvers on parallel architectures and their complexities
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Y.
1989-01-01
The discretization of separable elliptic partial differential equations leads to linear systems with special block tridiagonal matrices. Several methods are known to solve these systems, the most general of which is the Block Cyclic Reduction (BCR) algorithm, which handles equations with nonconstant coefficients. A method was recently proposed to parallelize and vectorize BCR. In this paper, the mapping of BCR on distributed memory architectures is discussed, and its complexity is compared with that of other approaches, including the Alternating-Direction method. A fast parallel solver is also described, based on an explicit formula for the solution, which has parallel computational complexity lower than that of parallel BCR.
Solution of partial differential equations on vector and parallel computers
NASA Technical Reports Server (NTRS)
Ortega, J. M.; Voigt, R. G.
1985-01-01
The present status of numerical methods for partial differential equations on vector and parallel computers is reviewed. The relevant aspects of these computers are discussed, and a brief review of their development is included, with particular attention paid to those characteristics that influence algorithm selection. Both direct and iterative methods are given for elliptic equations, as well as explicit and implicit methods for initial boundary value problems. The intent is to point out attractive methods, as well as areas where this class of computer architecture cannot be fully utilized because of either hardware restrictions or the lack of adequate algorithms. Application areas utilizing these computers are briefly discussed.
Wald, Ingo; Ize, Santiago
2015-07-28
Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
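As a rough sketch of the two-phase scheme described above, with hypothetical names, Python threads standing in for the processors, and a 1-D grid of intervals standing in for the grid: phase 1 has each worker decide which portions at least partially bound each object in its set, and phase 2 has each worker populate the single portion it owns.

```python
from concurrent.futures import ThreadPoolExecutor

n = 4  # number of "processors"
portions = [(j / n, (j + 1) / n) for j in range(n)]      # n distinct grid portions of [0, 1)
objects = [(0.05, 0.10), (0.20, 0.55), (0.60, 0.62), (0.30, 0.80),
           (0.70, 0.95), (0.12, 0.18), (0.40, 0.45), (0.88, 0.99)]

def phase1(obj_set):
    """Each worker determines which portions at least partially bound its objects."""
    pairs = []
    for obj in obj_set:
        for j, (p_lo, p_hi) in enumerate(portions):
            if obj[0] < p_hi and obj[1] > p_lo:          # interval overlap test
                pairs.append((j, obj))
    return pairs

def phase2(args):
    """Each worker populates the one grid portion it owns."""
    j, pairs = args
    return [obj for (jj, obj) in pairs if jj == j]

with ThreadPoolExecutor(n) as ex:
    sets = [objects[i::n] for i in range(n)]             # n distinct sets of objects
    pairs = [p for chunk in ex.map(phase1, sets) for p in chunk]
    grid = list(ex.map(phase2, [(j, pairs) for j in range(n)]))
```

An object spanning several portions, such as (0.20, 0.55), is populated into every portion that partially bounds it, which is the behavior the claim describes.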
Roth, Sebastian; Fox, Henrik; Fuchs, Uwe; Schulz, Uwe; Costard-Jäckle, Angelika; Gummert, Jan F; Horstkotte, Dieter; Oldenburg, Olaf; Bitter, Thomas
2018-05-01
Determination of cardiac output (CO) is essential in the diagnosis and management of heart failure (HF). The gold standard for obtaining CO is invasive assessment via thermodilution (TD). Noninvasive pulse contour analysis (NPCA) has been proposed as a new method of CO determination; however, it has not yet been validated in HF, which was the aim of the present study. Patients with chronic-stable HF and reduced left ventricular ejection fraction (LVEF ≤ 45%; HF-REF) underwent right heart catheterization including TD. NPCA using the CNAP Monitor (V5.2.14, CNSystems Medizintechnik AG) was performed simultaneously. Three standardized TD measurements were compared with simultaneous auto-calibrated NPCA CO measurements. In total, 84 consecutive HF-REF patients were enrolled prospectively in this study. In 4 patients (5%), TD was not successful, and for 22 patients (26%, 18 with left ventricular assist device), no NPCA signal could be obtained. For the remaining 58 patients, Bland-Altman analysis revealed a mean bias of +1.92 L/min (limits of agreement ± 2.28 L/min, percentage error 47.4%) for CO. With decreasing cardiac index, as determined by the gold standard of TD, there was an increasing gap between CO values obtained by TD and NPCA (r = -0.75, p < 0.001), resulting in a systematic overestimation of CO in more severe HF. TD-CI classified 52 (90%) patients as having a reduced CI (< 2.5 L/min/m²), while NPCA documented a reduced CI in only 18 patients (31%). In HF-REF patients, auto-calibrated NPCA systematically overestimates CO as cardiac function decreases. Therefore, to date, NPCA cannot be recommended in this cohort.
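The agreement statistics quoted above (bias, limits of agreement, percentage error) follow standard Bland-Altman arithmetic; a minimal sketch, with hypothetical function names:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias, 95% limits of agreement, and percentage error (relative to mean CO)
    for two paired series of cardiac-output readings; bias is mean(a - b)."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b
    bias = diff.mean()                                  # systematic offset
    loa = 1.96 * diff.std(ddof=1)                       # half-width of limits of agreement
    pct_error = 100.0 * loa / ((a + b) / 2.0).mean()    # Critchley-style percentage error
    return bias, loa, pct_error
```

For instance, an NPCA series sitting a constant 2 L/min above TD yields a bias of +2.0 with zero scatter, so the limits of agreement collapse to the bias itself.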
Tegeler, Charles H; Tegeler, Catherine L; Cook, Jared F; Lee, Sung W; Pajewski, Nicholas M
2015-06-01
Increased amplitudes in high-frequency brain electrical activity are reported with menopausal hot flashes. We report outcomes associated with the use of High-resolution, relational, resonance-based, electroencephalic mirroring--a noninvasive neurotechnology for autocalibration of neural oscillations--by women with perimenopausal and postmenopausal hot flashes. Twelve women with hot flashes (median age, 56 y; range, 46-69 y) underwent a median of 13 (range, 8-23) intervention sessions for a median of 9.5 days (range, 4-32). This intervention uses algorithmic analysis of brain electrical activity and near real-time translation of brain frequencies into variable tones for acoustic stimulation. Hot flash frequency and severity were recorded by daily diary. Primary outcomes included hot flash severity score, sleep, and depressive symptoms. High-frequency amplitudes (23-36 Hz) from bilateral temporal scalp recordings were measured at baseline and during serial sessions. Self-reported symptom inventories for sleep and depressive symptoms were collected. The median change in hot flash severity score was -0.97 (range, -3.00 to 1.00; P = 0.015). Sleep and depression scores decreased by -8.5 points (range, -20 to -1; P = 0.022) and -5.5 points (range, -32 to 8; P = 0.015), respectively. The median sum of amplitudes for the right and left temporal high-frequency brain electrical activity was 8.44 μV (range, 6.27-16.66) at baseline and decreased by a median of -2.96 μV (range, -11.05 to -0.65; P = 0.0005) by the final session. Hot flash frequency and severity, symptoms of insomnia and depression, and temporal high-frequency brain electrical activity decrease after High-resolution, relational, resonance-based, electroencephalic mirroring. Larger controlled trials with longer follow-up are warranted.
Autocalibration of multiprojector CAVE-like immersive environments.
Sajadi, Behzad; Majumder, Aditi
2012-03-01
In this paper, we present the first method for the geometric autocalibration of multiple projectors on a set of CAVE-like immersive display surfaces including truncated domes and 4 or 5-wall CAVEs (three side walls, floor, and/or ceiling). All such surfaces can be categorized as swept surfaces and multiple projectors can be registered on them using a single uncalibrated camera without using any physical markers on the surface. Our method can also handle nonlinear distortion in the projectors, common in compact setups where a short throw lens is mounted on each projector. Further, when the whole swept surface is not visible from a single camera view, we can register the projectors using multiple pan and tilted views of the same camera. Thus, our method scales well with different size and resolution of the display. Since we recover the 3D shape of the display, we can achieve registration that is correct from any arbitrary viewpoint appropriate for head-tracked single-user virtual reality systems. We can also achieve wallpapered registration, more appropriate for multiuser collaborative explorations. Though much more immersive than common surfaces like planes and cylinders, general swept surfaces are used today only for niche display environments. Even the more popular 4 or 5-wall CAVE is treated as a piecewise planar surface for calibration purposes and hence projectors are not allowed to be overlapped across the corners. Our method opens up the possibility of using such swept surfaces to create more immersive VR systems without compromising the simplicity of having a completely automatic calibration technique. Such calibration allows completely arbitrary positioning of the projectors in a 5-wall CAVE, without respecting the corners.
NASA Astrophysics Data System (ADS)
Zhang, Y. Y.; Shao, Q. X.; Ye, A. Z.; Xing, H. T.; Xia, J.
2016-02-01
Integrated water system modeling is a feasible approach to understanding severe water crises in the world and promoting the implementation of integrated river basin management. In this study, a classic hydrological model (the time variant gain model: TVGM) was extended to an integrated water system model by coupling multiple water-related processes in hydrology, biogeochemistry, water quality, and ecology, and by considering the interference of human activities. A parameter analysis tool, which included sensitivity analysis, autocalibration and model performance evaluation, was developed to improve modeling efficiency. To demonstrate the model's performance, the Shaying River catchment, which is the largest highly regulated and heavily polluted tributary of the Huai River basin in China, was selected as the case study area. The model performance was evaluated on the key water-related components including runoff, water quality, diffuse pollution load (or nonpoint sources) and crop yield. Results showed that our proposed model simulated most components reasonably well. The simulated daily runoff at most regulated and less-regulated stations matched well with the observations; the average correlation coefficient and Nash-Sutcliffe efficiency were 0.85 and 0.70, respectively. Both the simulated low and high flows at most stations were improved when dam regulation was considered. The daily ammonium-nitrogen (NH4-N) concentration was also well captured, with an average correlation coefficient of 0.67. Furthermore, the diffuse source load of NH4-N and the corn yield were reasonably simulated at the administrative region scale. This integrated water system model is expected to improve simulation performance as it is extended with more model functionalities, and to provide a scientific basis for implementation in integrated river basin management.
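The goodness-of-fit measures quoted above (correlation coefficient and Nash-Sutcliffe efficiency) are standard; a small sketch of both, with hypothetical function names:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def correlation(obs, sim):
    """Pearson correlation coefficient between observed and simulated series."""
    return float(np.corrcoef(obs, sim)[0, 1])
```

A perfect simulation gives NSE = 1, while predicting the observed mean everywhere gives NSE = 0, which is why an average NSE of 0.70 for daily runoff at regulated stations is a reasonably strong result.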
Dynamic grid refinement for partial differential equations on parallel computers
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.
NASA Astrophysics Data System (ADS)
Sheikhnejad, Yahya; Hosseini, Reza; Saffar Avval, Majid
2017-02-01
In this study, steady-state laminar ferroconvection through a circular horizontal tube partially filled with porous media under constant heat flux is experimentally investigated. Transverse magnetic fields were applied to the ferrofluid flow by two fixed parallel magnet bars positioned at a certain distance from the beginning of the test section. The results show promising enhancement of heat transfer as a consequence of the partially filled porous media and the magnetic field: improvements of up to 2.2- and 1.4-fold in the heat transfer coefficient were observed, respectively. It was found that the presence of both porous media and magnetic field simultaneously can improve heat transfer by up to 2.4-fold; the porous media, of course, plays the major role in this configuration. The magnetic field and porous media also introduce a higher pressure loss along the pipe, to which, again, the porous media contributes more than the magnetic field.
On the parallel solution of parabolic equations
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Youcef
1989-01-01
Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Pade and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
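The partial-fraction idea can be sketched in a few lines: write a rational approximation R(z) ≈ exp(z) as a sum of simple poles, and each resolvent solve then becomes an independent (hence parallelizable) linear system. The sketch below uses the low-order (0,2) Padé approximant 1/(1 - z + z²/2) purely for illustration, not the paper's actual Padé or Chebyshev approximants.

```python
import numpy as np

def expm_pf(A, v, h):
    """Approximate exp(h*A) @ v via partial fractions of the (0,2) Pade
    approximant R(z) = 1/q(z), q(z) = 1 - z + z**2/2 (illustrative only).
    Since 1/q(z) = sum_j (1/q'(p_j)) / (z - p_j) over the simple roots p_j,
    each resolvent solve is independent and could run on its own processor."""
    poles = np.roots([0.5, -1.0, 1.0])        # roots of q written as 0.5 z^2 - z + 1
    n = len(v)
    acc = np.zeros(n, dtype=complex)
    for p in poles:                           # independent solves -> parallelizable
        residue = 1.0 / (p - 1.0)             # 1/q'(p), with q'(z) = z - 1
        acc += residue * np.linalg.solve(h * A - p * np.eye(n), v)
    return acc.real                           # conjugate poles cancel the imaginary part

# 1-D Laplacian as a model parabolic operator
n = 5
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
v = np.linspace(1.0, 2.0, n)
approx = expm_pf(A, v, h=0.01)
```

The approximant is second-order accurate, so for a small step h the result should agree closely with the exact matrix exponential.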
Implementation of DFT application on ternary optical computer
NASA Astrophysics Data System (ADS)
Junjie, Peng; Youyi, Fu; Xiaofeng, Zhang; Shuai, Kong; Xinyu, Wei
2018-03-01
Owing to its characteristics of a huge number of data bits and low energy consumption, optical computing may be used in applications such as the DFT, which require a large amount of computation and can be implemented in parallel. Accordingly, DFT implementation methods in full parallel as well as in partial parallel are presented. Based on the resources of the ternary optical computer (TOC), extensive experiments were carried out. Experimental results show that the proposed schemes are correct and feasible. They provide a foundation for further exploration of applications on the TOC that require a large amount of computation and can be processed in parallel.
Newton-like methods for Navier-Stokes solution
NASA Astrophysics Data System (ADS)
Qin, N.; Xu, X.; Richards, B. E.
1992-12-01
The paper reports on Newton-like methods called SFDN-alpha-GMRES and SQN-alpha-GMRES methods that have been devised and proven as powerful schemes for large nonlinear problems typical of viscous compressible Navier-Stokes solutions. They can be applied using a partially converged solution from a conventional explicit or approximate implicit method. Developments have included the efficient parallelization of the schemes on a distributed memory parallel computer. The methods are illustrated using a RISC workstation and a transputer parallel system respectively to solve a hypersonic vortical flow.
NASA Astrophysics Data System (ADS)
Herrera, I.; Herrera, G. S.
2015-12-01
Most geophysical systems are macroscopic physical systems. The behavior prediction of such systems is carried out by means of computational models whose basic models are partial differential equations (PDEs) [1]. Due to the enormous size of the discretized version of such PDEs it is necessary to apply highly parallelized super-computers. For them, at present, the most efficient software is based on non-overlapping domain decomposition methods (DDM). However, a limiting feature of the present state-of-the-art techniques is due to the kind of discretizations used in them. Recently, I. Herrera and co-workers, using 'non-overlapping discretizations', have produced the DVS-Software which overcomes this limitation [2]. The DVS-Software can be applied to a great variety of geophysical problems and achieves very high parallel efficiencies (90% or so [3]). It is therefore very suitable for effectively applying the most advanced parallel supercomputers available at present. In a parallel talk at this AGU Fall Meeting, Graciela Herrera Z. will present how this software is being applied to advance MODFLOW. Key Words: Parallel Software for Geophysics, High Performance Computing, HPC, Parallel Computing, Domain Decomposition Methods (DDM). REFERENCES [1] Herrera, Ismael and George F. Pinder, "Mathematical Modelling in Science and Engineering: An Axiomatic Approach", John Wiley, 243p., 2012. [2] Herrera, I., de la Cruz, L.M. and Rosas-Medina, A., "Non-Overlapping Discretization Methods for Partial Differential Equations", NUMER METH PART D E, 30: 1427-1454, 2014, DOI 10.1002/num.21852. (Open source) [3] Herrera, I., & Contreras, Iván, "An Innovative Tool for Effectively Applying Highly Parallelized Software to Problems of Elasticity", Geofísica Internacional, 2015 (In press)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Earl, Christopher; Might, Matthew; Bagusetty, Abhishek
This study presents Nebo, a declarative domain-specific language embedded in C++ for discretizing partial differential equations for transport phenomena on multiple architectures. Application programmers use Nebo to write code that appears sequential but can be run in parallel, without editing the code. Currently Nebo supports single-thread execution, multi-thread execution, and many-core (GPU-based) execution. With single-thread execution, Nebo performs on par with code written by domain experts. With multi-thread execution, Nebo can linearly scale (with roughly 90% efficiency) up to 12 cores, compared to its single-thread execution. Moreover, Nebo’s many-core execution can be over 140x faster than its single-thread execution.
2016-01-26
Research in Parallel Algorithms and Software for Computational Aerosciences
NASA Technical Reports Server (NTRS)
Domel, Neal D.
1996-01-01
Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.
Parallel-In-Time For Moving Meshes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falgout, R. D.; Manteuffel, T. A.; Southworth, B.
2016-02-04
With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.
Holographic Associative Memory Employing Phase Conjugation
NASA Astrophysics Data System (ADS)
Soffer, B. H.; Marom, E.; Owechko, Y.; Dunning, G.
1986-12-01
The principle of information retrieval by association has been suggested as a basis for parallel computing and as the process by which human memory functions.1 Various associative processors have been proposed that use electronic or optical means. Optical schemes,2-7 in particular, those based on holographic principles,3,6,7 are well suited to associative processing because of their high parallelism and information throughput. Previous workers8 demonstrated that holographically stored images can be recalled by using relatively complicated reference images but did not utilize nonlinear feedback to reduce the large cross talk that results when multiple objects are stored and a partial or distorted input is used for retrieval. These earlier approaches were limited in their ability to reconstruct the output object faithfully from a partial input.
NASA Technical Reports Server (NTRS)
Hunt, L. R.; Villarreal, Ramiro
1987-01-01
System theorists understand that the same mathematical objects which determine controllability for nonlinear control systems of ordinary differential equations (ODEs) also determine hypoellipticity for linear partial differential equations (PDEs). Moreover, almost any study of ODE systems begins with linear systems. It is remarkable that Hormander's paper on hypoellipticity of second-order linear PDEs starts with equations due to Kolmogorov, which are shown to be analogous to the linear PDEs. Eigenvalue placement by state feedback for a controllable linear system can be paralleled for a Kolmogorov equation if an appropriate type of feedback is introduced. Results concerning transformations of nonlinear systems to linear systems are similar to results for transforming a linear PDE to a Kolmogorov equation.
Parallel Element Agglomeration Algebraic Multigrid and Upscaling Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barker, Andrew T.; Benson, Thomas R.; Lee, Chak Shing
ParELAG is a parallel C++ library for numerical upscaling of finite element discretizations and element-based algebraic multigrid solvers. It provides optimal complexity algorithms to build multilevel hierarchies and solvers that can be used for solving a wide class of partial differential equations (elliptic, hyperbolic, saddle point problems) on general unstructured meshes. Additionally, a novel multilevel solver for saddle point problems with divergence constraint is implemented.
NASA Astrophysics Data System (ADS)
Dan, C.; Morar, R.
2017-05-01
Working methods for on-site testing of insulation: gas chromatography (using the TFGA-P200 chromatograph), and electrical measurement of partial discharge levels using the MPD600 digital detection, recording, analysis and partial discharge acquisition system. First, between 2000 and 2015, chromatographic analyses of the electrical insulating environments were performed on: 102 current transformers, 110 kV, in operation in 110/20 kV substations; and 38 voltage transformers, 110 kV, also in operation in 110/20 kV substations. Then, on-site electrical measurements of partial discharge inside instrument transformers in power substations were made (starting in 2009 and collecting data over a 7-year period until 2015) according to the provisions of standard EN 61869-1:2007, "Instrument transformers. General requirements", applying the type A partial discharge test procedure assimilated to it and using the rated 110 kV distribution-grid voltage as the test voltage. Given the results of the two parallel measurements, comprising the amount of the gas specific to this type of failure (H2) and the quantitative partial discharge level, a clear dependence was expected between the quantity of partial discharges and the type and amount of gas dissolved in the oil inside equipment affected by this type of defect. For the population of instrument transformers subject to the two parallel measurements, the dependency between QIEC (apparent charge) and H2 (the amount of hydrogen gas dissolved within their insulating environment) represents a finite assemblage situated between two limits developed on an empirical basis.
A 64-channel ultra-low power system-on-chip for local field and action potentials recording
NASA Astrophysics Data System (ADS)
Rodríguez-Pérez, Alberto; Delgado-Restituto, Manuel; Darie, Angela; Soto-Sánchez, Cristina; Fernández-Jover, Eduardo; Rodríguez-Vázquez, Ángel
2015-06-01
This paper reports an integrated 64-channel neural recording sensor. Neural signals are acquired, filtered, digitized and compressed in the channels. Additionally, each channel implements an auto-calibration mechanism which configures the transfer characteristics of the recording site. The system has two transmission modes; in one case the information captured by the channels is sent as uncompressed raw data; in the other, feature vectors extracted from the detected neural spikes are released. Data streams coming from the channels are serialized by an embedded digital processor. Experimental results, including in vivo measurements, show that the power consumption of the complete system is lower than 330μW.
Analysis and Modeling of Parallel Photovoltaic Systems under Partial Shading Conditions
NASA Astrophysics Data System (ADS)
Buddala, Santhoshi Snigdha
Since the industrial revolution, fossil fuels like petroleum, coal, oil, natural gas and other non-renewable energy sources have been used as the primary energy source. The consumption of fossil fuels releases various harmful gases into the atmosphere as byproducts, which are hazardous in nature; they tend to deplete the protective atmospheric layers and affect the overall environmental balance. Fossil fuels are also finite resources, and their rapid depletion has prompted the need to investigate alternative sources of energy, called renewable energy. One such promising source of renewable energy is solar/photovoltaic energy. This work focuses on investigating a new solar array architecture with solar cells connected in a parallel configuration. Retaining the structural simplicity of the parallel architecture, a theoretical small-signal model of the solar cell is proposed and modeled to analyze the variations in the module parameters when subjected to partial shading conditions. Simulations were run in SPICE to validate the model implemented in Matlab. The voltage limitations of the proposed architecture are addressed by adopting a simple dc-dc boost converter and evaluating the performance of the architecture in terms of efficiency, comparing it with the traditional architectures. SPICE simulations are used to compare the architectures and identify the best one in terms of power conversion efficiency under partial shading conditions.
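A minimal sketch of the parallel-configuration behavior under partial shading, assuming a simple single-diode cell model with series and shunt resistance neglected and hypothetical parameter values (not the thesis's SPICE model): cells wired in parallel share one terminal voltage, their currents add, and shading scales each cell's photocurrent.

```python
import numpy as np

def cell_current(v, irradiance, iph_stc=3.0, i0=1e-9, n_vt=0.0275):
    """Single-diode cell model, resistances neglected (a sketch); the
    photocurrent scales with normalized irradiance (1.0 = unshaded)."""
    return irradiance * iph_stc - i0 * np.expm1(v / n_vt)

def parallel_mpp(shading):
    """Maximum power point of cells in parallel: currents sum at a shared voltage."""
    v = np.linspace(0.0, 0.6, 2000)
    i_total = sum(cell_current(v, g) for g in shading)
    p = v * i_total
    k = int(np.argmax(p))
    return v[k], p[k]

v_full, p_full = parallel_mpp([1.0, 1.0, 1.0, 1.0])
v_shade, p_shade = parallel_mpp([1.0, 1.0, 0.5, 0.2])
```

In this parallel configuration the shaded-array power roughly tracks the total irradiance (2.7/4 of the unshaded insolation here) rather than collapsing to the worst cell, which is the robustness the architecture is meant to preserve.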
NASA Astrophysics Data System (ADS)
Shoemaker, C. A.; Pang, M.; Akhtar, T.; Bindel, D.
2016-12-01
New parallel surrogate global optimization algorithms are developed and applied to objective functions that are expensive simulations (possibly with multiple local minima). The algorithms can be applied to most geophysical simulations, including those with nonlinear partial differential equations. The optimization does not require simulations be parallelized. Asynchronous (and synchronous) parallel execution is available in the optimization toolbox "pySOT". The parallel algorithms are modified from serial to eliminate fine grained parallelism. The optimization is computed with open source software pySOT, a Surrogate Global Optimization Toolbox that allows user to pick the type of surrogate (or ensembles), the search procedure on surrogate, and the type of parallelism (synchronous or asynchronous). pySOT also allows the user to develop new algorithms by modifying parts of the code. In the applications here, the objective function takes up to 30 minutes for one simulation, and serial optimization can take over 200 hours. Results from Yellowstone (NSF) and NCSS (Singapore) supercomputers are given for groundwater contaminant hydrology simulations with applications to model parameter estimation and decontamination management. All results are compared with alternatives. The first results are for optimization of pumping at many wells to reduce cost for decontamination of groundwater at a superfund site. The optimization runs with up to 128 processors. Superlinear speed up is obtained for up to 16 processors, and efficiency with 64 processors is over 80%. Each evaluation of the objective function requires the solution of nonlinear partial differential equations to describe the impact of spatially distributed pumping and model parameters on model predictions for the spatial and temporal distribution of groundwater contaminants. The second application uses an asynchronous parallel global optimization for groundwater quality model calibration. 
The time for a single objective function evaluation varies unpredictably, so efficiency is improved with asynchronous parallel calculations to improve load balancing. The third application (done at NCSS) incorporates new global surrogate multi-objective parallel search algorithms into pySOT and applies it to a large watershed calibration problem.
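The surrogate loop itself can be sketched compactly. The code below is an illustration of the general technique (a cubic RBF surrogate with candidate-point search), not pySOT's actual algorithm; in pySOT the expensive evaluations would be dispatched to parallel workers, synchronously or asynchronously, and the surrogate refit as results arrive.

```python
import numpy as np

rng = np.random.default_rng(7)

def expensive(x):                      # stand-in for an expensive simulation
    return float(np.sum(x ** 2))

def fit_rbf(X, y):
    """Cubic RBF interpolant s(z) = sum_i w_i * ||z - x_i||**3."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.linalg.lstsq(D ** 3, y, rcond=None)[0]

def eval_rbf(X, w, Z):
    D = np.linalg.norm(Z[:, None, :] - X[None, :, :], axis=-1)
    return (D ** 3) @ w

lo, hi, dim = -2.0, 2.0, 2
X = rng.uniform(lo, hi, (15, dim))     # initial space-filling sample
y = np.array([expensive(x) for x in X])

for _ in range(25):
    w = fit_rbf(X, y)
    incumbent = X[np.argmin(y)]
    # candidates: local perturbations of the incumbent plus global random points
    cand = np.vstack([incumbent + 0.2 * rng.standard_normal((50, dim)),
                      rng.uniform(lo, hi, (50, dim))]).clip(lo, hi)
    x_new = cand[np.argmin(eval_rbf(X, w, cand))]   # surrogate picks the next point
    X = np.vstack([X, x_new])
    y = np.append(y, expensive(x_new))
```

Each pass costs one expensive evaluation plus a cheap surrogate fit and candidate ranking, which is why the approach pays off when a single simulation takes minutes.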
Scheduling for Locality in Shared-Memory Multiprocessors
1993-05-01
Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy. The thesis examines trends in shared-memory multiprocessor architecture and their impact on parallel program performance, explains the implications of these trends for popular parallel programming models, and proposes system software for decomposition and scheduling algorithms. Subject terms: shared-memory multiprocessors; architecture trends; loop scheduling.
Power/Performance Trade-offs of Small Batched LU Based Solvers on GPUs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villa, Oreste; Fatica, Massimiliano; Gawande, Nitin A.
In this paper we propose and analyze a set of batched linear solvers for small matrices on Graphic Processing Units (GPUs), evaluating the various alternatives depending on the size of the systems to solve. We discuss three different solutions that operate with different levels of parallelization and GPU features. The first, exploiting the CUBLAS library, manages matrices of size up to 32x32 and employs Warp-level (one matrix, one Warp) parallelism and shared memory. The second works at Thread-block-level parallelism (one matrix, one Thread-block), still exploiting shared memory but managing matrices up to 76x76. The third is Thread-level parallel (one matrix, one thread) and can reach sizes up to 128x128, but it does not exploit shared memory and only relies on the high memory bandwidth of the GPU. The first and second solutions only support partial pivoting; the third one easily supports partial and full pivoting, making it attractive for problems that require greater numerical stability. We analyze the trade-offs in terms of performance and power consumption as a function of the size of the linear systems that are simultaneously solved. We execute the three implementations on a Tesla M2090 (Fermi) and on a Tesla K20 (Kepler).
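A batched LU with partial pivoting can be sketched in NumPy by vectorizing every factorization step across the batch, analogous to how the GPU variants assign one Warp, Thread-block, or thread per matrix. This is a sketch of the technique, not the paper's CUDA code:

```python
import numpy as np

def batched_lu_solve(A, b):
    """Solve A[i] @ x[i] = b[i] for a batch of small systems using in-place
    LU with partial pivoting; every step operates on all matrices at once."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    m, n, _ = A.shape
    idx = np.arange(m)
    for k in range(n):
        # partial pivoting: per matrix, largest |entry| in column k on/below the diagonal
        p = k + np.argmax(np.abs(A[:, k:, k]), axis=1)
        A[idx, k], A[idx, p] = A[idx, p].copy(), A[idx, k].copy()
        b[idx, k], b[idx, p] = b[idx, p].copy(), b[idx, k].copy()
        # eliminate entries below the pivot in every matrix simultaneously
        factors = A[:, k + 1:, k] / A[:, k, k][:, None]
        A[:, k + 1:, :] -= factors[:, :, None] * A[:, k:k + 1, :]
        b[:, k + 1:] -= factors * b[:, k:k + 1]
    # back substitution, again batched
    x = np.zeros_like(b)
    for k in range(n - 1, -1, -1):
        dots = np.einsum('ij,ij->i', A[:, k, k + 1:], x[:, k + 1:])
        x[:, k] = (b[:, k] - dots) / A[:, k, k]
    return x

# a well-conditioned batch of 8 systems of size 5x5
rng = np.random.default_rng(3)
A = rng.standard_normal((8, 5, 5)) + 5.0 * np.eye(5)
b = rng.standard_normal((8, 5))
x = batched_lu_solve(A, b)
```

The loop count depends only on the matrix size, never the batch size, which is the property the GPU implementations exploit.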
Associative Memory In A Phase Conjugate Resonator Cavity Utilizing A Hologram
NASA Astrophysics Data System (ADS)
Owechko, Y.; Marom, E.; Soffer, B. H.; Dunning, G.
1987-01-01
The principle of information retrieval by association has been suggested as a basis for parallel computing and as the process by which human memory functions [1]. Various associative processors have been proposed that use electronic or optical means. Optical schemes [2-7], in particular those based on holographic principles [3,6,7], are well suited to associative processing because of their high parallelism and information throughput. Previous workers [8] demonstrated that holographically stored images can be recalled by using relatively complicated reference images, but did not utilize nonlinear feedback to reduce the large cross talk that results when multiple objects are stored and a partial or distorted input is used for retrieval. These earlier approaches were limited in their ability to reconstruct the output object faithfully from a partial input.
Visualization and Tracking of Parallel CFD Simulations
NASA Technical Reports Server (NTRS)
Vaziri, Arsi; Kremenetsky, Mark
1995-01-01
We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS), runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation running AVS is handled by CM/AVS. Partitioning of the visualization task between the CM-5 and the workstation can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered by programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate, store, visualize' post-processing approach.
Development and Application of a Process-based River System Model at a Continental Scale
NASA Astrophysics Data System (ADS)
Kim, S. S. H.; Dutta, D.; Vaze, J.; Hughes, J. D.; Yang, A.; Teng, J.
2014-12-01
Existing global and continental scale river models, mainly designed for integration with global climate models, have very coarse spatial resolutions and lack many important hydrological processes, such as overbank flow, irrigation diversion, and groundwater seepage/recharge, which operate at a much finer resolution. These models are therefore not suitable for producing streamflow forecasts at fine spatial resolution and water accounts at sub-catchment levels, which are important for water resources planning and management at regional and national scales. A large-scale river system model has been developed and implemented for water accounting in Australia as part of the Water Information Research and Development Alliance between Australia's Bureau of Meteorology (BoM) and CSIRO. The model, developed using a node-link architecture, includes all major hydrological processes, anthropogenic water utilisation, and storage routing that influence streamflow in both regulated and unregulated river systems. It includes an irrigation model to compute water diversion for irrigation use and the associated fluxes and stores, and a storage-based floodplain inundation model to compute overbank flow from river to floodplain and the associated floodplain fluxes and stores. An auto-calibration tool has been built within the modelling system to automatically calibrate the model in large river systems using the Shuffled Complex Evolution optimiser and user-defined objective functions. The auto-calibration tool makes the model computationally efficient and practical for large basin applications. The model has been implemented in several large basins in Australia, including the Murray-Darling Basin, covering more than 2 million km². The results of calibration and validation of the model show highly satisfactory performance. The model has been operationalised in BoM for producing various fluxes and stores for national water accounting.
This paper introduces this newly developed river system model describing the conceptual hydrological framework, methods used for representing different hydrological processes in the model and the results and evaluation of the model performance. The operational implementation of the model for water accounting is discussed.
A Co-Adaptive Brain-Computer Interface for End Users with Severe Motor Impairment
Faller, Josef; Scherer, Reinhold; Costa, Ursula; Opisso, Eloy; Medina, Josep; Müller-Putz, Gernot R.
2014-01-01
Co-adaptive training paradigms for event-related desynchronization (ERD) based brain-computer interfaces (BCI) have proven effective for healthy users. As yet, it is not clear whether co-adaptive training paradigms can also benefit users with severe motor impairment. The primary goal of our paper was to evaluate a novel cue-guided, co-adaptive BCI training paradigm with severely impaired volunteers. The co-adaptive BCI supports a non-control state, which is an important step toward intuitive, self-paced control. A secondary aim was to have the same participants operate a specifically designed self-paced BCI training paradigm based on the auto-calibrated classifier. The co-adaptive BCI analyzed the electroencephalogram from three bipolar derivations (C3, Cz, and C4) online, while the 22 end users alternately performed right hand movement imagery (MI), left hand MI, and relaxing with eyes open (non-control state). After less than five minutes, the BCI auto-calibrated and proceeded to provide visual feedback for the MI task that could be classified better against the non-control state. The BCI continued to recalibrate regularly. In every calibration step, the system performed trial-based outlier rejection and trained a linear discriminant analysis classifier based on one auto-selected logarithmic band-power feature. In 24 minutes of training, the co-adaptive BCI worked significantly (p = 0.01) better than chance for 18 of 22 end users. The self-paced BCI training paradigm worked significantly (p = 0.01) better than chance in 11 of 20 end users. The presented co-adaptive BCI complements existing approaches in that it supports a non-control state, requires very little setup time, requires no BCI expert, and works online based on only two electrodes. The preliminary results from the self-paced BCI paradigm compare favorably to previous studies, and the collected data will allow further improvement of self-paced BCI systems for disabled users. PMID:25014055
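The per-step calibration described above (one logarithmic band-power feature fed to a linear discriminant) can be sketched as follows; the filter order, band edges, and names are illustrative assumptions, and the outlier rejection and feature auto-selection are omitted:

```python
import numpy as np
from scipy.signal import butter, lfilter

def log_bandpower(x, fs, lo, hi):
    """Logarithmic band-power of one EEG channel: band-pass filter,
    square, average over the trial, then take the log."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    y = lfilter(b, a, x)
    return np.log(np.mean(y ** 2))

class LDA1D:
    """Linear discriminant analysis reduced to the single-feature case:
    with pooled variance, the rule is nearest class mean."""
    def fit(self, feats, labels):
        feats, labels = np.asarray(feats), np.asarray(labels)
        self.m0 = feats[labels == 0].mean()
        self.m1 = feats[labels == 1].mean()
        return self
    def predict(self, feats):
        feats = np.asarray(feats)
        return (np.abs(feats - self.m1) < np.abs(feats - self.m0)).astype(int)
```

A recalibration step would recompute the feature for all trials collected so far and refit the classifier.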
Electric currents and voltage drops along auroral field lines
NASA Technical Reports Server (NTRS)
Stern, D. P.
1983-01-01
An assessment is presented of the current state of knowledge concerning Birkeland currents and the parallel electric field, with discussions focusing on the Birkeland primary region 1 sheets, the region 2 sheets which parallel them and appear to close in the partial ring current, the cusp currents (which may be correlated with the interplanetary B(y) component), and the Harang filament. The energy required by the parallel electric field and the associated particle acceleration processes appears to be derived from the Birkeland currents, for which evidence is adduced from particles, inverted V spectra, rising ion beams and expanded loss cones. Conics may on the other hand signify acceleration by electrostatic ion cyclotron waves associated with beams accelerated by the parallel electric field.
Attenberger, Ulrike I; Ingrisch, Michael; Dietrich, Olaf; Herrmann, Karin; Nikolaou, Konstantin; Reiser, Maximilian F; Schönberg, Stefan O; Fink, Christian
2009-09-01
Time-resolved pulmonary perfusion MRI requires both high temporal and spatial resolution, which can be achieved by using several nonconventional k-space acquisition techniques. The aim of this study is to compare the image quality of time-resolved 3D pulmonary perfusion MRI with different k-space acquisition techniques in healthy volunteers at 1.5 and 3 T. Ten healthy volunteers underwent contrast-enhanced time-resolved 3D pulmonary MRI at 1.5 and 3 T using the following k-space acquisition techniques: (a) generalized autocalibrating partial parallel acquisition (GRAPPA) with an internal acquisition of reference lines (IRS), (b) GRAPPA with a single "external" acquisition of reference lines (ERS) before the measurement, and (c) a combination of GRAPPA with an internal acquisition of reference lines and view sharing (VS). The spatial resolution was kept constant at both field strengths to exclusively evaluate the influences of the temporal resolution achieved with the different k-space sampling techniques on image quality. The temporal resolutions were 2.11 seconds IRS, 1.31 seconds ERS, and 1.07 seconds VS at 1.5 T, and 2.04 seconds IRS, 1.30 seconds ERS, and 1.19 seconds VS at 3 T. Image quality was rated by 2 independent radiologists with regard to signal intensity, perfusion homogeneity, artifacts (e.g., wrap-around, noise), and visualization of pulmonary vessels using a 3-point scale (1 = nondiagnostic, 2 = moderate, 3 = good). Furthermore, the signal-to-noise ratio in the lungs was assessed. At 1.5 T the lowest image quality (sum score: 154) was observed for the ERS technique and the highest quality for the VS technique (sum score: 201). In contrast, at 3 T images acquired with VS were hampered by strong artifacts and image quality was rated significantly inferior (sum score: 137) compared with IRS (sum score: 180) and ERS (sum score: 174).
Comparing 1.5 and 3 T, the overall rating of the IRS technique in particular (sum score: 180) was very similar at both field strengths. At 1.5 T the peak signal-to-noise ratio of the ERS was significantly lower in comparison to the IRS and the VS techniques (14.6 vs. 26.7 and 39.6, respectively, P < 0.004). Using the IRS sampling algorithm, comparable image quality and SNR can be achieved at 1.5 and 3 T. At 1.5 T, VS offers the best compromise between the conflicting requirements of further increased temporal resolution and image quality. In consequence, the gain in scanning efficiency from advanced k-space sampling techniques can be exploited for a further improvement of the image quality of pulmonary perfusion MRI.
Fault Tolerant Parallel Implementations of Iterative Algorithms for Optimal Control Problems
1988-01-21
... steps, but did not discuss any specific parallel implementation. Gajski [5] improved upon this result by performing the SIMD computation in... N = p², our approach reduces to that of [5], except that Gajski presents the coefficient computation and partial solution phases as a single... the SIMD algorithm presented by Gajski [5] can be most efficiently mapped to a unidirectional ring network with broadcasting capability. Based
Plasma Generator Using Spiral Conductors
NASA Technical Reports Server (NTRS)
Szatkowski, George N. (Inventor); Dudley, Kenneth L. (Inventor); Ticatch, Larry A. (Inventor); Smith, Laura J. (Inventor); Koppen, Sandra V. (Inventor); Nguyen, Truong X. (Inventor); Ely, Jay J. (Inventor)
2016-01-01
A plasma generator includes a pair of identical spiraled electrical conductors separated by dielectric material. Both spiraled conductors have inductance and capacitance wherein, in the presence of a time-varying electromagnetic field, the spiraled conductors resonate to generate a harmonic electromagnetic field response. The spiraled conductors lie in parallel planes and partially overlap one another in a direction perpendicular to the parallel planes. The geometric centers of the spiraled conductors define endpoints of a line that is non-perpendicular with respect to the parallel planes. A voltage source coupled across the spiraled conductors applies a voltage sufficient to generate a plasma in at least a portion of the dielectric material.
O'Connor, B P
2000-08-01
Popular statistical software packages do not have the proper procedures for determining the number of components in factor and principal components analyses. Parallel analysis and Velicer's minimum average partial (MAP) test are validated procedures, recommended widely by statisticians. However, many researchers continue to use alternative, simpler, but flawed procedures, such as the eigenvalues-greater-than-one rule. Use of the proper procedures might be increased if these procedures could be conducted within familiar software environments. This paper describes brief and efficient programs for using SPSS and SAS to conduct parallel analyses and the MAP test.
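As a sketch of what such a program computes, Horn's parallel analysis can be written in a few lines of Python; the ensemble size and the 95th-percentile criterion are common choices, not prescriptions from the paper:

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Horn's parallel analysis: retain components whose observed eigenvalue
    exceeds the chosen percentile of eigenvalues obtained from random
    normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, descending.
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        r = rng.standard_normal((n, p))
        rand[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    thresh = np.percentile(rand, percentile, axis=0)
    return int(np.sum(obs > thresh))
```

Unlike the eigenvalues-greater-than-one rule, the retention threshold here adapts to the sample size and number of variables.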
NASA Astrophysics Data System (ADS)
Sasikala, R.; Govindarajan, A.; Gayathri, R.
2018-04-01
This paper focuses on the behavior of dust particles in a fluid flowing between two parallel plates through a porous medium in the presence of a magnetic field, with constant suction at the upper plate and constant injection at the lower plate. The partial differential equations governing the flow are solved by a similarity transformation. The velocities of the fluid and the dust particles decrease as the Hartmann number increases.
Hybrid Parallelization of Adaptive MHD-Kinetic Module in Multi-Scale Fluid-Kinetic Simulation Suite
Borovikov, Sergey; Heerikhuisen, Jacob; Pogorelov, Nikolai
2013-04-01
The Multi-Scale Fluid-Kinetic Simulation Suite has a computational tool set for solving partially ionized flows. In this paper we focus on recent developments of the kinetic module which solves the Boltzmann equation using the Monte-Carlo method. The module has been recently redesigned to utilize intra-node hybrid parallelization. We describe in detail the redesign process, implementation issues, and modifications made to the code. Finally, we conduct a performance analysis.
The Refinement-Tree Partition for Parallel Solution of Partial Differential Equations
Mitchell, William F.
1998-01-01
Dynamic load balancing is considered in the context of adaptive multilevel methods for partial differential equations on distributed memory multiprocessors. An approach that periodically repartitions the grid is taken. The important properties of a partitioning algorithm are presented and discussed in this context. A partitioning algorithm based on the refinement tree of the adaptive grid is presented and analyzed in terms of these properties. Theoretical and numerical results are given. PMID:28009355
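A minimal sketch of the refinement-tree idea: a depth-first traversal keeps each subtree's elements contiguous, so cutting the resulting leaf sequence into equal chunks assigns whole subtrees to partitions wherever possible (the data structure and cut rule here are simplified assumptions, not the paper's algorithm):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node of the refinement tree; leaves are grid elements."""
    children: list = field(default_factory=list)

def leaves_in_order(node, out):
    """Depth-first leaf ordering: every subtree's leaves stay contiguous."""
    if not node.children:
        out.append(node)
    for c in node.children:
        leaves_in_order(c, out)
    return out

def partition(root, n_parts):
    """Cut the traversal order into n_parts nearly equal chunks."""
    leaves = leaves_in_order(root, [])
    n = len(leaves)
    bounds = [round(i * n / n_parts) for i in range(n_parts + 1)]
    return [leaves[bounds[i]:bounds[i + 1]] for i in range(n_parts)]
```

After adaptive refinement changes the leaf counts, repartitioning is just a re-run of the same traversal and cut.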
The Refinement-Tree Partition for Parallel Solution of Partial Differential Equations.
Mitchell, William F
1998-01-01
Dynamic load balancing is considered in the context of adaptive multilevel methods for partial differential equations on distributed memory multiprocessors. An approach that periodically repartitions the grid is taken. The important properties of a partitioning algorithm are presented and discussed in this context. A partitioning algorithm based on the refinement tree of the adaptive grid is presented and analyzed in terms of these properties. Theoretical and numerical results are given.
Parallel adaptive wavelet collocation method for PDEs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nejadmalayeri, Alireza, E-mail: Alireza.Nejadmalayeri@gmail.com; Vezolainen, Alexei, E-mail: Alexei.Vezolainen@Colorado.edu; Brown-Dymkoski, Eric, E-mail: Eric.Browndymkoski@Colorado.edu
2015-10-01
A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.
On Designing Multicore-Aware Simulators for Systems Biology Endowed with OnLine Statistics
Calcagno, Cristina; Coppo, Mario
2014-01-01
The paper's arguments concern enabling methodologies for the design of a fully parallel, online, interactive tool aiming to support bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool that performs the modeling, tuning, and sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories, which turn into big data that should be analysed by statistical and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage, which immediately produces a partial result. The simulation-analysis workflow is validated for performance and for effectiveness of the online analysis in capturing biological system behavior, on a multicore platform and representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming, which provide key features to software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems exhibiting multistable and oscillatory behavior are used as a testbed. PMID:25050327
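The pipelined simulation-analysis idea can be sketched with two Python generators, where the analysis stage emits a partial (running-mean) result after every completed trajectory; the random-walk model and names are illustrative assumptions, not the FastFlow code:

```python
import random

def simulate(n_traj, n_steps, seed=1):
    """Simulation stage: stream each trajectory out as soon as it finishes."""
    rng = random.Random(seed)
    for _ in range(n_traj):
        x, traj = 0.0, []
        for _ in range(n_steps):
            x += rng.uniform(-1, 1)
            traj.append(x)
        yield traj

def online_mean(trajectories):
    """Analysis stage: consume the stream, keeping a per-step running mean,
    so a partial result is available after every trajectory."""
    count, mean = 0, None
    for traj in trajectories:
        count += 1
        if mean is None:
            mean = list(traj)
        else:
            mean = [m + (x - m) / count for m, x in zip(mean, traj)]
        yield count, mean
```

In FastFlow the two stages would run concurrently as pipeline nodes; here laziness of the generators plays the role of the stream.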
Parallel approach to incorporating face image information into dialogue processing
NASA Astrophysics Data System (ADS)
Ren, Fuji
2000-10-01
There are many kinds of so-called irregular expressions in natural dialogues. Even if the content of a conversation is the same in words, different meanings can be conveyed by a person's feelings or facial expression. To understand dialogues well, a flexible dialogue processing system must infer the speaker's view properly. However, it is difficult to obtain the meaning of the speaker's sentences in various scenes using traditional methods. In this paper, a new approach for dialogue processing that incorporates information from the speaker's face is presented. We first divide conversation statements into several simple tasks. Second, we process each simple task using an independent processor. Third, we employ some of the speaker's face information to estimate the speakers' views and resolve ambiguities in dialogues. The approach presented in this paper can work efficiently because the independent processors run in parallel, writing partial results to a shared memory, incorporating partial results at appropriate points, and complementing each other. A parallel algorithm and a method for employing face information in dialogue machine translation are discussed, and some results are included in this paper.
On designing multicore-aware simulators for systems biology endowed with OnLine statistics.
Aldinucci, Marco; Calcagno, Cristina; Coppo, Mario; Damiani, Ferruccio; Drocco, Maurizio; Sciacca, Eva; Spinella, Salvatore; Torquati, Massimo; Troina, Angelo
2014-01-01
The paper's arguments concern enabling methodologies for the design of a fully parallel, online, interactive tool aiming to support bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool that performs the modeling, tuning, and sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories, which turn into big data that should be analysed by statistical and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage, which immediately produces a partial result. The simulation-analysis workflow is validated for performance and for effectiveness of the online analysis in capturing biological system behavior, on a multicore platform and representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming, which provide key features to software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems exhibiting multistable and oscillatory behavior are used as a testbed.
Two improved coherent optical feedback systems for optical information processing
NASA Technical Reports Server (NTRS)
Lee, S. H.; Bartholomew, B.; Cederquist, J.
1976-01-01
Coherent optical feedback systems are Fabry-Perot interferometers modified to perform optical information processing. Two new systems based on plane parallel and confocal Fabry-Perot interferometers are introduced. The plane parallel system can be used for contrast control, intensity level selection, and image thresholding. The confocal system can be used for image restoration and solving partial differential equations. These devices are simpler and less expensive than previous systems. Experimental results are presented to demonstrate their potential for optical information processing.
Hierarchically Parallelized Constrained Nonlinear Solvers with Automated Substructuring
NASA Technical Reports Server (NTRS)
Padovan, Joe; Kwang, Abel
1994-01-01
This paper develops a parallelizable multilevel multiple constrained nonlinear equation solver. The substructuring process is automated to yield appropriately balanced partitioning of each succeeding level. Due to the generality of the procedure, sequential, as well as partially and fully parallel environments can be handled. This includes both single and multiprocessor assignment per individual partition. Several benchmark examples are presented. These illustrate the robustness of the procedure as well as its capability to yield significant reductions in memory utilization and calculational effort due both to updating and inversion.
Chen, Tianbao; Gagliardo, Ron; Walker, Brian; Zhou, Mei; Shaw, Chris
2005-12-01
Phylloxin is a novel prototype antimicrobial peptide from the skin of Phyllomedusa bicolor. Here, we describe parallel identification and sequencing of phylloxin precursor transcript (mRNA) and partial gene structure (genomic DNA) from the same sample of lyophilized skin secretion using our recently-described cloning technique. The open-reading frame of the phylloxin precursor was identical in nucleotide sequence to that previously reported and alignment with the nucleotide sequence derived from genomic DNA indicated the presence of a 175 bp intron located in a near identical position to that found in the dermaseptins. The highly-conserved structural organization of skin secretion peptide genes in P. bicolor can thus be extended to include that encoding phylloxin (plx). These data further reinforce our assertion that application of the described methodology can provide robust genomic/transcriptomic/peptidomic data without the need for specimen sacrifice.
Ng, K L; Chan, H L; Choy, C L
2000-01-01
Composites of lead zirconate titanate (PZT) powder dispersed in a vinylidene fluoride-trifluoroethylene copolymer [P(VDF-TrFE)] matrix have been prepared by compression molding. Three groups of polarized samples have been prepared by poling: only the ceramic phase, the ceramic and polymer phases in parallel directions, and the two phases in antiparallel directions. The measured permittivities of the unpoled composites are consistent with the predictions of the Bruggeman model. The changes in the pyroelectric and piezoelectric coefficients of the poled composites with increasing ceramic volume fraction can be described by modified linear mixture rules. When the ceramic and copolymer phases are poled in the same direction, their pyroelectric activities reinforce while their piezoelectric activities partially cancel. However, when the ceramic and copolymer phases are poled in opposite directions, their piezoelectric activities reinforce while their pyroelectric activities partially cancel.
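A linear mixture rule of the kind referred to above can be sketched as a volume-fraction-weighted sum whose polymer term flips sign under antiparallel poling; the function and its arguments are hypothetical, and real coefficients carry material-specific signs:

```python
def mixture(phi, p_ceramic, p_polymer, antiparallel=False):
    """Hypothetical linear mixture rule: volume-weighted sum of the two
    phases' signed coefficients; poling the polymer phase antiparallel to
    the ceramic flips the sign of its contribution."""
    sign = -1.0 if antiparallel else 1.0
    return phi * p_ceramic + sign * (1.0 - phi) * p_polymer
```

With phase coefficients of opposite sign, same-direction poling partially cancels while opposite-direction poling reinforces, which is the qualitative pattern the abstract reports for the piezoelectric response.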
Motions of the hand expose the partial and parallel activation of stereotypes.
Freeman, Jonathan B; Ambady, Nalini
2009-10-01
Perceivers spontaneously sort other people's faces into social categories and activate the stereotype knowledge associated with those categories. In the work described here, participants, presented with sex-typical and sex-atypical faces (i.e., faces containing a mixture of male and female features), identified which of two gender stereotypes (one masculine and one feminine) was appropriate for the face. Meanwhile, their hand movements were measured by recording the streaming x, y coordinates of the computer mouse. As participants stereotyped sex-atypical faces, real-time motor responses exhibited a continuous spatial attraction toward the opposite-gender stereotype. These data provide evidence for the partial and parallel activation of stereotypes belonging to alternate social categories. Thus, perceptual cues of the face can trigger a graded mixture of simultaneously active stereotype knowledge tied to alternate social categories, and this mixture settles over time onto ultimate judgments.
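A standard spatial-attraction index for such mouse trajectories is the maximum perpendicular deviation from the straight start-to-end line; this is a generic sketch of that index, not the authors' analysis code:

```python
import numpy as np

def max_deviation(traj):
    """Signed maximum perpendicular deviation of an (x, y) trajectory from
    the straight line joining its start and end points; larger magnitude
    means stronger spatial attraction toward the competing response."""
    traj = np.asarray(traj, dtype=float)
    start, end = traj[0], traj[-1]
    d = end - start
    d = d / np.linalg.norm(d)
    rel = traj - start
    # Signed perpendicular distance via the 2-D cross product.
    perp = rel[:, 0] * d[1] - rel[:, 1] * d[0]
    return perp[np.argmax(np.abs(perp))]
```

Averaging this index over trials for sex-atypical versus sex-typical faces quantifies the continuous attraction described above.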
1985-11-18
Greenberg and K. Sakallah at Digital Equipment Corporation, and C-F. Chen, L. Nagel, and P. Subrahmanyam at AT&T Bell Laboratories, both for providing... Circuit Theory, McGraw-Hill, 1969. [37] R. Courant and D. Hilbert, Partial Differential Equations, Vol. 2 of Methods of Mathematical Physics, McGraw-Hill, N.Y., 1965. [44] R. Courant and D. Hilbert, Partial Differential Equations, Vol. 2 of Methods of Mathematical Physics, McGraw-Hill, N.Y., 1965.
MPF: A portable message passing facility for shared memory multiprocessors
NASA Technical Reports Server (NTRS)
Malony, Allen D.; Reed, Daniel A.; Mcguire, Patrick J.
1987-01-01
The design, implementation, and performance evaluation of a message passing facility (MPF) for shared memory multiprocessors are presented. The MPF is based on a message passing model conceptually similar to conversations. Participants (parallel processors) can enter or leave a conversation at any time. The message passing primitives for this model are implemented as a portable library of C function calls. The MPF is currently operational on a Sequent Balance 21000, and several parallel applications were developed and tested. Several simple benchmark programs are presented to establish interprocess communication performance for common patterns of interprocess communication. Finally, performance figures are presented for two parallel applications, linear systems solution, and iterative solution of partial differential equations.
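The conversation abstraction can be sketched in Python (the actual MPF is a C library for shared-memory machines; this class and its method names are illustrative assumptions):

```python
import queue

class Conversation:
    """Minimal sketch of the conversation model: participants may enter or
    leave at any time; a message posted to the conversation is delivered to
    every other current participant's private queue."""
    def __init__(self):
        self.members = {}
    def join(self, name):
        self.members[name] = queue.Queue()
    def leave(self, name):
        del self.members[name]
    def post(self, sender, msg):
        for name, q in self.members.items():
            if name != sender:
                q.put((sender, msg))
    def receive(self, name):
        return self.members[name].get_nowait()
```

In a shared-memory implementation, the per-member queues live in memory visible to all processors, which is what makes the primitives cheap.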
Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.
1996-01-01
We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.
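The Newton-Krylov core of NKS can be sketched matrix-free, with a finite-difference Jacobian-vector product inside GMRES; the Schwarz preconditioner, upwinding, and continuation strategy of the paper are omitted, and all names here are illustrative:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_krylov(F, x0, tol=1e-8, max_iter=20, eps=1e-7):
    """Inexact Newton: each step solves J(x) s = -F(x) with GMRES, using a
    finite-difference directional derivative instead of an explicit Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        # Matrix-free Jacobian-vector product: J v ~ (F(x + eps*v) - F(x)) / eps
        J = LinearOperator((x.size, x.size),
                           matvec=lambda v: (F(x + eps * v) - fx) / eps)
        s, _ = gmres(J, -fx)
        x = x + s
    return x
```

In the full NKS algorithm the GMRES solve would be preconditioned by the two-level overlapping Schwarz method, which is what keeps the inner iteration counts bounded as the mesh is refined.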
Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations
NASA Technical Reports Server (NTRS)
Chrisochoides, Nikos
1995-01-01
We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDE's) on multiprocessors. Multithreading is used as a means of exploiting concurrency at the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask overheads required for the dynamic balancing of processor workloads with computations required for the actual numerical solution of the PDE's. Also, multithreading can simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data parallel adaptive PDE computations. Unfortunately, multithreading does not always reduce program complexity, often makes code re-usability difficult, and increases software complexity.
Simulation of partially coherent light propagation using parallel computing devices
NASA Astrophysics Data System (ADS)
Magalhães, Tiago C.; Rebordão, José M.
2017-08-01
Light acquires or loses coherence as it propagates, and coherence is one of the few optical observables. Spectra can be derived from coherence functions, and understanding any interferometric experiment also relies upon coherence functions. Beyond the two limiting cases (full coherence or incoherence), the coherence of light is always partial and changes with propagation. We have implemented a code to compute the propagation of partially coherent light from the source plane to the observation plane using parallel computing devices (PCDs). In this paper, we restrict the propagation to free space only. To this end, we used the Open Computing Language (OpenCL) and the open-source toolkit PyOpenCL, which gives access to OpenCL parallel computation through Python. To test our code, we chose two coherence source models: an incoherent source and a Gaussian Schell-model source. In the former case, we considered two different source shapes: circular and rectangular. The results were compared to the theoretical values. Our implemented code allows one to choose between the PyOpenCL implementation and a standard one, i.e. using the CPU only. To test the computation time for each implementation (PyOpenCL and standard), we used several computer systems with different CPUs and GPUs. We used powers of two for the dimensions of the cross-spectral density matrix (e.g. 32⁴, 64⁴), and a significant speed increase is observed in the PyOpenCL implementation when compared to the standard one. This can be an important tool for studying new source models.
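For the Gaussian Schell-model case, the cross-spectral density matrix that such a code propagates can be assembled as follows (a 1-D NumPy sketch; the function name and parameter values are illustrative assumptions):

```python
import numpy as np

def gsm_csd(x, sigma_s=1.0, sigma_mu=0.5):
    """Cross-spectral density matrix of a 1-D Gaussian Schell-model source:
    W(x1, x2) = sqrt(S(x1) S(x2)) * mu(x1 - x2), with a Gaussian spectral
    density S and a Gaussian degree of coherence mu."""
    S = np.exp(-x ** 2 / (2 * sigma_s ** 2))
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    mu = np.exp(-(X1 - X2) ** 2 / (2 * sigma_mu ** 2))
    return np.sqrt(np.outer(S, S)) * mu
```

Free-space propagation then amounts to applying the propagation kernel to both arguments of W, which is the quadratically sized computation that benefits from OpenCL parallelization.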
Lesser-Rojas, Leonardo; Sriram, K. K.; Liao, Kuo-Tang; Lai, Shui-Chin; Kuo, Pai-Chia; Chu, Ming-Lee; Chou, Chia-Fu
2014-01-01
We have developed a two-step electron-beam lithography process to fabricate a tandem array of three pairs of tip-like gold nanoelectronic detectors with electrode gap size as small as 9 nm, embedded in a coplanar fashion to 60 nm deep, 100 nm wide, and up to 150 μm long nanochannels coupled to a world-micro-nanofluidic interface for easy sample introduction. Experimental tests with a sealed device using DNA-protein complexes demonstrate the coplanarity of the nanoelectrodes to the nanochannel surface. Further, this device could improve transverse current detection by correlated time-of-flight measurements of translocating samples, and serve as an autocalibrated velocimeter and nanoscale tandem Coulter counters for single molecule analysis of heterogeneous samples. PMID:24753731
Histogram analysis for smartphone-based rapid hematocrit determination
Jalal, Uddin M.; Kim, Sang C.; Shim, Joon S.
2017-01-01
A novel and rapid analysis technique using histograms has been proposed for the colorimetric quantification of blood hematocrits. A smartphone-based "Histogram" app for the detection of hematocrits has been developed, integrating the smartphone's embedded camera with a microfluidic chip via a custom-made optical platform. The developed histogram analysis is effective for the automatic detection of the sample channel, including auto-calibration, and can analyze single-channel as well as multi-channel images. Furthermore, the method is advantageous for the quantification of blood hematocrit under both equal and varying optical conditions. The rapid determination of blood hematocrit carries considerable information regarding physiological disorders, and the use of such reproducible, cost-effective, and standard techniques may effectively help with the diagnosis and prevention of a number of human diseases. PMID:28717569
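The abstract does not detail the histogram algorithm; as an illustration of histogram-based auto-detection of a sample channel, the sketch below applies an Otsu-style threshold (a standard histogram technique, not necessarily the authors') to a synthetic chip image with one darker, blood-filled channel:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Histogram-based (Otsu) threshold: maximize between-class variance."""
    hist, edges = np.histogram(img, bins=nbins, range=(0, 255))
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                      # class-0 (below threshold) weight
    m = np.cumsum(p * centers)             # cumulative first moment
    mt = m[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_b = (mt * w0 - m) ** 2 / (w0 * (1 - w0))
    var_b = np.nan_to_num(var_b)           # ignore empty classes
    return centers[np.argmax(var_b)]

# Synthetic image: bright background (~200), one darker channel (~60), mild noise.
rng = np.random.default_rng(0)
img = np.full((40, 100), 200.0) + rng.normal(0, 3, (40, 100))
img[:, 45:55] = 60.0 + rng.normal(0, 3, (40, 10))
t = otsu_threshold(img)
# Auto-detect the channel as the columns mostly below the threshold.
channel_cols = np.where((img < t).mean(axis=0) > 0.5)[0]
```

With this synthetic input, `channel_cols` recovers exactly the columns 45-54 where the channel was placed.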
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urniezius, Renaldas
2011-03-14
The principle of Maximum relative Entropy optimization was analyzed for dead-reckoning localization of a rigid body when observation data from two attached accelerometers were collected. Model constraints were derived from the relationships between the sensors. The experimental results confirmed that the noise in each accelerometer axis can be successfully filtered by utilizing the dependency between channels and the dependency between time-series data. Dependency between channels was used for the a priori calculation, and the a posteriori distribution was derived utilizing dependency between time-series data. Data from an autocalibration experiment were revisited by removing the initial assumption that the instantaneous rotation axis of the rigid body was known. Performance results confirmed that such an approach could be used for online dead-reckoning localization.
Auto calibration of a cone-beam-CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gross, Daniel; Heil, Ulrich; Schulze, Ralf
2012-10-15
Purpose: This paper introduces a novel autocalibration method for cone-beam CTs (CBCT) or flat-panel CTs, assuming a perfect rotation. The method is based on ellipse fitting. Autocalibration refers to accurate recovery of the geometric alignment of a CBCT device from projection images alone, without any manual measurements. Methods: The authors use test objects containing small, arbitrarily positioned radio-opaque markers. No information regarding the relative positions of the markers is used. In practice, the authors use three to eight metal ball bearings (diameter of 1 mm), e.g., positioned roughly in a vertical line such that their projection image curves on the detector preferably form large ellipses over the circular orbit. From this ellipse-to-curve mapping, and also from its inversion, the authors derive an explicit formula. Nonlinear optimization based on this mapping enables them to determine the six relevant parameters of the system up to the device rotation angle, which is sufficient to define the geometry of a CBCT machine assuming a perfect rotational movement. These parameters also include out-of-plane rotations. The authors evaluate their method by simulation based on data used in two similar approaches [L. von Smekal, M. Kachelrieß, E. Stepina, and W. A. Kalender, 'Geometric misalignment and calibration in cone-beam tomography,' Med. Phys. 31(12), 3242-3266 (2004); K. Yang, A. L. C. Kwan, D. F. Miller, and J. M. Boone, 'A geometric calibration method for cone beam CT systems,' Med. Phys. 33(6), 1695-1706 (2006)]. This allows a direct comparison of accuracy. Furthermore, the authors present real-world 3D reconstructions of a dry human spine segment and an electronic device. The reconstructions were computed from projections taken with a commercial dental CBCT device having two different focus-to-detector distances, both of which were calibrated with their method.
The authors compare their reconstruction with a reconstruction computed by the manufacturer of the CBCT device to demonstrate the achievable spatial resolution of their calibration procedure. Results: Compared to the results published in the most closely related work [K. Yang, A. L. C. Kwan, D. F. Miller, and J. M. Boone, 'A geometric calibration method for cone beam CT systems,' Med. Phys. 33(6), 1695-1706 (2006)], the simulation proved the greater accuracy of their method, as well as a standard deviation lower by roughly 1 order of magnitude. When compared to another similar approach [L. von Smekal, M. Kachelrieß, E. Stepina, and W. A. Kalender, 'Geometric misalignment and calibration in cone-beam tomography,' Med. Phys. 31(12), 3242-3266 (2004)], their results were roughly of the same order of accuracy. Their analysis revealed that the method is capable of sufficiently calibrating out-of-plane angles in cases of larger cone angles, where neglecting these angles negatively affects the reconstruction. Fine details in the 3D reconstructions of the spine segment and the electronic device indicate a high geometric calibration accuracy and the capability to produce state-of-the-art reconstructions. Conclusions: The method introduced here makes no requirements on the accuracy of the test object. In contrast to many previous autocalibration methods, their approach also includes out-of-plane rotations of the detector. Although assuming a perfect rotation, the method seems to be sufficiently accurate for a commercial CBCT scanner. For devices which require higher-dimensional geometry models, the method could be used as an initial calibration procedure.
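The authors' explicit ellipse-to-geometry formula is not given in the abstract; the sketch below shows only the generic first step such a method relies on, fitting a conic to projected marker positions by linear least squares (all numbers are synthetic):

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares conic fit: a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0.
    The coefficient vector is the right-singular vector of the design matrix
    with the smallest singular value (norm fixed to 1)."""
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D)
    return vt[-1]

# Noise-free marker trajectory: an ellipse centred at (3, -1), semi-axes 5 and
# 2, rotated 30 degrees -- the kind of curve a ball bearing traces over a
# circular orbit on the detector.
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
c30, s30 = np.cos(np.pi / 6), np.sin(np.pi / 6)
x = 3 + 5 * np.cos(t) * c30 - 2 * np.sin(t) * s30
y = -1 + 5 * np.cos(t) * s30 + 2 * np.sin(t) * c30
a, b, c, d, e, f = fit_conic(x, y)
residual = a * x**2 + b * x * y + c * y**2 + d * x + e * y + f
assert np.allclose(residual, 0, atol=1e-9)   # all points satisfy the conic
assert b**2 - 4 * a * c < 0                  # discriminant: it is an ellipse
```

In the actual method, the recovered ellipse parameters (center, axes, tilt) would then feed the nonlinear optimization over the six geometry parameters.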
Poudel, Lokendra; Steinmetz, Nicole F; French, Roger H; Parsegian, V Adrian; Podgornik, Rudolf; Ching, Wai-Yim
2016-08-03
We present a first-principles density functional study elucidating the effects of solvent, metal ions and topology on the electronic structure and hydrogen bonding of 12 well-designed three dimensional G-quadruplex (G4-DNA) models in different environments. Our study shows that the parallel strand structures are more stable in dry environments and aqueous solutions containing K(+) ions within the tetrad of guanine but conversely, that the anti-parallel structure is more stable in solutions containing the Na(+) ions within the tetrad of guanine. The presence of metal ions within the tetrad of the guanine channel always enhances the stability of the G4-DNA models. The parallel strand structures have larger HOMO-LUMO gaps than antiparallel structures, which are in the range of 0.98 eV to 3.11 eV. Partial charge calculations show that sugar and alkali ions are positively charged whereas nucleobases, PO4 groups and water molecules are all negatively charged. Partial charges on each functional group with different signs and magnitudes contribute differently to the electrostatic interactions involving G4-DNA and favor the parallel structure. A comparative study between specific pairs of different G4-DNA models shows that the Hoogsteen OH and NH hydrogen bonds in the guanine tetrad are significantly influenced by the presence of metal ions and water molecules, collectively affecting the structure and the stability of G4-DNA.
Tailoring of the partial magnonic gap in three-dimensional magnetoferritin-based magnonic crystals
NASA Astrophysics Data System (ADS)
Mamica, S.
2013-07-01
We investigate theoretically the use of magnetoferritin nanoparticles, self-assembled in the protein crystallization process, as the basis for the realization of 3D magnonic crystals in which the interparticle space is filled with a ferromagnetic material. Using the plane wave method we study the dependence of the width of the partial band gap and its central frequency on the total magnetic moment of the magnetoferritin core and the lattice constant of the magnetoferritin crystal. We show that by adjusting the combination of these two parameters the partial gap can be tailored in a wide frequency range and shifted to sub-terahertz frequencies. Moreover, the difference in the width of the partial gap for spin waves propagating in planes parallel and perpendicular to the external field allows for switching on and off the partial magnonic gap by changing the direction of the applied field.
Three-Dimensional High-Lift Analysis Using a Parallel Unstructured Multigrid Solver
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1998-01-01
A directional implicit unstructured agglomeration multigrid solver is ported to shared and distributed memory massively parallel machines using the explicit domain-decomposition and message-passing approach. Because the algorithm operates on local implicit lines in the unstructured mesh, special care is required in partitioning the problem for parallel computing. A weighted partitioning strategy is described which avoids breaking the implicit lines across processor boundaries, while incurring minimal additional communication overhead. Good scalability is demonstrated on a 128 processor SGI Origin 2000 machine and on a 512 processor CRAY T3E machine for reasonably fine grids. The feasibility of performing large-scale unstructured grid calculations with the parallel multigrid algorithm is demonstrated by computing the flow over a partial-span flap wing high-lift geometry on a highly resolved grid of 13.5 million points in approximately 4 hours of wall clock time on the CRAY T3E.
Picher, Maria M; Küpcü, Seta; Huang, Chun-Jen; Dostalek, Jakub; Pum, Dietmar; Sleytr, Uwe B; Ertl, Peter
2013-05-07
In the current work we have developed a lab-on-a-chip containing embedded amperometric sensors in four individually addressable microreactors, coated with crystalline surface protein monolayers, to provide continuous, stable, reliable and accurate detection of blood glucose. It is envisioned that the microfluidic device will be used in a feedback loop to assess natural variations in blood glucose levels during hemodialysis and allow the individual adjustment of glucose. Reliable and accurate detection of blood glucose is accomplished by simultaneously performing (a) blood glucose measurements, (b) autocalibration routines, (c) mediator-interference detection, and (d) background subtractions. The electrochemical detection of blood glucose variations in the absence of electrode fouling events is achieved by integrating crystalline surface layer proteins (S-layer) that function as an efficient antifouling coating, a highly oriented immobilization matrix for biomolecules, and an effective molecular sieve with pore sizes of 4 to 5 nm. We demonstrate that the S-layer protein SbpA (from Lysinibacillus sphaericus CCM 2177) readily forms monomolecular lattice structures at the various microchip surfaces (e.g. glass, PDMS, platinum and gold) within 60 min, eliminating unspecific adsorption events in the presence of human serum albumin, human plasma and freshly-drawn blood samples. The highly isoporous SbpA coating allows undisturbed diffusion of the mediator to the electrode surface, thus enabling bioelectrochemical measurements of glucose concentrations from 500 μM to 50 mM (calibration slope δI/δc of 8.7 nA mM(-1)). A final proof of concept implementing the four-microreactor microfluidic design is demonstrated using freshly drawn blood. Accurate and drift-free assessment of blood glucose concentrations (6.4 mM) is accomplished over 130 min at 37 °C using immobilized glucose oxidase by calculating the difference between autocalibration (10 mM glc) and background measurements. The novel combination of biologically-derived nanostructured surfaces with microchip technology constitutes a powerful new tool for multiplexed analysis of complex samples.
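The drift compensation described above (difference between autocalibration and background measurements) amounts to one-point calibration arithmetic. A hedged sketch, using the reported 8.7 nA/mM slope but otherwise made-up currents:

```python
def glucose_mM(i_sample_nA, i_background_nA, i_cal_nA, c_cal_mM=10.0):
    """Estimate glucose concentration from the sample/background current
    difference, scaled by a one-point autocalibration at a known
    concentration c_cal_mM.  Subtracting the background from both the sample
    and the calibration reading cancels a common drift/offset term."""
    slope = (i_cal_nA - i_background_nA) / c_cal_mM   # nA per mM
    return (i_sample_nA - i_background_nA) / slope

# Hypothetical currents consistent with the reported 8.7 nA/mM sensitivity:
i_bg = 5.0                         # background channel (no enzyme)
i_cal = i_bg + 8.7 * 10.0          # autocalibration channel at 10 mM glucose
i_sample = i_bg + 8.7 * 6.4        # sample channel, blood at ~6.4 mM
print(round(glucose_mM(i_sample, i_bg, i_cal), 1))  # -> 6.4
```

The actual device interleaves these four measurements across its four microreactors; the arithmetic above is only the post-processing step.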
Dinç, Erdal; Ertekin, Zehra Ceren; Büker, Eda
2016-09-01
Two-way and three-way calibration models were applied to ultra high performance liquid chromatography with photodiode array data with coeluted peaks in the same wavelength and time regions for the simultaneous quantitation of ciprofloxacin and ornidazole in tablets. The chromatographic data cube (tensor) was obtained by recording chromatographic spectra of the standard and sample solutions containing ciprofloxacin and ornidazole with sulfadiazine as an internal standard as a function of time and wavelength. Parallel factor analysis and trilinear partial least squares were used as three-way calibrations for the decomposition of the tensor, whereas three-way unfolded partial least squares was applied as a two-way calibration to the unfolded dataset obtained from the data array of ultra high performance liquid chromatography with photodiode array detection. The validity and ability of two-way and three-way analysis methods were tested by analyzing validation samples: synthetic mixture, interday and intraday samples, and standard addition samples. Results obtained from two-way and three-way calibrations were compared to those provided by traditional ultra high performance liquid chromatography. The proposed methods, parallel factor analysis, trilinear partial least squares, unfolded partial least squares, and traditional ultra high performance liquid chromatography were successfully applied to the quantitative estimation of the solid dosage form containing ciprofloxacin and ornidazole. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
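The trilinear model underlying parallel factor analysis can be illustrated on a synthetic rank-1 tensor. The alternating-least-squares sketch below is a generic CP decomposition, not the authors' software, and the elution, spectral, and concentration profiles are hypothetical:

```python
import numpy as np

def cp_rank1(T, iters=30):
    """Alternating least squares for a rank-1 CP (PARAFAC-style) model
    T[i,j,k] ~ a[i] * b[j] * c[k] of a time x wavelength x sample tensor."""
    rng = np.random.default_rng(0)
    b = rng.standard_normal(T.shape[1])
    c = rng.standard_normal(T.shape[2])
    for _ in range(iters):  # each update is a closed-form least-squares step
        a = np.einsum("ijk,j,k->i", T, b, c) / ((b @ b) * (c @ c))
        b = np.einsum("ijk,i,k->j", T, a, c) / ((a @ a) * (c @ c))
        c = np.einsum("ijk,i,j->k", T, a, b) / ((a @ a) * (b @ b))
    return a, b, c

# Noise-free tensor from one elution profile, one spectrum, four samples.
t = np.exp(-0.5 * (np.arange(20) - 8.0) ** 2 / 4.0)   # chromatographic peak
s = np.exp(-0.5 * (np.arange(15) - 5.0) ** 2 / 9.0)   # spectral profile
conc = np.array([0.5, 1.0, 1.5, 2.0])                 # sample loadings
T = np.einsum("i,j,k->ijk", t, s, conc)
a, b, c = cp_rank1(T)
assert np.allclose(T, np.einsum("i,j,k->ijk", a, b, c), atol=1e-8)
```

Real chromatographic data need more than one component (analytes plus internal standard) and a rank estimate, but the same alternating updates apply columnwise.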
Lin, M; Sun, P; Zhang, G; Xu, X; Liu, G; Miao, H; Yang, Y; Xu, H; Zhang, L; Wu, P; Li, M
2014-03-01
Normal liver has a great potential of regenerative capacity after partial hepatectomy. In clinic, however, most patients receiving partial hepatectomy are usually suffering from chronic liver diseases with severely damaged hepatocyte population. Under these conditions, activation of hepatic progenitor cell (oval cell in rodents) population might be considered as an alternative mean to enhance liver functional recovery. Vitamin K2 has been shown to promote liver functional recovery in patients with liver cirrhosis. In this study, we explored the possibility of vitamin K2 treatment in activating hepatic oval cell for liver regeneration with the classic 2-acetamido-fluorene/partial hepatectomy (2-AAF/PH) model in Sprague-Dawley rats. In 2-AAF/PH animals, vitamin K2 treatment induced a dose-dependent increase of liver regeneration as assessed by the weight ratio of remnant liver versus whole body and by measuring serum albumin level. In parallel, a drastic expansion of oval cell population as assessed by anti-OV6 and anti-CK19 immunostaining was noticed in the periportal zone of the remnant liver. Since matrilin-2 was linked to oval cell proliferation and liver regeneration after partial hepatectomy, we assessed its expression at both the mRNA and protein levels. The results revealed a significant increase after vitamin K2 treatment in parallel with the expansion of oval cell population. Consistently, knocking down matrilin-2 expression in vivo largely reduced vitamin K2-induced liver regeneration and oval cell proliferation in 2-AAF/PH animals. In conclusion, these data suggest that vitamin K2 treatment enhances liver regeneration after partial hepatectomy, which is associated with oval cell expansion and matrilin-2 up-regulation.
Wu, Y.; Liu, S.
2012-01-01
Parameter optimization and uncertainty issues are a great challenge for the application of large environmental models like the Soil and Water Assessment Tool (SWAT), a physically based hydrological model for simulating water and nutrient cycles at the watershed scale. In this study, we present a comprehensive modeling environment for SWAT, including automated calibration and sensitivity and uncertainty analysis capabilities, through integration with the R package Flexible Modeling Environment (FME). To address challenges (e.g., calling the model in R and transferring variables between Fortran and R) in developing such a two-language coupling framework, 1) we converted the Fortran-based SWAT model to an R function (R-SWAT) using the RFortran platform, and alternatively 2) we compiled SWAT as a Dynamic Link Library (DLL). We then wrapped SWAT (via R-SWAT) with FME to perform complex applications including parameter identifiability, inverse modeling, and sensitivity and uncertainty analysis in the R environment. The final R-SWAT-FME framework has the following key functionalities: automatic initialization of R, running Fortran-based SWAT and R commands in parallel, transferring parameters and model output between SWAT and R, and inverse modeling with visualization. To examine this framework and demonstrate how it works, we used a case study simulating streamflow in the Cedar River Basin in Iowa in the United States and compared the framework with the built-in auto-calibration tool of SWAT in parameter optimization. Results indicate that both methods performed well, and similarly, in searching for a set of optimal parameters. Nonetheless, R-SWAT-FME is more attractive due to its instant visualization and its potential to take advantage of other R packages (e.g., for inverse modeling and statistical graphics). The methods presented in this paper are readily adaptable to other model applications that require automated calibration and sensitivity and uncertainty analysis capabilities.
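R-SWAT-FME itself couples R and Fortran; as a language-neutral illustration of the inverse-modeling step it automates, the sketch below calibrates a single parameter of a toy linear-reservoir runoff model by brute-force least squares (the model and all numbers are hypothetical stand-ins, not SWAT):

```python
import numpy as np

def linear_reservoir(k, rain):
    """Toy daily rainfall-runoff model: storage S_t = S_{t-1} + rain_t - k*S_{t-1},
    discharge q_t = k * S_t.  A minimal stand-in for a watershed model."""
    q = np.empty_like(rain)
    s = 0.0
    for i, r in enumerate(rain):
        s = s + r - k * s
        q[i] = k * s
    return q

rng = np.random.default_rng(1)
rain = rng.uniform(0.0, 5.0, 100)
obs = linear_reservoir(0.3, rain)      # "observed" streamflow, true k = 0.3

# Inverse modeling by exhaustive search: pick k minimizing squared error.
grid = np.linspace(0.05, 0.95, 181)    # step 0.005, includes 0.3
sse = [np.sum((linear_reservoir(k, rain) - obs) ** 2) for k in grid]
k_best = grid[int(np.argmin(sse))]
assert abs(k_best - 0.3) < 1e-9        # exact recovery (no observation noise)
```

FME replaces the grid search with gradient-based or Monte Carlo methods and adds identifiability and uncertainty diagnostics around the same model-call loop.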
Drögemüller, Cord; Tetens, Jens; Sigurdsson, Snaevar; Gentile, Arcangelo; Testoni, Stefania; Lindblad-Toh, Kerstin; Leeb, Tosso
2010-01-01
Arachnomelia is a monogenic recessive defect of skeletal development in cattle. The causative mutation was previously mapped to a ∼7 Mb interval on chromosome 5. Here we show that array-based sequence capture and massively parallel sequencing technology, combined with the typical family structure in livestock populations, facilitates the identification of the causative mutation. We re-sequenced the entire critical interval in a healthy partially inbred cow carrying one copy of the critical chromosome segment in its ancestral state and one copy of the same segment with the arachnomelia mutation, and we detected a single heterozygous position. The genetic makeup of several partially inbred cattle provides extremely strong support for the causality of this mutation. The mutation represents a single base insertion leading to a premature stop codon in the coding sequence of the SUOX gene and is perfectly associated with the arachnomelia phenotype. Our findings suggest an important role for sulfite oxidase in bone development. PMID:20865119
An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model
NASA Astrophysics Data System (ADS)
Tiernan, E. D.; Hodges, B. R.
2017-12-01
The stormwater management model (SWMM) is a clustered model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine makes use of a preliminary sensitivity analysis to reduce the dimensionality of the parameter space, at which point a multi-objective genetic algorithm (a modified Non-dominated Sorting Genetic Algorithm II) determines the Pareto front for the objective functions within the parameter space. The solutions on this Pareto front represent optimized parameter value sets for the catchment behavior that could not have been reasonably obtained through manual calibration.
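The core test inside NSGA-II's non-dominated sorting can be sketched in a few lines. This is a generic Pareto-front filter for minimization, not the routine's actual implementation, and the objective values are hypothetical:

```python
def pareto_front(points):
    """Return the non-dominated subset of `points` when every objective is
    minimized: p is kept unless some other point q is <= p in all objectives
    and differs from p (i.e. q dominates p)."""
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Two objectives, e.g. (peak-flow error, runoff-volume error) for candidate
# SWMM parameter sets (made-up numbers).
pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
print(pareto_front(pts))  # -> [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (2.5, 2.5)]
```

Here (3.0, 4.0) is dropped because (2.0, 3.0) is better in both objectives; the survivors are the trade-off set the designer chooses from.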
Fang, Ning; Sun, Wei
2015-04-21
A method, apparatus, and system for improved variable-angle total internal reflection fluorescence microscopy (VA-TIRFM). The method comprises automatically controlled calibration of one or more laser sources by precise control of the presentation of each laser relative to a sample, in small incremental changes of incident angle over a range of critical TIR angles. The calibration then allows precise scanning of the sample at any of those calibrated angles for higher and more accurate resolution, and better combination of the scans for super-resolution reconstruction of the sample. Optionally, the system can be controlled to set incident angles of the excitation laser at sub-critical angles for pseudo-TIRFM. Optionally, both above-critical-angle and sub-critical-angle measurements can be accomplished with the same system.
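The critical angle that separates the TIRF and pseudo-TIRF regimes referred to above follows from Snell's law. A minimal sketch, assuming typical glass and aqueous-sample refractive indices (illustrative values, not from the patent):

```python
import math

def critical_angle_deg(n1, n2):
    """Critical angle for total internal reflection at an n1 -> n2 interface
    (requires n1 > n2): theta_c = arcsin(n2 / n1)."""
    if n2 >= n1:
        raise ValueError("TIR requires light going from denser to rarer medium")
    return math.degrees(math.asin(n2 / n1))

# Glass coverslip (n ~ 1.515) against aqueous sample (n ~ 1.33): incident
# angles above theta_c give evanescent TIRF illumination; angles just below
# give the sub-critical (pseudo-TIRF) mode mentioned in the abstract.
theta_c = critical_angle_deg(1.515, 1.33)
print(round(theta_c, 1))  # -> 61.4
```

Calibrating "a range of critical TIR angles" then means stepping the incident angle in small increments around this value for each laser source.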
System-Level Design of a 64-Channel Low Power Neural Spike Recording Sensor.
Delgado-Restituto, Manuel; Rodriguez-Perez, Alberto; Darie, Angela; Soto-Sanchez, Cristina; Fernandez-Jover, Eduardo; Rodriguez-Vazquez, Angel
2017-04-01
This paper reports an integrated 64-channel neural spike recording sensor, together with all the circuitry to process and configure the channels, process the neural data, transmit the information via a wireless link, and receive the required instructions. Neural signals are acquired, filtered, digitized and compressed in the channels. Additionally, each channel implements an auto-calibration algorithm which individually configures the transfer characteristics of the recording site. The system has two transmission modes: in one, the information captured by the channels is sent as uncompressed raw data; in the other, feature vectors extracted from the detected neural spikes are released. Data streams coming from the channels are serialized by the embedded digital processor. Experimental results, including in vivo measurements, show that the power consumption of the complete system is lower than 330 μW.
Physics Structure Analysis of Parallel Waves Concept of Physics Teacher Candidate
NASA Astrophysics Data System (ADS)
Sarwi, S.; Supardi, K. I.; Linuwih, S.
2017-04-01
The aim of this research was to find the parallel structure of wave physics concepts and the factors that influence the formation of parallel conceptions among physics teacher candidates. The method used was qualitative research of the cross-sectional design type. The subjects were five students in the third-semester basic physics course and six students in the fifth-semester wave course. Data collection techniques used think-aloud protocols and written tests. Quantitative data were analysed with a descriptive percentage technique. The data analysis technique for belief and awareness of answers uses an explanatory analysis. Results of the research include: 1) the structure of the concept can be displayed through the illustration of a map containing the theoretical core, supplements to the theory, and phenomena that occur daily; 2) a trend toward parallel conceptions of wave physics was identified for stationary waves, resonance of sound, and the propagation of transverse electromagnetic waves; 3) the influences on the parallel conceptions are reading of textbooks that is less than comprehensive, and partial understanding of the knowledge forming the structure of the theory.
NASA Astrophysics Data System (ADS)
Tatsuura, Satoshi; Wada, Osamu; Furuki, Makoto; Tian, Minquan; Sato, Yasuhiro; Iwasa, Izumi; Pu, Lyong Sun
2001-04-01
In this study, we introduce a new concept of all-optical two-dimensional serial-to-parallel pulse converters. Femtosecond optical pulses can be understood as thin plates of light traveling in space. When a femtosecond signal-pulse train and a single gate pulse were fed onto a material with a finite incident angle, each signal-pulse plate met the gate-pulse plate at different locations in the material due to the time-of-flight effect. Meeting points can be made two-dimensional by adding a partial time delay to the gate pulse. By placing a nonlinear optical material at an appropriate position, two-dimensional serial-to-parallel conversion of a signal-pulse train can be achieved with a single gate pulse. We demonstrated the detection of parallel outputs from a 1-Tb/s optical-pulse train through the use of a BaB2O4 crystal. We also succeeded in demonstrating 1-Tb/s serial-to-parallel operation through the use of a novel organic nonlinear optical material, squarylium-dye J-aggregate film, which exhibits ultrafast recovery of bleached absorption.
Epitaxial relationship of semipolar s-plane (1101) InN grown on r-plane sapphire
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimitrakopulos, G. P.
2012-07-02
The heteroepitaxy of semipolar s-plane (1101) InN grown directly on r-plane sapphire by plasma-assisted molecular beam epitaxy is studied using transmission electron microscopy techniques. The epitaxial relationship is determined to be (1101)_InN ∥ (1102)_Al2O3, [1120]_InN ∥ [2021]_Al2O3, [1102]_InN ≈∥ [0221]_Al2O3, which ensures a 0.7% misfit along [1120]_InN. Two orientation variants are identified. Proposed geometrical factors contributing to the high density of basal stacking faults, partial dislocations, and sphalerite cubic pockets include the misfit accommodation and reduction, as well as the accommodation of lattice twist.
Multiscale Simulations of Magnetic Island Coalescence
NASA Technical Reports Server (NTRS)
Dorelli, John C.
2010-01-01
We describe a new interactive parallel Adaptive Mesh Refinement (AMR) framework written in the Python programming language. This new framework, PyAMR, hides the details of parallel AMR data structures and algorithms (e.g., domain decomposition, grid partitioning, and inter-process communication), allowing the user to focus on the development of algorithms for advancing the solution of a system of partial differential equations on a single uniform mesh. We demonstrate the use of PyAMR by simulating the pairwise coalescence of magnetic islands using the resistive Hall MHD equations. Techniques for coupling different physics models on different levels of the AMR grid hierarchy are discussed.
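The feature described above, hiding domain decomposition and inter-process communication so the user codes for a single uniform mesh, can be illustrated serially. The sketch below advances a 1-D diffusion problem on one mesh and on two ghost-cell-coupled subdomains and checks that the results agree; it is a toy stand-in, not PyAMR's API:

```python
import numpy as np

def diffuse(u, steps, nu=0.2):
    """Explicit diffusion on one uniform mesh (end values held fixed)."""
    u = u.copy()
    for _ in range(steps):
        u[1:-1] = u[1:-1] + nu * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

def diffuse_decomposed(u, steps, nu=0.2):
    """Same update with the mesh split into two subdomains; one ghost cell is
    exchanged per side each step -- the bookkeeping an AMR/MPI layer hides."""
    n = len(u) // 2
    left, right = u[:n + 1].copy(), u[n - 1:].copy()   # one ghost cell each
    for _ in range(steps):
        left[-1], right[0] = right[1], left[-2]        # ghost exchange
        left[1:-1] += nu * (left[2:] - 2 * left[1:-1] + left[:-2])
        right[1:-1] += nu * (right[2:] - 2 * right[1:-1] + right[:-2])
    return np.concatenate([left[:-1], right[1:]])      # drop ghosts, reassemble

u0 = np.zeros(32)
u0[10] = 1.0                                           # initial hot spot
assert np.allclose(diffuse(u0, 50), diffuse_decomposed(u0, 50))
```

The user-visible stencil update is identical in both versions; only the ghost exchange and reassembly differ, which is exactly the layer such a framework abstracts away.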
Smith, Cameron W.; Granzow, Brian; Diamond, Gerrett; ...
2017-01-01
Unstructured mesh methods, like finite elements and finite volumes, support the effective analysis of complex physical behaviors modeled by partial differential equations over general threedimensional domains. The most reliable and efficient methods apply adaptive procedures with a-posteriori error estimators that indicate where and how the mesh is to be modified. Although adaptive meshes can have two to three orders of magnitude fewer elements than a more uniform mesh for the same level of accuracy, there are many complex simulations where the meshes required are so large that they can only be solved on massively parallel systems.
Parallel Algorithm Solves Coupled Differential Equations
NASA Technical Reports Server (NTRS)
Hayashi, A.
1987-01-01
Numerical methods adapted to concurrent processing. The algorithm solves a set of coupled partial differential equations by numerical integration. Adapted to run on a hypercube computer, the algorithm separates the problem into smaller problems solved concurrently. The increase in computing speed with concurrent processing over that achievable with conventional sequential processing is appreciable, especially for large problems.
Multigrid methods with space–time concurrency
Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.; ...
2017-10-06
Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.
Automating the parallel processing of fluid and structural dynamics calculations
NASA Technical Reports Server (NTRS)
Arpasi, Dale J.; Cole, Gary L.
1987-01-01
The NASA Lewis Research Center is actively involved in the development of expert system technology to assist users in applying parallel processing to computational fluid and structural dynamic analysis. The goal of this effort is to eliminate the necessity for the physical scientist to become a computer scientist in order to effectively use the computer as a research tool. Programming and operating software utilities have previously been developed to solve systems of ordinary nonlinear differential equations on parallel scalar processors. Current efforts are aimed at extending these capabilities to systems of partial differential equations that describe the complex behavior of fluids and structures within aerospace propulsion systems. This paper presents some important considerations in the redesign, in particular the need for algorithms and software utilities that can automatically identify data flow patterns in the application program and partition and allocate calculations to the parallel processors. A library-oriented multiprocessing concept for integrating the hardware and software functions is described.
Costa-Font, Joan; Kanavos, Panos
2007-01-01
To examine the effects of parallel simvastatin importation on drug prices in three of the main parallel importing countries in the European Union, namely the United Kingdom, Germany, and the Netherlands. To estimate the market share of parallel imported simvastatin and the unit price (both locally produced and parallel imported) adjusted by defined daily dose in the importing countries and in the exporting country (Spain). Ordinary least squares regression was used to examine the potential price competition resulting from parallel drug trade between 1997 and 2002. The market share of parallel imported simvastatin progressively expanded (especially in the United Kingdom and Germany) in the period examined, although the price difference between parallel imported and locally sourced simvastatin was not significant. Prices tended to rise in the United Kingdom and Germany and to decline in the Netherlands. We found no evidence of pro-competitive effects resulting from the expansion of parallel trade. The development of parallel drug importation in the European Union produced unexpected effects (limited competition) on prices that differ from those expected from the introduction of a new competitor. This is partially the result of drug price regulation's scant incentives for competition and of the lack of transparency in the drug reimbursement system, especially due to the effect of informal discounts (not observable to researchers). The case of simvastatin reveals that savings to the health system from parallel trade are trivial. Finally, of the three countries examined, the only country that shows a moderate downward pattern in simvastatin prices is the Netherlands. This effect can be attributed to the existence of a system that claws back informal discounts.
Transport in a toroidally confined pure electron plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crooks, S.M.; ONeil, T.M.
1996-07-01
O'Neil and Smith [T.M. O'Neil and R.A. Smith, Phys. Plasmas 1, 8 (1994)] have argued that a pure electron plasma can be confined stably in a toroidal magnetic field configuration. This paper shows that the toroidal curvature of the magnetic field of necessity causes slow cross-field transport. The transport mechanism is similar to magnetic pumping and may be understood by considering a single flux tube of plasma. As the flux tube of plasma undergoes poloidal E×B drift rotation about the center of the plasma, the length of the flux tube and the magnetic field strength within the flux tube oscillate, and this produces corresponding oscillations in T∥ and T⊥. The collisional relaxation of T∥ toward T⊥ produces a slow dissipation of electrostatic energy into heat and a consequent expansion (cross-field transport) of the plasma. In the limit where the cross section of the plasma is nearly circular, the radial particle flux is given by Γ_r = (1/2) ν⊥,∥ T (r/ρ₀)² n / (−e ∂Φ/∂r), where ν⊥,∥ is the collisional equipartition rate, ρ₀ is the major radius at the center of the plasma, and r is the minor radius measured from the center of the plasma. The transport flux is first calculated using this simple physical picture and then is calculated by solving the drift-kinetic Boltzmann equation. This latter calculation is not limited to a plasma with a circular cross section. © 1996 American Institute of Physics.
A Simple Application of Compressed Sensing to Further Accelerate Partially Parallel Imaging
Miao, Jun; Guo, Weihong; Narayan, Sreenath; Wilson, David L.
2012-01-01
Compressed Sensing (CS) and partially parallel imaging (PPI) enable fast MR imaging by reducing the amount of k-space data required for reconstruction. Past attempts to combine these two have been limited by the incoherent sampling requirement of CS, since PPI routines typically sample on a regular (coherent) grid. Here, we developed a new method, “CS+GRAPPA,” to overcome this limitation. We decomposed sets of equidistant samples into multiple random subsets, reconstructed each subset using CS, and averaged the results to obtain a final CS k-space reconstruction. We used both a standard CS reconstruction and an edge- and joint-sparsity-guided CS reconstruction. We tested these intermediate results on both synthetic and real MR phantom data, and performed a human observer experiment to determine the effectiveness of decomposition and to optimize the number of subsets. We then used these CS reconstructions to calibrate the GRAPPA complex coil weights. In vivo parallel MR brain and heart data sets were used. An objective image quality evaluation metric, Case-PDM, was used to quantify image quality. Coherent aliasing and noise artifacts were significantly reduced using two decompositions. More decompositions further reduced coherent aliasing and noise artifacts but introduced blurring. However, the blurring was effectively minimized using our new edge- and joint-sparsity-guided CS with two decompositions. Numerical results on parallel data demonstrated that the combined method greatly improved image quality as compared to standard GRAPPA, on average halving Case-PDM scores across a range of sampling rates. The proposed technique allowed the same Case-PDM scores as standard GRAPPA using about half the number of samples. We conclude that the new method augments GRAPPA by combining it with CS, allowing CS to work even when the k-space sampling pattern is equidistant. PMID:22902065
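The decomposition-and-average step described above can be sketched as follows. This is a minimal NumPy illustration; `cs_recon` is a placeholder for whatever CS solver is used, which the abstract does not specify:

```python
import numpy as np

def decompose_equidistant(sample_idx, n_subsets, rng):
    """Randomly partition equidistant k-space line indices into subsets.

    Each subset is an incoherent (random) sampling pattern, as CS prefers,
    even though their union is the regular PPI grid.
    """
    idx = rng.permutation(sample_idx)
    return np.array_split(idx, n_subsets)

def cs_plus_average(kspace, sample_idx, n_subsets, cs_recon, rng):
    """Reconstruct each random subset with CS and average the results."""
    recons = []
    for subset in decompose_equidistant(sample_idx, n_subsets, rng):
        mask = np.zeros(kspace.shape[0], dtype=bool)
        mask[subset] = True
        recons.append(cs_recon(kspace, mask))  # user-supplied CS solver
    return np.mean(recons, axis=0)
```

The averaged k-space would then feed the GRAPPA coil-weight calibration, per the abstract.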
The characteristics and limitations of the MPS/MMS battery charging system
NASA Technical Reports Server (NTRS)
Ford, F. E.; Palandati, C. F.; Davis, J. F.; Tasevoli, C. M.
1980-01-01
A series of tests was conducted on two 12 ampere hour nickel cadmium batteries under a simulated cycle regime using the multiple voltage versus temperature levels designed into the modular power system (MPS). These tests included: battery recharge as a function of voltage control level; temperature imbalance between two parallel batteries; a shorted or partially shorted cell in one of the two parallel batteries; impedance imbalance of one of the parallel battery circuits; and disabling and enabling one of the batteries from the bus at various charge and discharge states. The results demonstrate that the eight commandable voltage versus temperature levels designed into the MPS provide a very flexible system that not only can accommodate a wide range of normal power system operation, but also provides a high degree of flexibility in responding to abnormal operating conditions.
A Multiscale Parallel Computing Architecture for Automated Segmentation of the Brain Connectome
Knobe, Kathleen; Newton, Ryan R.; Schlimbach, Frank; Blower, Melanie; Reid, R. Clay
2015-01-01
Several groups in neurobiology have embarked on deciphering the brain circuitry using large-scale imaging of a mouse brain and manual tracing of the connections between neurons. Creating a graph of the brain circuitry, also called a connectome, could have a huge impact on the understanding of neurodegenerative diseases such as Alzheimer’s disease. Although considerably smaller than a human brain, a mouse brain already exhibits one billion connections, and manually tracing the connectome of a mouse brain can only be achieved partially. This paper proposes to scale up the tracing by using automated image segmentation and a parallel computing approach designed for domain experts. We explain the design decisions behind our parallel approach and we present our results for the segmentation of the vasculature and the cell nuclei, which have been obtained without any manual intervention. PMID:21926011
Extendability of parallel sections in vector bundles
NASA Astrophysics Data System (ADS)
Kirschner, Tim
2016-01-01
I address the following question: Given a differentiable manifold M, what are the open subsets U of M such that, for all vector bundles E over M and all linear connections ∇ on E, any ∇-parallel section in E defined on U extends to a ∇-parallel section in E defined on M? For simply connected manifolds M (among others) I describe the entirety of all such sets U which are, in addition, the complement of a C1 submanifold, boundary allowed, of M. This delivers a partial positive answer to a problem posed by Antonio J. Di Scala and Gianni Manno (2014). Furthermore, in case M is an open submanifold of Rn, n ≥ 2, I prove that the complement of U in M, not required to be a submanifold now, can have arbitrarily large n-dimensional Lebesgue measure.
Partial Arc Curvilinear Direct Drive Servomotor
NASA Technical Reports Server (NTRS)
Sun, Xiuhong (Inventor)
2014-01-01
A partial arc servomotor assembly having a curvilinear U-channel with two parallel rare earth permanent magnet plates facing each other; a pivoted ironless three-phase coil armature winding moves between the plates. An encoder read head is fixed to a mounting plate above the coil armature winding, and a curvilinear encoder scale is curved to be coaxial with the curvilinear U-channel permanent magnet track formed by the permanent magnet plates. Driven by a set of miniaturized power electronics devices closely looped with a positioning feedback encoder, the angular position and velocity of the pivoted payload are programmable and precisely controlled.
Kinematic sensitivity of robot manipulators
NASA Technical Reports Server (NTRS)
Vuskovic, Marko I.
1989-01-01
Kinematic sensitivity vectors and matrices for open-loop, n degrees-of-freedom manipulators are derived. First-order sensitivity vectors are defined as partial derivatives of the manipulator's position and orientation with respect to its geometrical parameters. The four-parameter kinematic model is considered, as well as the five-parameter model in case of nominally parallel joint axes. Sensitivity vectors are expressed in terms of coordinate axes of manipulator frames. Second-order sensitivity vectors, the partial derivatives of first-order sensitivity vectors, are also considered. It is shown that second-order sensitivity vectors can be expressed as vector products of the first-order sensitivity vectors.
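The first-order sensitivity vectors defined above are partial derivatives of the manipulator's position with respect to its geometric parameters, and can be approximated numerically. The sketch below uses a hypothetical planar two-link arm (not the paper's four- or five-parameter kinematic model) and central differences with respect to the link lengths:

```python
import numpy as np

def fk(theta, lengths):
    """End-effector position of a planar 2-link arm (illustrative model)."""
    t1, t2 = theta
    l1, l2 = lengths
    return np.array([l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
                     l1 * np.sin(t1) + l2 * np.sin(t1 + t2)])

def sensitivity(theta, lengths, h=1e-6):
    """First-order sensitivity vectors dp/dl_i via central differences.

    Column i is the partial derivative of position w.r.t. link length i;
    analytically, dp/dl1 = (cos t1, sin t1) and dp/dl2 = (cos(t1+t2), sin(t1+t2)).
    """
    S = np.zeros((2, 2))
    for i in range(2):
        dp = np.zeros(2)
        dp[i] = h
        S[:, i] = (fk(theta, lengths + dp) - fk(theta, lengths - dp)) / (2 * h)
    return S
```

For this model, which is linear in the link lengths, the second-order sensitivities with respect to the lengths vanish; the paper's second-order vectors arise from derivatives with respect to the full parameter set.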
Jolivalt, C G; Lee, C A; Beiswenger, K K; Smith, J L; Orlov, M; Torrance, M A; Masliah, E
2008-11-15
We have evaluated the effect of peripheral insulin deficiency on brain insulin pathway activity in a mouse model of type 1 diabetes, the parallels with Alzheimer's disease (AD), and the effect of treatment with insulin. Nine weeks of insulin-deficient diabetes significantly impaired the learning capacity of mice, significantly reduced insulin-degrading enzyme protein expression, and significantly reduced phosphorylation of the insulin-receptor and AKT. Phosphorylation of glycogen synthase kinase-3 (GSK3) was also significantly decreased, indicating increased GSK3 activity. This evidence of reduced insulin signaling was associated with a concomitant increase in tau phosphorylation and amyloid beta protein levels. Changes in phosphorylation levels of insulin receptor, GSK3, and tau were not observed in the brain of db/db mice, a model of type 2 diabetes, after a similar duration (8 weeks) of diabetes. Treatment with insulin from onset of diabetes partially restored the phosphorylation of insulin receptor and of GSK3, partially reduced the level of phosphorylated tau in the brain, and partially improved learning ability in insulin-deficient diabetic mice. Our data indicate that mice with systemic insulin deficiency display evidence of reduced insulin signaling pathway activity in the brain that is associated with biochemical and behavioral features of AD and that it can be corrected by insulin treatment.
Enhanced Scattering of Diffuse Ions on Front of the Earth's Quasi-Parallel Bow Shock: a Case Study
NASA Astrophysics Data System (ADS)
Kis, A.; Matsukiyo, S.; Otsuka, F.; Hada, T.; Lemperger, I.; Dandouras, I. S.; Barta, V.; Facsko, G. I.
2017-12-01
In this analysis we present a case study of three energetic upstream ion events at the Earth's quasi-parallel bow shock based on multi-spacecraft data recorded by Cluster. The CIS-HIA instrument onboard Cluster provides partial energetic ion densities in 4 energy channels between 10 and 32 keV. The difference of the partial ion densities recorded by the individual spacecraft at various distances from the bow shock surface makes possible the determination of the spatial gradient of energetic ions. Using the gradient values we determined the spatial profile of the energetic ion partial densities as a function of distance from the bow shock, and we calculated the e-folding distance and the diffusion coefficient for each event and each ion energy range. Results show that in two cases the scattering of diffuse ions takes place in a normal way, as "by the book", and the e-folding distance and diffusion coefficient values are comparable with previous results. On the other hand, in the third case the e-folding distance and the diffusion coefficient values are significantly lower, which suggests that in this case the scattering process (and therefore also the diffusive shock acceleration, DSA, mechanism) is much more efficient. Our analysis provides an explanation for this "enhanced" scattering process recorded in the third case.
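Under the standard upstream picture of diffusive shock acceleration, the partial density falls off as n(x) ∝ exp(−x/L), so the e-folding distance L can be read off a log-linear fit and, with L = D/V for an assumed convection speed V, converted to a diffusion coefficient. A minimal sketch of that fit (the abstract does not give the exact fitting procedure, so this is an assumption about the method):

```python
import numpy as np

def e_folding_distance(x, n):
    """Fit n(x) = n0 * exp(-x / L) by linear regression in log space."""
    slope, _ = np.polyfit(x, np.log(n), 1)
    return -1.0 / slope

def diffusion_coefficient(L, v):
    """Upstream DSA steady state gives n ∝ exp(-x V/D), hence D = V * L."""
    return v * L
```

A smaller L (and hence smaller D) at fixed V is what the third event above shows: stronger scattering confines the diffuse ions closer to the shock.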
Multiview 3D sensing and analysis for high quality point cloud reconstruction
NASA Astrophysics Data System (ADS)
Satnik, Andrej; Izquierdo, Ebroul; Orjesek, Richard
2018-04-01
Multiview 3D reconstruction techniques enable digital reconstruction of 3D objects from the real world by fusing different viewpoints of the same object into a single 3D representation. This process is by no means trivial and the acquisition of high quality point cloud representations of dynamic 3D objects is still an open problem. In this paper, an approach for high fidelity 3D point cloud generation using low cost 3D sensing hardware is presented. The proposed approach runs in an efficient low-cost hardware setting based on several Kinect v2 scanners connected to a single PC. It performs autocalibration and runs in real-time exploiting an efficient composition of several filtering methods including Radius Outlier Removal (ROR), Weighted Median filter (WM) and Weighted Inter-Frame Average filtering (WIFA). The performance of the proposed method has been demonstrated through efficient acquisition of dense 3D point clouds of moving objects.
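Of the filters listed above, Radius Outlier Removal is the simplest to state: a point survives only if it has enough neighbors within a given radius. A brute-force sketch follows (a KD-tree would replace the pairwise distance matrix in any real-time setting like the one described):

```python
import numpy as np

def radius_outlier_removal(pts, radius, min_neighbors):
    """Drop points with fewer than `min_neighbors` within `radius`.

    pts: (N, 3) array of point coordinates. Brute force O(N^2); shown only
    to illustrate the filter's definition.
    """
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    counts = (d <= radius).sum(axis=1) - 1  # exclude the point itself
    return pts[counts >= min_neighbors]
```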
Smart measurement system for resistive (bridge) or capacitive sensors
NASA Astrophysics Data System (ADS)
Wang, Guijie; Meijer, Gerard C. M.
1998-07-01
A low-cost smart measurement system for resistive (bridge) and capacitive sensors is presented and demonstrated. The measurement system consists of three main parts: the sensor element, a universal transducer interface (UTI) and a microcontroller. The UTI is a sensor-signal-to-time converter, based on a period-modulated oscillator, which is equipped with front-ends for many types of resistive (bridge) and capacitive sensors, and which generates a microcontroller-compatible output signal. The microcontroller performs data acquisition of the output signals from the interface UTI, controls the working status of the UTI for a specified application and communicates with a personal computer. Continuous auto-calibration of the offset and the gain of the complete system is applied to eliminate many nonidealities. Experimental results show that the accuracy and resolution are 14 bits and 16 bits, respectively, for a measurement time of about 100 ms.
Multi-projector auto-calibration and placement optimization for non-planar surfaces
NASA Astrophysics Data System (ADS)
Li, Dong; Xie, Jinghui; Zhao, Lu; Zhou, Lijing; Weng, Dongdong
2015-10-01
Non-planar projection has been widely applied in virtual reality and digital entertainment and exhibitions because of its flexible layout and immersive display effects. Compared with planar projection, a non-planar projection is more difficult to achieve because projector calibration and image distortion correction are difficult processes. This paper uses a cylindrical screen as an example to present a new method for automatically calibrating a multi-projector system in a non-planar environment without using 3D reconstruction. This method corrects the geometric calibration error caused by the screen's manufactured imperfections, such as an undulating surface or a slant in the vertical plane. In addition, based on actual projection demand, this paper presents the overall performance evaluation criteria for the multi-projector system. According to these criteria, we determined the optimal placement for the projectors. This method also extends to surfaces that can be parameterized, such as spheres, ellipsoids, and paraboloids, and demonstrates a broad applicability.
Chapter 6: CPV Tracking and Trackers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luque-Heredia, Ignacio; Magalhaes, Pedro; Muller, Matthew
2016-04-15
This chapter explains the functional requirements of a concentrator photovoltaic (CPV) sun tracker. It derives the design specifications of a CPV tracker. The chapter presents a taxonomy of trackers describing the most common tracking architectures, based on the number of axes, their relative position, and the foundation and placing of tracking drives. It deals with the structural issues related to tracker design, mainly related to structural flexure and its impact on the system's acceptance angle. The chapter analyzes auto-calibrated sun tracking control by describing the state of the art and its development background. It explores sun tracking accuracy measurement with a practical example. The chapter discusses tracker manufacturing and tracker field works. It reviews a survey of different types of tracker designs obtained from different manufacturers. Finally, the chapter deals with IEC 62817, the technical standard developed for CPV sun trackers.
Large boron--epoxy filament-wound pressure vessels
NASA Technical Reports Server (NTRS)
Jensen, W. M.; Bailey, R. L.; Knoell, A. C.
1973-01-01
Advanced composite material used to fabricate pressure vessel is prepreg (partially cured) consisting of continuous, parallel boron filaments in epoxy resin matrix arranged to form tape. To fabricate chamber, tape is wound on form which must be removable after composite has been cured. Configuration of boron--epoxy composite pressure vessel was determined by computer program.
ERIC Educational Resources Information Center
Kittredge, Kevin W.; Marine, Susan S.; Taylor, Richard T.
2004-01-01
A molecule possessing other functional groups that could be hydrogenated is examined, and a variety of metal catalysts are evaluated under similar reaction conditions. Optimizing organic reactions is both time and labor intensive, and the use of a combinatorial parallel synthesis reactor proved to be a great time-saving device.
Carpet: Adaptive Mesh Refinement for the Cactus Framework
NASA Astrophysics Data System (ADS)
Schnetter, Erik; Hawley, Scott; Hawke, Ian
2016-11-01
Carpet is an adaptive mesh refinement and multi-patch driver for the Cactus Framework (ascl:1102.013). Cactus is a software framework for solving time-dependent partial differential equations on block-structured grids, and Carpet acts as driver layer providing adaptive mesh refinement, multi-patch capability, as well as parallelization and efficient I/O.
NASA Astrophysics Data System (ADS)
Kumari, Komal; Donzis, Diego
2017-11-01
Highly resolved computational simulations on massively parallel machines are critical in understanding the physics of a vast number of complex phenomena in nature governed by partial differential equations. Simulations at extreme levels of parallelism present many challenges, with communication between processing elements (PEs) being a major bottleneck. In order to fully exploit the computational power of exascale machines, one needs to devise numerical schemes that relax global synchronizations across PEs. Such asynchronous computation, however, has a degrading effect on the accuracy of standard numerical schemes. We have developed asynchrony-tolerant (AT) schemes that maintain their order of accuracy despite relaxed communications. We show, analytically and numerically, that these schemes retain their numerical properties with multi-step higher-order temporal Runge-Kutta schemes. We also show that for a range of optimized parameters, the computation time and error for AT schemes are less than for their synchronous counterparts. The stability of the AT schemes, which depends upon the history and random nature of the delays, is also discussed. Support from NSF is gratefully acknowledged.
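The communication being relaxed here is the per-step halo exchange of a distributed stencil code. Below is a minimal sketch of one explicit heat-equation step on a single PE's subdomain, where the halo values come from neighboring PEs and, in an asynchronous run, may be one or more steps stale; the actual AT schemes, which modify the stencil to compensate for the delay, are not reproduced here:

```python
import numpy as np

def heat_step(u, left_halo, right_halo, r=0.25):
    """One explicit step of u_t = u_xx on a PE's interior points.

    left_halo/right_halo are neighbor values that would arrive via
    communication; in an asynchronous scheme they may be stale, which is
    exactly the error source AT schemes are built to tolerate.
    """
    ext = np.concatenate(([left_halo], u, [right_halo]))
    return u + r * (ext[:-2] - 2.0 * u + ext[2:])
```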
Toward an automated parallel computing environment for geosciences
NASA Astrophysics Data System (ADS)
Zhang, Huai; Liu, Mian; Shi, Yaolin; Yuen, David A.; Yan, Zhenzhen; Liang, Guoping
2007-08-01
Software for geodynamic modeling has not kept up with the fast growing computing hardware and network resources. In the past decade supercomputing power has become available to most researchers in the form of affordable Beowulf clusters and other parallel computer platforms. However, to take full advantage of such computing power requires developing parallel algorithms and associated software, a task that is often too daunting for geoscience modelers whose main expertise is in geosciences. We introduce here an automated parallel computing environment built on open-source algorithms and libraries. Users interact with this computing environment by specifying the partial differential equations, solvers, and model-specific properties using an English-like modeling language in the input files. The system then automatically generates the finite element codes that can be run on distributed or shared memory parallel machines. This system is dynamic and flexible, allowing users to address different problems in geosciences. It is capable of providing web-based services, enabling users to generate source codes online. This unique feature will facilitate high-performance computing to be integrated with distributed data grids in the emerging cyber-infrastructures for geosciences. In this paper we discuss the principles of this automated modeling environment and provide examples to demonstrate its versatility.
The role of single immediate loading implant in long Class IV Kennedy mandibular partial denture.
Mohamed, Gehan F; El Sawy, Amal A
2012-10-01
The treatment of a long-span Kennedy class IV arch is considered a prosthodontic challenge. This study evaluated the integrity of the principal abutments in long Kennedy class IV cases clinically and radiographically when rehabilitated with a conventional metallic partial denture (control group) or a mandibular partial overdenture supported by a single immediately loaded implant in the symphyseal region (study group). Twelve male patients were randomly allotted to two equal groups. Patients in the first group received a removable metallic partial denture, whereas patients in the second group received a partial overdenture supported by a single immediately loaded implant in the symphyseal region. The partial denture design was the same in both groups. The long-cone paralleling technique and a transmission densitometer were used at the time of denture insertion and at 3, 6, and 12 months. Gingival index, bone loss, and optical density were measured for the principal abutments during the follow-up. A significant reduction in bone loss and density was detected in group II compared with group I. The gingival index showed no significant change (p-value < 0.05). A single symphyseal implant in a long-span class IV Kennedy arch can play a pivotal role in improving the integrity of the principal abutments and the alveolar bone support. © 2010 Wiley Periodicals, Inc.
Veleba, Jiri; Matoulek, Martin; Hill, Martin; Pelikanova, Terezie; Kahleova, Hana
2016-10-26
It has been shown that it is possible to modify macronutrient oxidation, physical fitness and resting energy expenditure (REE) by changes in diet composition. Furthermore, mitochondrial oxidation can be significantly increased by a diet with a low glycemic index. The purpose of our trial was to compare the effects of a vegetarian (V) and conventional diet (C) with the same caloric restriction (-500 kcal/day) on physical fitness and REE after 12 weeks of diet plus aerobic exercise in 74 patients with type 2 diabetes (T2D). An open, parallel, randomized study design was used. All meals were provided for the whole study duration. An individualized exercise program was prescribed to the participants and was conducted under supervision. Physical fitness was measured by spiroergometry, and indirect calorimetry was performed at the start and after 12 weeks. Repeated-measures ANOVA (analysis of variance) models with between-subject (group) and within-subject (time) factors and interactions were used to evaluate the relationships between continuous variables and factors. Maximal oxygen consumption (VO2max) increased by 12% in the vegetarian group (V) (F = 13.1, p < 0.001, partial η² = 0.171), whereas no significant change was observed in C (F = 0.7, p = 0.667; group × time F = 9.3, p = 0.004, partial η² = 0.209). Maximal performance (Watt max) increased by 21% in V (F = 8.3, p < 0.001, partial η² = 0.192), whereas it did not change in C (F = 1.0, p = 0.334; group × time F = 4.2, p = 0.048, partial η² = 0.116). Our results indicate that V leads more effectively to improvement in physical fitness than C after an aerobic exercise program.
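For readers reconstructing effect sizes, partial η² can be recovered from a reported F statistic given the degrees of freedom, via partial η² = F·df_effect / (F·df_effect + df_error). The degrees of freedom used below are illustrative assumptions, not values stated in the abstract:

```python
def partial_eta_squared(F, df_effect, df_error):
    """Convert a reported F statistic to partial eta squared.

    Uses the identity partial eta^2 = (F * df1) / (F * df1 + df2).
    The df values in the test are assumed for illustration only.
    """
    return (F * df_effect) / (F * df_effect + df_error)
```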
Tegeler, Charles H; Cook, Jared F; Tegeler, Catherine L; Hirsch, Joshua R; Shaltout, Hossam A; Simpson, Sean L; Fidali, Brian C; Gerdes, Lee; Lee, Sung W
2017-04-19
The objective of this pilot study was to explore the use of a closed-loop, allostatic, acoustic stimulation neurotechnology for individuals with self-reported symptoms of post-traumatic stress, as a potential means to impact symptomatology, temporal lobe high frequency asymmetry, heart rate variability (HRV), and baroreflex sensitivity (BRS). From a cohort of individuals participating in a naturalistic study to evaluate use of allostatic neurotechnology for diverse clinical conditions, a subset was identified who reported high scores on the Posttraumatic Stress Disorder Checklist (PCL). The intervention entailed a series of sessions wherein brain electrical activity was monitored noninvasively at high spectral resolutions, with software algorithms translating selected brain frequencies into acoustic stimuli (audible tones) that were delivered back to the user in real time, to support auto-calibration of neural oscillations. Participants completed symptom inventories before and after the intervention, and a subset underwent short-term blood pressure recordings for HRV and BRS. Changes in temporal lobe high frequency asymmetry were analyzed from baseline assessment through the first four sessions, and for the last four sessions. Nineteen individuals (mean age 47, 11 women) were enrolled, and the majority also reported symptom scores that exceeded inventory thresholds for depression. They undertook a median of 16 sessions over 16.5 days, and 18 completed the number of sessions recommended. After the intervention, 89% of the completers reported clinically significant decreases in post-traumatic stress symptoms, indicated by a change of at least 10 points on the PCL. At a group level, individuals with either rightward (n = 7) or leftward (n = 7) dominant baseline asymmetry in temporal lobe high frequency (23-36 Hz) activity demonstrated statistically significant reductions in their asymmetry scores over the course of their first four sessions. 
For 12 individuals who underwent short-term blood pressure recordings, there were statistically significant increases in HRV in the time domain and BRS (Sequence Up). There were no adverse events. Closed-loop, allostatic neurotechnology for auto-calibration of neural oscillations appears promising as an innovative therapeutic strategy for individuals with symptoms of post-traumatic stress. ClinicalTrials.gov #NCT02709369 , retrospectively registered on March 4, 2016.
An Artificial Neural Networks Method for Solving Partial Differential Equations
NASA Astrophysics Data System (ADS)
Alharbi, Abir
2010-09-01
While there already exists many analytical and numerical techniques for solving PDEs, this paper introduces an approach using artificial neural networks. The approach consists of a technique developed by combining the standard numerical method, finite-difference, with the Hopfield neural network. The method is denoted Hopfield-finite-difference (HFD). The architecture of the nets, energy function, updating equations, and algorithms are developed for the method. The HFD method has been used successfully to approximate the solution of classical PDEs, such as the Wave, Heat, Poisson and the Diffusion equations, and on a system of PDEs. The software Matlab is used to obtain the results in both tabular and graphical form. The results are similar in terms of accuracy to those obtained by standard numerical methods. In terms of speed, the parallel nature of the Hopfield nets methods makes them easier to implement on fast parallel computers while some numerical methods need extra effort for parallelization.
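The finite-difference half of the HFD method is the classical explicit stencil; the Hopfield net then minimizes an energy built from the same stencil equations. A sketch of just that finite-difference component for the heat equation follows (the Hopfield energy formulation itself is not reproduced here):

```python
import numpy as np

def heat_fd(u0, r, steps):
    """Explicit finite-difference solution of u_t = u_xx, fixed endpoints.

    r = dt/dx^2 must satisfy r <= 0.5 for stability. This is only the
    classical FD building block that HFD pairs with a Hopfield network.
    """
    u = u0.copy()
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u
```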
Tolerant (parallel) Programming
NASA Technical Reports Server (NTRS)
DiNucci, David C.; Bailey, David H. (Technical Monitor)
1997-01-01
In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.
Acoustooptic linear algebra processors - Architectures, algorithms, and applications
NASA Technical Reports Server (NTRS)
Casasent, D.
1984-01-01
Architectures, algorithms, and applications for systolic processors are described, with attention to the realization of parallel algorithms on various optical systolic array processors. Systolic processors for matrices with special structure and matrices of general structure are described, together with the realization of matrix-vector, matrix-matrix, and triple-matrix products on such architectures. Parallel algorithms for direct and indirect solutions to systems of linear algebraic equations and their implementation on optical systolic processors are detailed, with attention to the pipelining and flow of data and operations. Parallel algorithms and their optical realization for LU and QR matrix decomposition are specifically detailed; these represent the fundamental operations necessary in the implementation of least-squares, eigenvalue, and SVD solutions. Specific applications (e.g., the solution of partial differential equations, adaptive noise cancellation, and optimal control) are described to typify the use of matrix processors in modern advanced signal processing.
On the interrelation of multiplication and division in secondary school children.
Huber, Stefan; Fischer, Ursula; Moeller, Korbinian; Nuerk, Hans-Christoph
2013-01-01
Each division problem can be transformed into a multiplication problem and vice versa. Recent research has indicated strong developmental parallels between multiplication and division in primary school children. In this study, we were interested in (i) whether these developmental parallels persist into secondary school, (ii) whether similar developmental parallels can be observed for simple and complex problems, (iii) whether skill level modulates this relationship, and (iv) whether the correlations are specific and not driven by general cognitive or arithmetic abilities. Therefore, we assessed performance of 5th and 6th graders attending two secondary school types of the German educational system in simple and complex multiplication as well as division while controlling for non-verbal intelligence, short-term memory, and other arithmetic abilities. Accordingly, we collected data from students differing in skill levels due to either age (5th < 6th grade) or school type (general < intermediate secondary school). We observed moderate to strong bivariate and partial correlations between multiplication and division, with correlations being higher for simple tasks but nevertheless reliable for complex tasks. Moreover, the association between simple multiplication and division depended on students' skill levels as reflected by school types, but not by age. Partial correlations were higher for intermediate than for general secondary school children. In sum, these findings emphasize the importance of the inverse relationship between multiplication and division, which persists into later developmental stages. However, evidence for skill-related differences in the relationship between multiplication and division was restricted to differences between school types.
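Partial correlation as used above, correlating multiplication and division scores while controlling for covariates such as non-verbal intelligence and short-term memory, amounts to correlating regression residuals. A minimal sketch with synthetic data standing in for the (unavailable) study data:

```python
import numpy as np

def partial_corr(x, y, Z):
    """Correlation of x and y after regressing out the control columns Z.

    Z is an (n, k) matrix of covariates; an intercept is added internally.
    """
    Z1 = np.column_stack([np.ones(len(x)), Z])
    rx = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]
    ry = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]
```

When a shared covariate drives both scores, the raw correlation is high while the partial correlation shrinks toward the covariate-free association, which is the specificity check the study performs.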
NASA Astrophysics Data System (ADS)
Wendel, D. E.; Olson, D. K.; Hesse, M.; Karimabadi, H.; Daughton, W. S.
2013-12-01
We investigate the distribution of parallel electric fields and their relationship to the location and rate of magnetic reconnection in a large particle-in-cell simulation of 3D turbulent magnetic reconnection with open boundary conditions. The simulation's guide field geometry inhibits the formation of topological features such as separators and null points. Therefore, we derive the location of potential changes in magnetic connectivity by finding the field lines that experience a large relative change between their endpoints, i.e., the quasi-separatrix layer. We find a correspondence between the locus of changes in magnetic connectivity, or the quasi-separatrix layer, and the map of large gradients in the integrated parallel electric field (or quasi-potential). Furthermore, we compare the distribution of parallel electric fields along field lines with the reconnection rate. We find the reconnection rate is controlled by only the low-amplitude, zeroth- and first-order trends in the parallel electric field, while the contribution from high-amplitude parallel fluctuations, such as electron holes, is negligible. The results impact the determination of reconnection sites within models of 3D turbulent reconnection as well as the inference of reconnection rates from in situ spacecraft measurements. It is difficult through direct observation to isolate the locus of the reconnection parallel electric field amidst the large-amplitude fluctuations. However, we demonstrate that a positive slope of the partial sum of the parallel electric field along the field line, as a function of field line length, indicates where reconnection is occurring along the field line.
NASA Astrophysics Data System (ADS)
Drabik, Timothy J.; Lee, Sing H.
1986-11-01
The intrinsic parallelism characteristics of easily realizable optical SIMD arrays prompt their present consideration in the implementation of highly structured algorithms for the numerical solution of multidimensional partial differential equations and the computation of fast numerical transforms. Attention is given to a system, comprising several spatial light modulators (SLMs), an optical read/write memory, and a functional block, which performs simple, space-invariant shifts on images with sufficient flexibility to implement the fastest known methods for partial differential equations as well as a wide variety of numerical transforms in two or more dimensions. Either fixed or floating-point arithmetic may be used. A performance projection of more than 1 billion floating point operations/sec using SLMs with 1000 x 1000-resolution and operating at 1-MHz frame rates is made.
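The space-invariant whole-image shifts described above map naturally onto array code. As a hedged illustration (NumPy standing in for the optical SLM hardware; the grid size and iteration count are arbitrary choices, not taken from the abstract), a Jacobi sweep for Laplace's equation can be written entirely with image shifts:

```python
import numpy as np

def jacobi_step(u):
    """One Jacobi sweep for Laplace's equation, written purely as
    space-invariant shifts (np.roll) -- the operation an optical SIMD
    array applies to a whole image in parallel per frame."""
    return 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                   np.roll(u, 1, 1) + np.roll(u, -1, 1))

# Illustrative run: hold the top boundary at 1, iterate to steady state.
n = 32
u = np.zeros((n, n))
u[0, :] = 1.0
for _ in range(500):
    u = jacobi_step(u)
    # np.roll wraps around, so re-impose the boundary rows/columns
    # after each shifted update
    u[0, :] = 1.0
    u[-1, :] = 0.0
    u[:, 0] = 0.0
    u[:, -1] = 0.0
```

Each `np.roll` corresponds to one shift of the full image, so every grid point is updated simultaneously; this is the sense in which the shift-based architecture parallelizes the fastest known relaxation methods.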
Parallel algorithm for determining motion vectors in ice floe images by matching edge features
NASA Technical Reports Server (NTRS)
Manohar, M.; Ramapriyan, H. K.; Strong, J. P.
1988-01-01
A parallel algorithm is described to determine motion vectors of ice floes using time sequences of images of the Arctic ocean obtained from the Synthetic Aperture Radar (SAR) instrument flown on-board the SEASAT spacecraft. Researchers describe a parallel algorithm which is implemented on the MPP for locating corresponding objects based on their translationally and rotationally invariant features. The algorithm first approximates the edges in the images by polygons or sets of connected straight-line segments. Each such edge structure is then reduced to a seed point. Associated with each seed point are the descriptions (lengths, orientations and sequence numbers) of the lines constituting the corresponding edge structure. A parallel matching algorithm is used to match packed arrays of such descriptions to identify corresponding seed points in the two images. The matching algorithm is designed such that fragmentation and merging of ice floes are taken into account by accepting partial matches. The technique has been demonstrated to work on synthetic test patterns and real image pairs from SEASAT in times ranging from 0.5 to 0.7 seconds for 128 x 128 images.
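A serial sketch of the translation- and rotation-invariant matching idea can clarify the descriptor step (this is not the MPP implementation; the tolerances and the fractional scoring rule, which tolerates fragmentation by accepting partial matches, are illustrative assumptions):

```python
import numpy as np

def descriptor(polygon):
    """Per-segment (length, turning-angle) pairs for a closed polygonal
    edge structure: invariant under translation and rotation."""
    pts = np.asarray(polygon, float)
    d = np.diff(np.vstack([pts, pts[:1]]), axis=0)     # closed-loop edges
    lengths = np.hypot(d[:, 0], d[:, 1])
    ang = np.arctan2(d[:, 1], d[:, 0])
    turns = np.diff(np.concatenate([ang, ang[:1]]))     # exterior angles
    return lengths, turns

def partial_match_score(desc_a, desc_b, tol=1e-2):
    """Fraction of segments of A with a compatible segment in B; a
    partial score < 1 can still count as a match, accommodating
    fragmentation and merging of floes."""
    la, ta = desc_a
    lb, tb = desc_b
    hits = 0
    for L, T in zip(la, ta):
        close_len = np.abs(lb - L) < tol * (1 + L)
        close_turn = np.abs(np.angle(np.exp(1j * (tb - T)))) < 0.1
        if np.any(close_len & close_turn):
            hits += 1
    return hits / len(la)

# A unit square vs. the same square rotated and translated: full match.
sq = [(0, 0), (1, 0), (1, 1), (0, 1)]
c, s = np.cos(0.5), np.sin(0.5)
sq2 = [(c * x - s * y + 3, s * x + c * y - 2) for x, y in sq]
score = partial_match_score(descriptor(sq), descriptor(sq2))
```

The turning angles are compared modulo 2π (via the complex exponential) so that equivalent angle representations are not spuriously rejected.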
Domain decomposition methods for the parallel computation of reacting flows
NASA Technical Reports Server (NTRS)
Keyes, David E.
1988-01-01
Domain decomposition is a natural route to parallel computing for partial differential equation solvers. Subdomains of which the original domain of definition is comprised are assigned to independent processors at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it approximately 10-fold speedup on 16 processors.
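The subdomain/interface coordination described here can be illustrated with a minimal serial model: an alternating Schwarz iteration for a 1D Poisson problem with two overlapping subdomains. The discretization, overlap width, and sweep count below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def solve_poisson_schwarz(f, n=64, overlap=4, sweeps=50):
    """Alternating Schwarz solve of -u'' = f on [0,1], u(0)=u(1)=0.
    Each subdomain is solved exactly, using the current solution values
    at the artificial interfaces as Dirichlet data -- a serial model of
    the per-processor subdomain solves and interface coordination."""
    h = 1.0 / (n + 1)
    u = np.zeros(n + 2)                  # includes the two boundary points
    mid = n // 2
    doms = [(1, mid + overlap), (mid - overlap, n + 1)]
    for _ in range(sweeps):
        for lo, hi in doms:
            m = hi - lo
            # dense tridiagonal subdomain matrix (fine at this toy size)
            A = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
                 - np.diag(np.ones(m - 1), -1)) / h**2
            b = f[lo:hi].copy()
            b[0] += u[lo - 1] / h**2     # interface (or boundary) data
            b[-1] += u[hi] / h**2
            u[lo:hi] = np.linalg.solve(A, b)
    return u

n = 64
x = np.linspace(0, 1, n + 2)
f = np.pi**2 * np.sin(np.pi * x)         # exact solution: sin(pi * x)
u = solve_poisson_schwarz(f, n=n)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

In a parallel setting each tuple in `doms` would live on its own processor, with only the interface values exchanged between sweeps.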
Lu, Joann J.; Wang, Shili; Li, Guanbin; Wang, Wei; Pu, Qiaosheng; Liu, Shaorong
2012-01-01
In this report, we introduce a chip-capillary hybrid device to integrate capillary isoelectric focusing (CIEF) with parallel capillary sodium dodecyl sulfate – polyacrylamide gel electrophoresis (SDS-PAGE) or capillary gel electrophoresis (CGE) toward automating two-dimensional (2D) protein separations. The hybrid device consists of three chips that are butted together. The middle chip can be moved between two positions to re-route the fluidic paths, which enables the performance of CIEF and injection of proteins partially resolved by CIEF to CGE capillaries for parallel CGE separations in a continuous and automated fashion. Capillaries are attached to the other two chips to facilitate CIEF and CGE separations and to extend the effective lengths of CGE columns. Specifically, we illustrate the working principle of the hybrid device, develop protocols for producing and preparing the hybrid device, and demonstrate the feasibility of using this hybrid device for automated injection of CIEF-separated sample to parallel CGE for 2D protein separations. Potentials and problems associated with the hybrid device are also discussed. PMID:22830584
Parallel evolution of early and late feathering in turkey and chicken, same gene, different mutation
USDA-ARS?s Scientific Manuscript database
The sex-linked slow (SF) and fast (FF) feathering rate phenotype has been widely used in poultry breeding for autosexing at hatch. In chicken, the sex-linked K (SF) and k+ (FF) alleles are responsible for the feathering rate phenotype. The K allele is dominant and a partial duplication...
Factor Retention in Exploratory Factor Analysis: A Comparison of Alternative Methods.
ERIC Educational Resources Information Center
Mumford, Karen R.; Ferron, John M.; Hines, Constance V.; Hogarty, Kristine Y.; Kromrey, Jeffery D.
This study compared the effectiveness of 10 methods of determining the number of factors to retain in exploratory common factor analysis. The 10 methods included the Kaiser rule and a modified Kaiser criterion, 3 variations of parallel analysis, 4 regression-based variations of the scree procedure, and the minimum average partial procedure. The…
1987-11-24
of the assortment of manufactured parts for partial and complete frames, as well as abutments, support walls, and bridgehead construction...Uniform Series II Generation based on anticipated spans; and • Increased effectiveness of prefabrication for steel and masonry bridge construction...support structures and abutments. Parallel to and on an equal par with standard primary construction trades already cited, the scientific-technical
a Non-Overlapping Discretization Method for Partial Differential Equations
NASA Astrophysics Data System (ADS)
Rosas-Medina, A.; Herrera, I.
2013-05-01
Mathematical models of many systems of interest, including very important continuous systems of Engineering and Science, lead to a great variety of partial differential equations whose solution methods are based on the computational processing of large-scale algebraic systems. Furthermore, the incredible expansion experienced by the existing computational hardware and software has made problems of ever-increasing diversity and complexity, posed by engineering and scientific applications, amenable to effective treatment. The emergence of parallel computing prompted on the part of the computational-modeling community a continued and systematic effort with the purpose of harnessing it for the endeavor of solving boundary-value problems (BVPs) of partial differential equations. Very early after such an effort began, it was recognized that domain decomposition methods (DDM) were the most effective technique for applying parallel computing to the solution of partial differential equations, since such an approach drastically simplifies the coordination of the many processors that carry out the different tasks and also greatly reduces the requirements of information-transmission between them. Ideally, DDMs intend to produce algorithms that fulfill the DDM-paradigm; i.e., such that "the global solution is obtained by solving local problems defined separately in each subdomain of the coarse-mesh (or domain decomposition)". Stated in a simplistic manner, the basic idea is that, when the DDM-paradigm is satisfied, full parallelization can be achieved by assigning each subdomain to a different processor. When intensive DDM research began much attention was given to overlapping DDMs, but soon after attention shifted to non-overlapping DDMs. This evolution seems natural when the DDM-paradigm is taken into account: it is easier to uncouple the local problems when the subdomains are separated.
However, an important limitation of non-overlapping domain decompositions, as that concept is usually understood today, is that interface nodes are shared by two or more subdomains of the coarse-mesh and, therefore, even non-overlapping DDMs are actually overlapping when seen from the perspective of the nodes used in the discretization. In this talk we present and discuss a discretization method in which the nodes used are non-overlapping, in the sense that each one of them belongs to one and only one subdomain of the coarse-mesh.
Survey of the status of finite element methods for partial differential equations
NASA Technical Reports Server (NTRS)
Temam, Roger
1986-01-01
The finite element methods (FEM) have proved to be a powerful technique for the solution of boundary value problems associated with partial differential equations of either elliptic, parabolic, or hyperbolic type. They also have a good potential for utilization on parallel computers particularly in relation to the concept of domain decomposition. This report is intended as an introduction to the FEM for the nonspecialist. It contains a survey which is totally nonexhaustive, and it also contains as an illustration, a report on some new results concerning two specific applications, namely a free boundary fluid-structure interaction problem and the Euler equations for inviscid flows.
Methods and systems for monitoring a solid-liquid interface
Stoddard, Nathan G.; Clark, Roger F.; Kary, Tim
2010-07-20
Methods and systems are provided for monitoring a solid-liquid interface, including providing a vessel configured to contain an at least partially melted material; detecting radiation reflected from a surface of a liquid portion of the at least partially melted material that is parallel with the liquid surface; measuring a disturbance on the surface; calculating at least one frequency associated with the disturbance; and determining a thickness of the liquid portion based on the at least one frequency, wherein the thickness is calculated based on ##EQU00001##, where g is the gravitational constant, w is the horizontal width of the liquid, and f is the at least one frequency.
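The patent text elides the actual formula (##EQU00001## is an image placeholder). Purely as a hypothetical reconstruction, a shallow-liquid sloshing relation f = sqrt(g·d)/(2w), i.e. d = (2wf)²/g, is at least dimensionally consistent with the variables named (g, w, f) and can be sketched as:

```python
# HYPOTHETICAL: the patent's equation is not reproduced in the text, so
# the relation below is an assumed shallow-liquid sloshing model,
# f = sqrt(g * d) / (2 * w), solved for the liquid depth d.
G = 9.81  # gravitational acceleration, m/s^2

def liquid_thickness(w, f, g=G):
    """Estimated liquid depth d (m) from the horizontal width w (m) of
    the melt and the measured surface-disturbance frequency f (Hz)."""
    return (2.0 * w * f) ** 2 / g
```

For example, a 0.5 m wide melt pool oscillating at 1 Hz would give d = (2·0.5·1)²/9.81 ≈ 0.10 m under this assumed relation.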
Time-partitioning simulation models for calculation on parallel computers
NASA Technical Reports Server (NTRS)
Milner, Edward J.; Blech, Richard A.; Chima, Rodrick V.
1987-01-01
A technique allowing time-staggered solution of partial differential equations is presented in this report. Using this technique, called time-partitioning, simulation execution speedup is proportional to the number of processors used because all processors operate simultaneously, with each processor updating the solution grid at a different time point. The technique is limited by neither the number of processors available nor by the dimension of the solution grid. Time-partitioning was used to obtain the flow pattern through a cascade of airfoils, modeled by the Euler partial differential equations. An execution speedup factor of 1.77 was achieved using a two processor Cray X-MP/24 computer.
An object-oriented approach for parallel self adaptive mesh refinement on block structured grids
NASA Technical Reports Server (NTRS)
Lemke, Max; Witsch, Kristian; Quinlan, Daniel
1993-01-01
Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library to permit efficient development of architecture independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmer's work is reduced primarily to specifying the serial single-grid application; the parallel, self-adaptive mesh refinement code is then obtained with minimal additional effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), being implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.
NASA Astrophysics Data System (ADS)
Ma, Sangback
In this paper we compare various parallel preconditioners such as Point-SSOR (Symmetric Successive OverRelaxation), ILU(0) (Incomplete LU) in the Wavefront ordering, ILU(0) in the Multi-color ordering, Multi-Color Block SOR (Successive OverRelaxation), SPAI (SParse Approximate Inverse) and pARMS (Parallel Algebraic Recursive Multilevel Solver) for solving large sparse linear systems arising from two-dimensional PDEs (Partial Differential Equations) on structured grids. Point-SSOR is well-known, and ILU(0) is one of the most popular preconditioners, but it is inherently serial. ILU(0) in the Wavefront ordering maximizes the parallelism in the natural order, but the lengths of the wave-fronts are often nonuniform. ILU(0) in the Multi-color ordering is a simple way of achieving a parallelism of the order N, where N is the order of the matrix, but its convergence rate often deteriorates as compared to that of natural ordering. We have chosen the Multi-Color Block SOR preconditioner combined with a direct sparse matrix solver, since for the Laplacian matrix the SOR method is known to have a nondeteriorating rate of convergence when used with the Multi-Color ordering. By using the block version we expect to minimize interprocessor communications. SPAI computes the sparse approximate inverse directly by the least squares method. Finally, ARMS is a preconditioner recursively exploiting the concept of independent sets, and pARMS is the parallel version of ARMS. Experiments were conducted for the Finite Difference and Finite Element discretizations of five two-dimensional PDEs with large meshsizes up to a million on an IBM p595 machine with distributed memory. Our matrices are real positive, i.e., the real parts of their eigenvalues are positive. We have used GMRES(m) as our outer iterative method, so that the convergence of GMRES(m) for our test matrices is mathematically guaranteed. Interprocessor communications were done using MPI (Message Passing Interface) primitives.
The results show that in general ILU(0) in the Multi-Color ordering and ILU(0) in the Wavefront ordering outperform the other methods, but for symmetric and nearly symmetric 5-point matrices Multi-Color Block SOR gives the best performance, except for a few cases with a small number of processors.
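The ILU-preconditioned GMRES(m) setup evaluated above can be sketched with SciPy as a serial stand-in. Note the assumptions: `spilu` is SciPy's thresholded incomplete LU, used here to approximate ILU(0), and the 5-point Laplacian test matrix, grid size, and restart value are illustrative:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2D 5-point Laplacian on an n x n structured grid, a typical test matrix
n = 32
I = sp.identity(n, format='csr')
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()
b = np.ones(n * n)

# Incomplete LU factorization wrapped as a preconditioner for GMRES;
# drop_tol=0 and fill_factor=1 keep it close to ILU(0)
ilu = spla.spilu(A.tocsc(), drop_tol=0.0, fill_factor=1)
M = spla.LinearOperator(A.shape, ilu.solve)

# Restarted GMRES(30) with the ILU preconditioner applied from the left
x, info = spla.gmres(A, b, M=M, restart=30)
residual = np.linalg.norm(b - A @ x)
```

In the paper's parallel setting the orderings (wavefront, multi-color) exist precisely to expose parallelism in the otherwise serial triangular solves that `ilu.solve` performs here.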
The control of attentional target selection in a colour/colour conjunction task.
Berggren, Nick; Eimer, Martin
2016-11-01
To investigate the time course of attentional object selection processes in visual search tasks where targets are defined by a combination of features from the same dimension, we measured the N2pc component as an electrophysiological marker of attentional object selection during colour/colour conjunction search. In Experiment 1, participants searched for targets defined by a combination of two colours, while ignoring distractor objects that matched only one of these colours. Reliable N2pc components were triggered by targets and also by partially matching distractors, even when these distractors were accompanied by a target in the same display. The target N2pc was initially equal in size to the sum of the two N2pc components to the two different types of partially matching distractors and became superadditive from approximately 250 ms after search display onset. Experiment 2 demonstrated that the superadditivity of the target N2pc was not due to a selective disengagement of attention from task-irrelevant partially matching distractors. These results indicate that attention was initially deployed separately and in parallel to all target-matching colours, before attentional allocation processes became sensitive to the presence of both matching colours within the same object. They suggest that attention can be controlled simultaneously and independently by multiple features from the same dimension and that feature-guided attentional selection processes operate in parallel for different target-matching objects in the visual field.
Correction for Eddy Current-Induced Echo-Shifting Effect in Partial-Fourier Diffusion Tensor Imaging
Truong, Trong-Kha; Song, Allen W.; Chen, Nan-kuei
2015-01-01
In most diffusion tensor imaging (DTI) studies, images are acquired with either a partial-Fourier or a parallel partial-Fourier echo-planar imaging (EPI) sequence, in order to shorten the echo time and increase the signal-to-noise ratio (SNR). However, eddy currents induced by the diffusion-sensitizing gradients can often lead to a shift of the echo in k-space, resulting in three distinct types of artifacts in partial-Fourier DTI. Here, we present an improved DTI acquisition and reconstruction scheme, capable of generating high-quality and high-SNR DTI data without eddy current-induced artifacts. This new scheme consists of three components, respectively, addressing the three distinct types of artifacts. First, a k-space energy-anchored DTI sequence is designed to recover eddy current-induced signal loss (i.e., Type 1 artifact). Second, a multischeme partial-Fourier reconstruction is used to eliminate artificial signal elevation (i.e., Type 2 artifact) associated with the conventional partial-Fourier reconstruction. Third, a signal intensity correction is applied to remove artificial signal modulations due to eddy current-induced erroneous T2∗-weighting (i.e., Type 3 artifact). These systematic improvements will greatly increase the consistency and accuracy of DTI measurements, expanding the utility of DTI in translational applications where quantitative robustness is much needed. PMID:26413505
ERIC Educational Resources Information Center
Shattuck, James C.
2016-01-01
Organic chemistry is very challenging to many students pursuing science careers. Flipping the classroom presents an opportunity to significantly improve student success by increasing active learning, which research shows is highly beneficial to student learning. However, flipping an entire course may seem too daunting or an instructor may simply…
ERIC Educational Resources Information Center
Wilens, Timothy E.; Gault, Laura M.; Childress, Ann; Kratochvil, Christopher J.; Bensman, Lindsey; Hall, Coleen M.; Olson, Evelyn; Robieson, Weining Z.; Garimella, Tushar S.; Abi-Saab, Walid M.; Apostol, George; Saltarelli, Mario D.
2011-01-01
Objective: To assess the safety and efficacy of ABT-089, a novel alpha[subscript 4]beta[subscript 2] neuronal nicotinic receptor partial agonist, vs. placebo in children with attention-deficit/hyperactivity disorder (ADHD). Method: Two multicenter, randomized, double-blind, placebo-controlled, parallel-group studies of children 6 through 12 years…
Terry F. Strong; Ron M. Teclaw; John C. Zasada
1997-01-01
Silviculture modifies the environment. Past monitoring of silvicultural practices has been usually limited to vegetation responses, but parallel monitoring of the environment is needed to better predict these responses. In an example of monitoring temperatures in two studies of northern hardwood forests in Wisconsin, we found that different silvicultural practices...
ERIC Educational Resources Information Center
Goldberg, Gail Lynn
2014-01-01
This article provides a detailed account of a rubric revision process to address seven common problems to which rubrics are prone: lack of consistency and parallelism; the presence of "orphan" and "widow" words and phrases; redundancy in descriptors; inconsistency in the focus of qualifiers; limited routes to partial credit;…
Modulated heat pulse propagation and partial transport barriers in chaotic magnetic fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castillo-Negrete, Diego del; Blazevski, Daniel
2016-04-15
Direct numerical simulations of the time dependent parallel heat transport equation modeling heat pulses driven by power modulation in three-dimensional chaotic magnetic fields are presented. The numerical method is based on the Fourier formulation of a Lagrangian-Green's function method that provides an accurate and efficient technique for the solution of the parallel heat transport equation in the presence of harmonic power modulation. The numerical results presented provide conclusive evidence that even in the absence of magnetic flux surfaces, chaotic magnetic field configurations with intermediate levels of stochasticity exhibit transport barriers to modulated heat pulse propagation. In particular, high-order islands and remnants of destroyed flux surfaces (Cantori) act as partial barriers that slow down or even stop the propagation of heat waves at places where the magnetic field connection length exhibits a strong gradient. Results on modulated heat pulse propagation in fully stochastic fields and across magnetic islands are also presented. In qualitative agreement with recent experiments in the large helical device and DIII-D, it is shown that the elliptic (O) and hyperbolic (X) points of magnetic islands have a direct impact on the spatio-temporal dependence of the amplitude of modulated heat pulses.
Partially orthogonal resonators for magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Chacon-Caldera, Jorge; Malzacher, Matthias; Schad, Lothar R.
2017-02-01
Resonators for signal reception in magnetic resonance are traditionally planar to restrict coil material and avoid coil losses. Here, we present a novel concept to model resonators partially in a plane with maximum sensitivity to the magnetic resonance signal and partially in an orthogonal plane with reduced signal sensitivity. Thus, properties of individual elements in coil arrays can be modified to optimize physical planar space and increase the sensitivity of the overall array. A particular case of the concept is implemented to decrease H-field destructive interferences in planar concentric in-phase arrays. An increase in signal to noise ratio of approximately 20% was achieved with two resonators placed over approximately the same planar area compared to common approaches at a target depth of 10 cm at 3 Tesla. Improved parallel imaging performance of this configuration is also demonstrated. The concept can be further used to increase coil density.
Business model for sensor-based fall recognition systems.
Fachinger, Uwe; Schöpke, Birte
2014-01-01
AAL systems require, in addition to sophisticated and reliable technology, adequate business models for their launch and sustainable establishment. This paper presents the basic features of alternative business models for a sensor-based fall recognition system which was developed within the context of the "Lower Saxony Research Network Design of Environments for Ageing" (GAL). The models were developed in parallel with the R&D process, with successive adaptation and concretization. An overview of the basic features (i.e. nine partial models) of the business model is given and the mutually exclusive alternatives for each partial model are presented. The partial models are interconnected and the combinations of compatible alternatives lead to consistent alternative business models. However, in the current state, only initial concepts of alternative business models can be deduced. The next step will be to gather additional information to work out more detailed models.
A circuit-based photovoltaic module simulator with shadow and fault settings
NASA Astrophysics Data System (ADS)
Chao, Kuei-Hsiang; Chao, Yuan-Wei; Chen, Jyun-Ping
2016-03-01
The main purpose of this study was to develop a photovoltaic (PV) module simulator. The proposed simulator, using electrical parameters from solar cells, could simulate output characteristics not only under normal operating conditions but also under partial-shadow and fault conditions. Such a simulator offers the advantages of low cost, small size, and easy realization. Experiments have shown that results from the proposed PV simulator are very close to those from simulation software under partial-shadow conditions, with negligible differences during fault occurrence. Meanwhile, the PV module simulator, as developed, could be used on various types of series-parallel connections to form PV arrays, to conduct experiments on partial-shadow and fault events occurring in some of the modules. Such experiments are designed to explore the impact of shadow and fault conditions on the output characteristics of the system as a whole.
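The cell-level behavior such a simulator reproduces is commonly described by the standard single-diode model. A hedged sketch (the parameter values below are illustrative, not those of the proposed simulator):

```python
import numpy as np

def pv_current(v, iph=5.0, i0=1e-6, n_ideal=1.3, vt=0.02585, ns=36):
    """Single-diode model: module current at terminal voltage v, as
    photocurrent minus the diode term; ns series cells share v.
    All parameter values are illustrative assumptions."""
    return iph - i0 * (np.exp(v / (ns * n_ideal * vt)) - 1.0)

# Sweep the I-V curve and locate the maximum power point (MPP)
v = np.linspace(0.0, 21.0, 500)
i = np.clip(pv_current(v), 0.0, None)   # no reverse current in this sketch
p = v * i
v_mpp = v[np.argmax(p)]
```

Partial shading and faults are typically modeled on top of this by combining several such modules in series-parallel with mismatched `iph` values and bypass diodes, which produces the multi-peak power curves the abstract's experiments probe.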
Partial melting of amphibolite to trondhjemite at Nunatak Fiord, St. Elias Mountains, Alaska
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barker, F.; McLellan, E.L.; Plafker, G.
1985-01-01
At Nunatak Fiord, 55km NE of Yakutat, Alaska, a uniform layer of Cretaceous basalt ca. 3km thick was metamorphosed ca. 67 million years ago to amphibolite and locally partially melted to pegmatitic trondhjemite. Segregations of plagioclase-quartz+/-biotite rock, leucosomes in amphibolite matrix, range from stringers 5-10mm thick to blunt pods as thick as 6m. They tend to be parallel to foliation of the amphibolite, but crosscutting is common. The assemblage aluminous hornblende-plagioclase-epidote-sphene-quartz gave a hydrous melt that crystallized to plagioclase-quartz+/-biotite pegmatitic trondhjemite. 5-10% of the rock melted. Eu at 2x chondrites is positively anomalous. REE partitioning in melt/residuum was controlled largely by hornblende and sphene. Though the mineralogical variability precludes quantitative modeling, partial melting of garnet-free amphibolite to heavy-REE-depleted trondhjemitic melt is a viable process.
PetIGA: A framework for high-performance isogeometric analysis
Dalcin, Lisandro; Collier, Nathaniel; Vignal, Philippe; ...
2016-05-25
We present PetIGA, a code framework to approximate the solution of partial differential equations using isogeometric analysis. PetIGA can be used to assemble matrices and vectors which come from a Galerkin weak form, discretized with Non-Uniform Rational B-spline basis functions. We base our framework on PETSc, a high-performance library for the scalable solution of partial differential equations, which simplifies the development of large-scale scientific codes, provides a rich environment for prototyping, and separates parallelism from algorithm choice. We describe the implementation of PetIGA, and exemplify its use by solving a model nonlinear problem. To illustrate the robustness and flexibility of PetIGA, we solve some challenging nonlinear partial differential equations that include problems in both solid and fluid mechanics. Lastly, we show strong scaling results on up to 4096 cores, which confirm the suitability of PetIGA for large scale simulations.
On the interrelation of multiplication and division in secondary school children
Huber, Stefan; Fischer, Ursula; Moeller, Korbinian; Nuerk, Hans-Christoph
2013-01-01
Multiplication and division are conceptually inversely related: Each division problem can be transformed into a multiplication problem and vice versa. Recent research has indicated strong developmental parallels between multiplication and division in primary school children. In this study, we were interested in (i) whether these developmental parallels persist into secondary school, (ii) whether similar developmental parallels can be observed for simple and complex problems, (iii) whether skill level modulates this relationship, and (iv) whether the correlations are specific and not driven by general cognitive or arithmetic abilities. Therefore, we assessed performance of 5th and 6th graders attending two secondary school types of the German educational system in simple and complex multiplication as well as division while controlling for non-verbal intelligence, short-term memory, and other arithmetic abilities. Accordingly, we collected data from students differing in skill levels due to either age (5th < 6th grade) or school type (general < intermediate secondary school). We observed moderate to strong bivariate and partial correlations between multiplication and division with correlations being higher for simple tasks but nevertheless reliable for complex tasks. Moreover, the association between simple multiplication and division depended on students' skill levels as reflected by school types, but not by age. Partial correlations were higher for intermediate than for general secondary school children. In sum, these findings emphasize the importance of the inverse relationship between multiplication and division which persists into later developmental stages. However, evidence for skill-related differences in the relationship between multiplication and division was restricted to the differences for school types. PMID:24133476
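The partial-correlation analysis used here (correlating multiplication and division while controlling for, e.g., non-verbal intelligence and short-term memory) can be sketched with the residual method. The synthetic data below are purely illustrative, not the study's data:

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Correlation between x and y after regressing the covariates
    (each a 1-D array) out of both variables via least squares."""
    Z = np.column_stack([np.ones(len(x))] +
                        [np.asarray(c, float) for c in covariates])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# Synthetic scores: a shared general ability g plus a specific
# inverse-operation link; partialling out g should leave the link.
rng = np.random.default_rng(0)
g = rng.normal(size=500)             # stand-in for general ability
link = rng.normal(size=500)          # shared multiplication/division skill
mult = g + link + 0.5 * rng.normal(size=500)
div = g + link + 0.5 * rng.normal(size=500)
r_partial = partial_corr(mult, div, [g])
```

A reliably positive `r_partial` after controlling for the shared covariate is the pattern the study interprets as a specific multiplication-division association.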
Combined fuel and air staged power generation system
Rabovitser, Iosif K; Pratapas, John M; Boulanov, Dmitri
2014-05-27
A method and apparatus for generation of electric power employing fuel and air staging, in which a first stage gas turbine and a second stage partial oxidation gas turbine are operated in parallel. A first portion of fuel and oxidant is provided to the first stage gas turbine, which generates a first portion of electric power and a hot oxidant. A second portion of fuel and oxidant is provided to the second stage partial oxidation gas turbine, which generates a second portion of electric power and a hot syngas. The hot oxidant and the hot syngas are provided to a bottoming cycle employing a fuel-fired boiler, by which a third portion of electric power is generated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lichtner, Peter C.; Hammond, Glenn E.; Lu, Chuan
PFLOTRAN solves a system of generally nonlinear partial differential equations describing multi-phase, multicomponent and multiscale reactive flow and transport in porous materials. The code is designed to run on massively parallel computing architectures as well as workstations and laptops (e.g. Hammond et al., 2011). Parallelization is achieved through domain decomposition using the PETSc (Portable Extensible Toolkit for Scientific Computation) libraries for the parallelization framework (Balay et al., 1997). PFLOTRAN has been developed from the ground up for parallel scalability and has been run on up to 2^18 processor cores with problem sizes up to 2 billion degrees of freedom. Written in object oriented Fortran 90, the code requires the latest compilers compatible with Fortran 2003. At the time of this writing this requires gcc 4.7.x, Intel 12.1.x and PGI compilers. As a requirement of running problems with a large number of degrees of freedom, PFLOTRAN allows reading input data that is too large to fit into memory allotted to a single processor core. The current limitation to the problem size PFLOTRAN can handle is the limitation of the HDF5 file format used for parallel IO to 32 bit integers. Noting that 2^32 = 4,294,967,296, this gives an estimate of the maximum problem size that can be currently run with PFLOTRAN. Hopefully this limitation will be remedied in the near future.
NASA Astrophysics Data System (ADS)
Manthilake, G.; Matsuzaki, T.; Yoshino, T.; Yamazaki, D.; Yoneda, A.; Ito, E.; Katsura, T.
2008-12-01
So far, two hypotheses have been proposed to explain softening of the oceanic asthenosphere allowing smooth motion of the oceanic lithosphere. One is partial melting, and the other is hydrolytic weakening. Although the hydrolytic weakening hypothesis has recently been popular, Yoshino et al. [2006] suggested that it cannot explain the high and anisotropic conductivity at the top of the asthenosphere near the East Pacific Rise observed by Evans et al. [2005]. In order to explain the conductivity anisotropy of over one order of magnitude by the partial melting hypothesis, we measured the conductivity of a partially molten peridotite analogue under shear conditions. The measured samples were mixtures of forsterite and chemically simplified basalt. The samples were pre-synthesized using a piston-cylinder apparatus at 1600 K and 2 GPa to obtain textural equilibrium. The pre-synthesized samples were formed into disks 3 mm in diameter and 1 mm in thickness. Conductivity measurement was also carried out at 1600 K and 2 GPa, in a cubic-anvil apparatus with an additional uniaxial piston. The sample was sandwiched between two alumina pistons whose tops were cut at a 45° slope to generate shear. The shear strain rates of the sample were calibrated using a Mo strain marker in separate runs. The lower alumina piston was pushed at constant speed by a tungsten carbide piston embedded in a bottom anvil. Conductivity was measured simultaneously in the directions normal and parallel to the shear direction. We mainly studied the sample with 1.6 volume percent of basaltic component. The shear strain rates were 0, 1.2×10^-6 and 5.2×10^-6 /s. The sample without shear did not show conductivity anisotropy. In contrast, the samples with shear showed one order of magnitude higher conductivity in the direction parallel to the shear than normal to it. After the total strains reached 0.3, the magnitude of anisotropy became almost constant for both strain rates.
The magnitude is thus independent of the strain rate. This study demonstrates that the anisotropy at the top of the asthenosphere can be explained based on the partially molten asthenosphere sheared by the plate motion.
NASA Astrophysics Data System (ADS)
Toporkov, D. M.; Vialcev, G. B.
2017-10-01
The implementation of parallel branches is a commonly used method of realizing fractional-slot concentrated windings in electrical machines. If rotor eccentricity is present in a machine with parallel branches, equalizing currents can arise. This paper discusses an approach to simulating the equalizing currents in the parallel branches of an electrical machine winding, based on magnetic field calculation using the Finite Element Method. High model accuracy is provided by dynamically updating the inductances in the system of differential equations describing the machine, using pre-computed flux-linkage lookup tables. These tables express the flux linkage of each parallel branch as a function of the branch currents and the rotor position angle; self- and mutual inductances are then obtained from them by partial differentiation. Results calculated for a specimen electrical machine are presented. They show that an adverse combination of design decisions and rotor eccentricity leads to large equalizing currents and winding heating. Additional torque ripple also arises, with a harmonic content unlike that of cogging torque or of the ripple caused by rotor eccentricity alone.
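The table-based approach described above — inductances obtained as partial derivatives of pre-computed flux-linkage functions ψ(i, θ) — can be sketched numerically. The table below is a synthetic stand-in, not the paper's FEM data; the base inductance L0 and the cosine modulation with rotor angle are assumptions for illustration only:

```python
import numpy as np

# Hypothetical flux-linkage table psi(i, theta) on a regular grid; a
# linear-in-current model psi = L0 * i * (1 + 0.1*cos(theta)) stands in
# for the FEM-computed table described in the paper.
L0 = 0.02                                     # H, assumed base inductance
i_grid = np.linspace(0.0, 10.0, 101)          # branch current, A
th_grid = np.linspace(0.0, 2 * np.pi, 181)    # rotor position angle, rad
I, TH = np.meshgrid(i_grid, th_grid, indexing="ij")
psi = L0 * I * (1 + 0.1 * np.cos(TH))

# Incremental inductance L = d(psi)/d(i): numerical partial derivative
# along the current axis (central differences).
L_incr = np.gradient(psi, i_grid, axis=0)

# At theta = 0 the model gives L = 1.1 * L0 at every current sample.
```

Because the synthetic table is linear in current, the finite-difference derivative recovers the inductance exactly; on a real FEM table the same call gives the locally linearized (incremental) inductance.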
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Lesoinne, Michel
1993-01-01
Most of the recently proposed computational methods for solving partial differential equations on multiprocessor architectures stem from the 'divide and conquer' paradigm and involve some form of domain decomposition. For those methods which also require grids of points or patches of elements, it is often necessary to explicitly partition the underlying mesh, especially when working with local memory parallel processors. In this paper, a family of cost-effective algorithms for the automatic partitioning of arbitrary two- and three-dimensional finite element and finite difference meshes is presented and discussed in view of a domain decomposed solution procedure and parallel processing. The influence of the algorithmic aspects of a solution method (implicit/explicit computations), and the architectural specifics of a multiprocessor (SIMD/MIMD, startup/transmission time), on the design of a mesh partitioning algorithm are discussed. The impact of the partitioning strategy on load balancing, operation count, operator conditioning, rate of convergence and processor mapping is also addressed. Finally, the proposed mesh decomposition algorithms are demonstrated with realistic examples of finite element, finite volume, and finite difference meshes associated with the parallel solution of solid and fluid mechanics problems on the iPSC/2 and iPSC/860 multiprocessors.
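As one concrete instance of the automatic mesh-partitioning problem discussed above, recursive coordinate bisection splits a point set along its longest axis at the median and recurses until the desired number of balanced parts remains. This is a generic textbook strategy sketched for illustration, not the specific family of algorithms proposed in the paper:

```python
import numpy as np

def rcb_partition(points, n_parts):
    """Recursive coordinate bisection: repeatedly split along the longest
    axis at the median until n_parts (a power of two) parts remain.
    Returns a part id for each point."""
    ids = np.zeros(len(points), dtype=int)

    def split(idx, lo, hi):
        if hi - lo == 1:
            ids[idx] = lo            # leaf: assign final part id
            return
        pts = points[idx]
        # Pick the axis with the largest spatial extent.
        axis = np.argmax(pts.max(axis=0) - pts.min(axis=0))
        order = np.argsort(pts[:, axis])
        half = len(idx) // 2
        mid = (lo + hi) // 2
        split(idx[order[:half]], lo, mid)
        split(idx[order[half:]], mid, hi)

    split(np.arange(len(points)), 0, n_parts)
    return ids

rng = np.random.default_rng(1)
pts = rng.random((1000, 2))          # stand-in for 2D mesh vertex coordinates
part = rcb_partition(pts, 4)
# Median splits guarantee load balance: each of the 4 parts gets 250 points.
```

Median splitting directly addresses the load-balancing criterion the paper raises; minimizing inter-partition communication (edge cut) requires connectivity-aware methods beyond this sketch.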
Jali - Unstructured Mesh Infrastructure for Multi-Physics Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garimella, Rao V; Berndt, Markus; Coon, Ethan
2017-04-13
Jali is a parallel unstructured mesh infrastructure library designed for use by multi-physics simulations. It supports 2D and 3D arbitrary polyhedral meshes distributed over hundreds to thousands of nodes. Jali can read and write Exodus II meshes along with fields and sets on the mesh; support for other formats is partially implemented or planned. Jali is built on MSTK (https://github.com/MeshToolkit/MSTK), an open source general purpose unstructured mesh infrastructure library from Los Alamos National Laboratory. While it has been made to work with other mesh frameworks such as MOAB and STKmesh in the past, support for maintaining the interface to these frameworks has been suspended for now. Jali supports distributed as well as on-node parallelism. Support of on-node parallelism is through direct use of the mesh in multi-threaded constructs or through the use of "tiles", which are submeshes or sub-partitions of a partition destined for a compute node.
Gyrokinetic Magnetohydrodynamics and the Associated Equilibrium
NASA Astrophysics Data System (ADS)
Lee, W. W.; Hudson, S. R.; Ma, C. H.
2017-10-01
A proposed scheme for the calculation of gyrokinetic MHD and its associated equilibrium is discussed in relation to a recent paper on the subject. The scheme is based on the time-dependent gyrokinetic vorticity equation and parallel Ohm's law, as well as the associated gyrokinetic Ampere's law. This set of equations, in terms of the electrostatic potential, ϕ, and the vector potential, A, supports both spatially varying perpendicular and parallel pressure gradients and their associated currents. The MHD equilibrium can be reached when ϕ → 0 and A becomes constant in time, which, in turn, gives ∇·(J∥ + J⊥) = 0 and the associated magnetic islands. Examples in simple cylindrical geometry will be given. The present work is partially supported by US DoE Grant DE-AC02-09CH11466.
Mundt, Torsten; Al Jaghsi, Ahmad; Schwahn, Bernd; Hilgert, Janina; Lucas, Christian; Biffar, Reiner; Schwahn, Christian; Heinemann, Friedhelm
2016-07-30
Acceptable short-term survival rates (>90 %) of mini-implants (diameter < 3.0 mm) are only documented for mandibular overdentures. Sound data for mini-implants as strategic abutments for better retention of partial removable dental prostheses (PRDP) are not available. The purpose of this study is to test the hypothesis that immediately loaded mini-implants show more bone loss and less success than strategic mini-implants with delayed loading. In this four-center (one university hospital, three dental practices in Germany), parallel-group, controlled clinical trial, which is cluster randomized on patient level, a total of 80 partially edentulous patients with an unfavourable number and distribution of remaining abutment teeth in at least one jaw will receive supplementary mini-implants to stabilize their PRDP. The mini-implants are either loaded immediately after implant placement (test group) or loading is delayed by four months (control group). Follow-up of the patients will be performed for 36 months. The primary outcome is the radiographic bone level change at the implants. The secondary outcome is implant success as a composite variable. Tertiary outcomes include clinical, subjective (quality of life, satisfaction, chewing ability) and dental or technical complications. Strategic implants under an existing PRDP are only documented for standard-diameter implants. Mini-implants could be a minimally invasive and low-cost solution for this treatment modality. The trial is registered at Deutsches Register Klinischer Studien (German register of clinical trials) under DRKS-ID: DRKS00007589 ( www.germanctr.de ) on January 13th, 2015.
Evaluation of “Autotune” calibration against manual calibration of building energy models
Chaudhary, Gaurav; New, Joshua; Sanyal, Jibonananda; ...
2016-08-26
Our paper demonstrates the application of Autotune, a methodology aimed at automatically producing calibrated building energy models using measured data, in two case studies. In the first case, a building model is de-tuned by deliberately injecting faults into more than 60 parameters. This model was then calibrated using Autotune and its accuracy with respect to the original model was evaluated in terms of the industry-standard normalized mean bias error and coefficient of variation of root mean squared error metrics set forth in ASHRAE Guideline 14. In addition to whole-building energy consumption, outputs including lighting, plug load profiles, HVAC energy consumption, zone temperatures, and other variables were analyzed. In the second case, Autotune calibration is compared directly to experts' manual calibration of an emulated-occupancy, full-size residential building with comparable calibration results in much less time. Lastly, our paper concludes with a discussion of the key strengths and weaknesses of auto-calibration approaches.
Multidimensional bioseparation with modular microfluidics
Chirica, Gabriela S.; Renzi, Ronald F.
2013-08-27
A multidimensional chemical separation and analysis system is described including a prototyping platform and modular microfluidic components capable of rapid and convenient assembly, alteration and disassembly of numerous candidate separation systems. Partial or total computer control of the separation system is possible. Single or multiple alternative processing trains can be tested, optimized and/or run in parallel. Examples related to the separation and analysis of human bodily fluids are given.
Channel plate for DNA sequencing
Douthart, R.J.; Crowell, S.L.
1998-01-13
This invention is a channel plate that facilitates data compaction in DNA sequencing. The channel plate has a length, a width and a thickness, and further has a plurality of channels that are parallel. Each channel has a depth partially through the thickness of the channel plate. Additionally an interface edge permits electrical communication across an interface through a buffer to a deposition membrane surface. 15 figs.
PETSc Users Manual Revision 3.7
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, Satish; Abhyankar, S.; Adams, M.
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication.
PETSc Users Manual Revision 3.8
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.; Abhyankar, S.; Adams, M.
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication.
Veleba, Jiri; Matoulek, Martin; Hill, Martin; Pelikanova, Terezie; Kahleova, Hana
2016-01-01
It has been shown that it is possible to modify macronutrient oxidation, physical fitness and resting energy expenditure (REE) by changes in diet composition. Furthermore, mitochondrial oxidation can be significantly increased by a diet with a low glycemic index. The purpose of our trial was to compare the effects of a vegetarian (V) and a conventional diet (C) with the same caloric restriction (−500 kcal/day) on physical fitness and REE after 12 weeks of diet plus aerobic exercise in 74 patients with type 2 diabetes (T2D). An open, parallel, randomized study design was used. All meals were provided for the whole study duration. An individualized exercise program was prescribed to the participants and was conducted under supervision. Physical fitness was measured by spiroergometry, and indirect calorimetry was performed at the start and after 12 weeks. Repeated-measures ANOVA (analysis of variance) models with between-subject (group) and within-subject (time) factors and interactions were used for evaluation of the relationships between continuous variables and factors. Maximal oxygen consumption (VO2max) increased by 12% in the vegetarian group (V) (F = 13.1, p < 0.001, partial η2 = 0.171), whereas no significant change was observed in C (F = 0.7, p = 0.667; group × time F = 9.3, p = 0.004, partial η2 = 0.209). Maximal performance (Watt max) increased by 21% in V (F = 8.3, p < 0.001, partial η2 = 0.192), whereas it did not change in C (F = 1.0, p = 0.334; group × time F = 4.2, p = 0.048, partial η2 = 0.116). Our results indicate that V leads more effectively to improvement in physical fitness than C after an aerobic exercise program. PMID:27792174
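The partial η² effect sizes quoted in this abstract follow directly from an F statistic and its degrees of freedom via η_p² = (F · df_effect) / (F · df_effect + df_error). A one-line sketch; the df_error value below is an assumed illustration, not a figure taken from the paper:

```python
def partial_eta_squared(F, df_effect, df_error):
    """Partial eta-squared recovered from an F statistic:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (F * df_effect) / (F * df_effect + df_error)

# Illustrative call using the reported F = 13.1 with assumed df values.
eta = partial_eta_squared(F=13.1, df_effect=1, df_error=64)
```

With df_effect = 1 and df_error near the study's sample size, the formula lands close to the reported η_p² ≈ 0.17, which is a useful sanity check when reading such tables.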
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, S.A.; Shadid, J.N.; Tuminaro, R.S.
1995-10-01
Aztec is an iterative library that greatly simplifies the parallelization process when solving the linear systems of equations Ax = b where A is a user supplied n x n sparse matrix, b is a user supplied vector of length n and x is a vector of length n to be computed. Aztec is intended as a software tool for users who want to avoid cumbersome parallel programming details but who have large sparse linear systems which require an efficiently utilized parallel processing system. A collection of data transformation tools are provided that allow for easy creation of distributed sparse unstructured matrices for parallel solution. Once the distributed matrix is created, computation can be performed on any of the parallel machines running Aztec: nCUBE 2, IBM SP2 and Intel Paragon, MPI platforms as well as standard serial and vector platforms. Aztec includes a number of Krylov iterative methods such as conjugate gradient (CG), generalized minimum residual (GMRES) and stabilized biconjugate gradient (BICGSTAB) to solve systems of equations. These Krylov methods are used in conjunction with various preconditioners such as polynomial or domain decomposition methods using LU or incomplete LU factorizations within subdomains. Although the matrix A can be general, the package has been designed for matrices arising from the approximation of partial differential equations (PDEs). In particular, the Aztec package is oriented toward systems arising from PDE applications.
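Aztec itself is a parallel C library, but the workflow it describes — a Krylov method (here GMRES) with an incomplete-LU preconditioner applied to a PDE-derived sparse system — can be sketched serially with SciPy. This is an analogous illustration, not Aztec's API:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2D Poisson matrix (5-point stencil): a typical PDE-derived sparse system.
n = 30
I = sp.identity(n)
T = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T)
     + sp.kron(sp.diags([-1, -1], [-1, 1], shape=(n, n)), I)).tocsc()
b = np.ones(n * n)

# Incomplete-LU factorization used as a preconditioner via a LinearOperator.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, ilu.solve)

# Preconditioned GMRES; info == 0 signals convergence.
x, info = spla.gmres(A, b, M=M)
```

The ILU preconditioner plays the role of Aztec's subdomain incomplete-LU factorizations; in Aztec the same combination runs with the matrix distributed across MPI ranks.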
Parallel Computation of Flow in Heterogeneous Media Modelled by Mixed Finite Elements
NASA Astrophysics Data System (ADS)
Cliffe, K. A.; Graham, I. G.; Scheichl, R.; Stals, L.
2000-11-01
In this paper we describe a fast parallel method for solving highly ill-conditioned saddle-point systems arising from mixed finite element simulations of stochastic partial differential equations (PDEs) modelling flow in heterogeneous media. Each realisation of these stochastic PDEs requires the solution of the linear first-order velocity-pressure system comprising Darcy's law coupled with an incompressibility constraint. The chief difficulty is that the permeability may be highly variable, especially when the statistical model has a large variance and a small correlation length. For reasonable accuracy, the discretisation has to be extremely fine. We solve these problems by first reducing the saddle-point formulation to a symmetric positive definite (SPD) problem using a suitable basis for the space of divergence-free velocities. The reduced problem is solved using parallel conjugate gradients preconditioned with an algebraically determined additive Schwarz domain decomposition preconditioner. The result is a solver which exhibits a good degree of robustness with respect to the mesh size as well as to the variance and to physically relevant values of the correlation length of the underlying permeability field. Numerical experiments exhibit almost optimal levels of parallel efficiency. The domain decomposition solver (DOUG, http://www.maths.bath.ac.uk/~parsoft) used here not only is applicable to this problem but can be used to solve general unstructured finite element systems on a wide range of parallel architectures.
NASA Astrophysics Data System (ADS)
Levine, J. S. F.; Mosher, S.
2017-12-01
Older orogenic belts that now expose the middle and lower crust record interaction between partial melting, magmatism, and deformation. A field- and microstructural-based case study from the Wet Mountains of central Colorado, an exhumed section of Proterozoic rock, shows structures associated with anatexis and magmatism, from the grain- to the kilometer-scale, that indicate the interconnection between deformation, partial melting, and magmatism, and allow reconstructions of the processes occurring in hot active orogens. Metamorphic grade, along with the degree of deformation, partial melting, and magmatism, increases from northwest to southeast. Deformation synchronous with this high-grade metamorphic event is localized into areas with greater quantities of former melt, and preferential melting occurs within high-strain locations. In the less deformed northwest, partial melting occurs dominantly via muscovite-dehydration melting, with a low abundance of partial melting and an absence of granitic magmatism. The central Wet Mountains are characterized by biotite dehydration melting, abundant former melt and foliation-parallel inferred melt channels along grain boundaries, and the presence of a nearby granitic pluton. Rocks in the southern portion of the Wet Mountains are characterized by partial melting via both biotite dehydration and granitic wet melting, with widespread partial melting as evidenced by well-preserved former melt microstructures and evidence for back reaction between melt and the host rocks. The southern Wet Mountains have more intense deformation and more widespread plutonism than the other locations, as well as two generations of dikes and sills. Recognition of textures and fabrics associated with partial melting in older orogens is paramount for interpreting the complex interplay of processes occurring in the cores of orogenic systems.
A cost-benefit model comparing the California Milk Cell Test and Milk Electrical Resistance Test.
Petzer, Inge-Marie; Karzis, Joanne; Meyer, Isabel A; van der Schans, Theodorus J
2013-04-24
The indirect effects of mastitis treatment are often overlooked in cost-benefit analyses, but it may be beneficial for the dairy industry to consider them. The cost of mastitis treatment may increase when the duration of intra-mammary infections is prolonged due to misdiagnosis of host-adapted mastitis. Laboratory diagnosis of mastitis can be costly and time consuming, therefore cow-side tests such as the California Milk Cell Test (CMCT) and Milk Electrical Resistance (MER) need to be utilised to their full potential. The aim of this study was to determine the relative benefit of using these two tests separately and in parallel. This was done using a partial-budget analysis and a cost-benefit model to estimate the benefits and costs of each respective test and the parallel combination thereof. Quarter milk samples (n = 1860) were taken from eight different dairy herds in South Africa. Milk samples were evaluated by means of the CMCT, hand-held MER meter and cyto-microbiological laboratory analysis. After determining the most appropriate cut-off points for the two cow-side tests, the sensitivity and specificity of the CMCT (Se = 1.00, Sp = 0.66), MER (Se = 0.92, Sp = 0.62) and the tests done in parallel (Se = 1.00, Sp = 0.87) were calculated. The input data that were used for the partial-budget analysis and in the cost-benefit model were based on South African figures at the time of the study, and on literature. The total estimated financial benefit of correct diagnosis of host-adapted mastitis per cow for the CMCT, MER and the tests done in parallel was R898.73, R518.70 and R1064.67 respectively. This involved taking the expected benefit of a correct test result per cow, the expected cost of an error per cow and the cost of the test into account. The CMCT was shown to be 11% more beneficial than the MER test, whilst using the tests in parallel was shown to be the most beneficial method for evaluating the mastitis-control programme.
Therefore, it is recommended that the combined tests should be used strategically in practice to monitor udder health and promote a pro-active udder health approach when dealing with host-adapted pathogens.
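For context on combining two diagnostic tests, the textbook formulas under a conditional-independence assumption are: a "parallel" rule (positive if either test is positive) raises sensitivity, while a "series" rule (positive only if both are positive) raises specificity. Note that the study's combined Se/Sp figures are empirical, so these idealized formulas need not reproduce them:

```python
def combine_parallel(se1, sp1, se2, sp2):
    """'Parallel' rule: combined result is positive if EITHER test is
    positive (assumes the tests err independently given disease status)."""
    se = 1 - (1 - se1) * (1 - se2)   # misses only if both tests miss
    sp = sp1 * sp2                   # a negative requires both to be negative
    return se, sp

def combine_series(se1, sp1, se2, sp2):
    """'Series' rule: combined result is positive only if BOTH tests are
    positive (same independence assumption)."""
    se = se1 * se2
    sp = 1 - (1 - sp1) * (1 - sp2)
    return se, sp

# Single-test values reported in the abstract: CMCT (1.00, 0.66), MER (0.92, 0.62).
se_p, sp_p = combine_parallel(1.00, 0.66, 0.92, 0.62)
se_s, sp_s = combine_series(1.00, 0.66, 0.92, 0.62)
```

Under independence, the either-positive rule keeps Se = 1.00 at the cost of specificity, while the both-positive rule pushes Sp to about 0.87; correlated test errors, as in real herd data, shift these numbers.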
NASA Technical Reports Server (NTRS)
1979-01-01
The preliminary design for a prototype small (20 kWe) solar thermal electric generating unit was completed, consisting of several subsystems. The concentrator and the receiver collect solar energy and a thermal buffer storage with a transport system is used to provide a partially smoothed heat input to the Stirling engine. A fossil-fuel combustor is included in the receiver designs to permit operation with partial or no solar insolation (hybrid). The engine converts the heat input into mechanical action that powers a generator. To obtain electric power on a large scale, multiple solar modules will be required to operate in parallel. The small solar electric power plant used as a baseline design will provide electricity at remote sites and small communities.
Transformations of C2-C4 alcohols on the surface of a copper catalyst
NASA Astrophysics Data System (ADS)
Magaeva, A. A.; Lyamina, G. V.; Sudakova, N. N.; Shilyaeva, L. P.; Vodyankina, O. V.
2007-10-01
The interaction of C2-C4 monohydric alcohols with the surface of a copper catalyst preliminarily oxidized under various conditions was studied by the temperature-programmed reaction method to determine the detailed mechanism of partial oxidation. The conditions of oxygen preadsorption on the surface of copper for the preparation of the desired products were determined. The selective formation of carbonyl compounds was shown to occur at the boundary between reduced and oxidized copper surface regions. The role played by Cu2O was the deep oxidation of alcohols to CO2. Alcohols with branched hydrocarbon structures underwent parallel partial oxidation and dehydrogenation, which was related to the high stability of intermediate keto-type compounds.
NASA Technical Reports Server (NTRS)
Lagowski, J.; Gatos, H. C.; Dabkowski, F. P.
1985-01-01
A novel partially confined configuration is proposed for the crystal growth of semiconductors from the melt, including those with volatile constituents. A triangular prism is employed to contain the growth melt. Due to surface tension, the melt will acquire a cylindrical-like shape and thus contact the prism along three parallel lines. The three empty spaces between the cylindrical melt and the edges of the prism will accommodate the expansion of the solidifying semiconductor, and in the case of semiconductor compounds with a volatile constituent, will permit the presence of the desired vapor phase in contact with the melt for controlling the melt stoichiometry. Theoretical and experimental evidence in support of this new type of confinement is presented.
Extraction electrode geometry for a calutron
Veach, A.M.; Bell, W.A. Jr.
1975-09-23
This patent relates to an improved geometry for the extraction electrode and the ground electrode utilized in the operation of a calutron. The improved electrodes are constructed in a partial-picture-frame fashion with the slits of both electrodes formed by two tungsten elongated rods. Additional parallel spaced-apart rods in each electrode are used to establish equipotential surfaces over the rest of the front of the ion source. (auth)
2014-12-01
normal (S1) and parallel (S2) strain rates squared. U and V are the zonal and meridional velocities and the x and y subscripts indicate partial... between developing and non-developing tropical disturbances appears to lie with the kinematic flow boundary structure and thermodynamic properties hypothesized in the marsupial paradigm
VINE: A Variational Inference-Based Bayesian Neural Network Engine
2018-01-01
networks are trained using the same dataset and hyperparameter settings as discussed. Table 1: Performance evaluation of the proposed transfer learning... multiplication/addition/subtraction. These operations can be implemented using nested loops in which various iterations of a loop are independent of... each other. This introduces an opportunity for optimization where a loop may be unrolled fully or partially to increase parallelism at the cost of
FPGA implementation of low complexity LDPC iterative decoder
NASA Astrophysics Data System (ADS)
Verma, Shivani; Sharma, Sanjay
2016-07-01
Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained importance due to their capacity-approaching property and excellent performance in noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message passing algorithm and a partially parallel decoder architecture. The simplified message passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance: it greatly reduces the routing and check-node complexity of the decoder. The partially parallel decoder architecture offers high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps and a maximum of 18 decoding iterations. The article presents an implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on the Xilinx XC3D3400A device from the Spartan-3A DSP family.
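The min-sum approximation mentioned above simplifies the BP check-node update: the message sent back to each connected variable node is the product of the signs of the other incoming log-likelihood ratios times the minimum of their magnitudes. A sketch of that single update (one check node; the full decoder iterates this with variable-node updates):

```python
import numpy as np

def min_sum_check_update(llr_in):
    """Min-sum check-node update. For each edge j, the outgoing message
    uses the sign product and minimum magnitude of the OTHER incoming
    LLRs (extrinsic information only)."""
    llr_in = np.asarray(llr_in, dtype=float)
    out = np.empty_like(llr_in)
    for j in range(len(llr_in)):
        others = np.delete(llr_in, j)
        out[j] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

msgs = min_sum_check_update([2.0, -3.0, 1.5])
# msgs -> [-1.5, 1.5, -2.0]
```

Replacing BP's tanh-product with this sign/min rule is exactly what makes hardware check nodes cheap; partially parallel architectures then time-multiplex groups of such nodes over shared hardware.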
Accelerated Slice Encoding for Metal Artifact Correction
Hargreaves, Brian A.; Chen, Weitian; Lu, Wenmiao; Alley, Marcus T.; Gold, Garry E.; Brau, Anja C. S.; Pauly, John M.; Pauly, Kim Butts
2010-01-01
Purpose To demonstrate accelerated imaging with artifact reduction near metallic implants and different contrast mechanisms. Materials and Methods Slice-encoding for metal artifact correction (SEMAC) is a modified spin echo sequence that uses view-angle tilting and slice-direction phase encoding to correct both in-plane and through-plane artifacts. Standard spin echo trains and short-TI inversion recovery (STIR) allow efficient PD-weighted imaging with optional fat suppression. A completely linear reconstruction allows incorporation of parallel imaging and partial Fourier imaging. The SNR effects of all reconstructions were quantified in one subject. 10 subjects with different metallic implants were scanned using SEMAC protocols, all with scan times below 11 minutes, as well as with standard spin echo methods. Results The SNR using standard acceleration techniques is unaffected by the linear SEMAC reconstruction. In all cases with implants, accelerated SEMAC significantly reduced artifacts compared with standard imaging techniques, with no additional artifacts from acceleration techniques. The use of different contrast mechanisms allowed differentiation of fluid from other structures in several subjects. Conclusion SEMAC imaging can be combined with standard echo-train imaging, parallel imaging, partial-Fourier imaging and inversion recovery techniques to offer flexible image contrast with a dramatic reduction of metal-induced artifacts in scan times under 11 minutes. PMID:20373445
Accelerated slice encoding for metal artifact correction.
Hargreaves, Brian A; Chen, Weitian; Lu, Wenmiao; Alley, Marcus T; Gold, Garry E; Brau, Anja C S; Pauly, John M; Pauly, Kim Butts
2010-04-01
To demonstrate accelerated imaging with both artifact reduction and different contrast mechanisms near metallic implants. Slice-encoding for metal artifact correction (SEMAC) is a modified spin echo sequence that uses view-angle tilting and slice-direction phase encoding to correct both in-plane and through-plane artifacts. Standard spin echo trains and short-TI inversion recovery (STIR) allow efficient PD-weighted imaging with optional fat suppression. A completely linear reconstruction allows incorporation of parallel imaging and partial Fourier imaging. The signal-to-noise ratio (SNR) effects of all reconstructions were quantified in one subject. Ten subjects with different metallic implants were scanned using SEMAC protocols, all with scan times below 11 minutes, as well as with standard spin echo methods. The SNR using standard acceleration techniques is unaffected by the linear SEMAC reconstruction. In all cases with implants, accelerated SEMAC significantly reduced artifacts compared with standard imaging techniques, with no additional artifacts from acceleration techniques. The use of different contrast mechanisms allowed differentiation of fluid from other structures in several subjects. SEMAC imaging can be combined with standard echo-train imaging, parallel imaging, partial-Fourier imaging, and inversion recovery techniques to offer flexible image contrast with a dramatic reduction of metal-induced artifacts in scan times under 11 minutes. (c) 2010 Wiley-Liss, Inc.
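The partial-Fourier acceleration mentioned above rests on the conjugate symmetry of the Fourier transform of a real-valued signal: X[-k] = conj(X[k]), so slightly more than half of k-space determines the rest. A one-dimensional idealized sketch; real MR images are not strictly real-valued, so practical reconstructions add phase correction (e.g. homodyne or POCS), which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=64)                 # real-valued stand-in for an image line
X = np.fft.fft(x)

N = len(x)
half = np.zeros_like(X)
half[: N // 2 + 1] = X[: N // 2 + 1]    # acquire just over half of k-space

# Synthesize the unacquired samples from conjugate symmetry of a real
# signal: X[N - k] = conj(X[k]).
k = np.arange(1, N // 2)
full = half.copy()
full[N - k] = np.conj(half[k])

x_rec = np.fft.ifft(full).real
# For a strictly real signal this reconstruction is exact.
```

Because the fill-in is a linear operation on the acquired data, it composes cleanly with parallel imaging in a single linear reconstruction, which is the property the SEMAC papers exploit.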
Solving the Cauchy-Riemann equations on parallel computers
NASA Technical Reports Server (NTRS)
Fatoohi, Raad A.; Grosch, Chester E.
1987-01-01
Discussed is the implementation of a single algorithm on three parallel-vector computers. The algorithm is a relaxation scheme for the solution of the Cauchy-Riemann equations, a set of coupled first-order partial differential equations. The computers were chosen so as to encompass a variety of architectures. They are: the MPP, an SIMD machine with 16K bit-serial processors; the FLEX/32, an MIMD machine with 20 processors; and the CRAY-2, an MIMD machine with four vector processors. The machine architectures are briefly described. The implementation of the algorithm is discussed in relation to these architectures, and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Conclusions are presented.
Tape casting and partial melting of Bi-2212 thick films
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buhl, D.; Lang, T.; Heeb, B.
1994-12-31
To produce Bi-2212 thick films with high critical current densities, tape casting and partial melting is a promising fabrication method. Bi-2212 powder and organic additives were mixed into a slurry and tape cast onto glass by the doctor-blade tape casting process. The films were cut from the green tape and partially molten on Ag foils during heat treatment. We obtained almost single-phase and well-textured films over the whole thickness of 20 μm. The orientation of the (a,b)-plane of the grains was parallel to the substrate with a misalignment of less than 6°. At 77 K/0 T a critical current density of 15,000 A/cm² was reached in films of dimension 1 cm x 2 cm x 20 μm (1 μV/cm criterion, resistively measured). At 4 K/0 T the highest value was 350,000 A/cm² (1 nV/cm criterion, magnetically measured).
Tape casting and partial melting of Bi-2212 thick films
NASA Technical Reports Server (NTRS)
Buhl, D.; Lang, TH.; Heeb, B.; Gauckler, L. J.
1995-01-01
To produce Bi-2212 thick films with high critical current densities, tape casting and partial melting is a promising fabrication method. Bi-2212 powder and organic additives were mixed into a slurry and tape cast onto glass by the doctor-blade tape casting process. The films were cut from the green tape and partially molten on Ag foils during heat treatment. We obtained almost single-phase and well-textured films over the whole thickness of 20 microns. The orientation of the (a,b)-plane of the grains was parallel to the substrate with a misalignment of less than 6 deg. At 77 K/0 T a critical current density of 15,000 A/sq cm was reached in films of the dimension 1 cm x 2 cm x 20 microns (1 μV/cm criterion, resistively measured). At 4 K/0 T the highest value was 350,000 A/sq cm (1 nV/cm criterion, magnetically measured).
Gloss, L M; Simler, B R; Matthews, C R
2001-10-05
The folding mechanism of the dimeric Escherichia coli Trp repressor (TR) is a kinetically complex process that involves three distinguishable stages of development. Following the formation of a partially folded, monomeric ensemble of species, within 5 ms, folding to the native dimer is controlled by three kinetic phases. The rate-limiting step in each phase is either a non-proline isomerization reaction or a dimerization reaction, depending on the final denaturant concentration. Two approaches have been employed to test the previously proposed folding mechanism of TR through three parallel channels: (1) unfolding double-jump experiments demonstrate that all three folding channels lead directly to native dimer; and (2) the differential stabilization of the transition state for the final step in folding and the native dimer, by the addition of salt, shows that all three channels involve isomerization of a dimeric species. A refined model for the folding of Trp repressor is presented, in which all three channels involve a rapid dimerization reaction between partially folded monomers followed by the isomerization of the dimeric intermediates to yield native dimer. The ensemble of partially folded monomers can be captured at equilibrium by low pH; one-dimensional proton NMR spectra at pH 2.5 demonstrate that monomers exist in two distinct, slowly interconverting conformations. These data provide a potential structural explanation for the three-channel folding mechanism of TR: random association of two different monomeric forms, which are distinguished by alternative packing modes of the core dimerization domain and the DNA-binding, helix-turn-helix, domain. One, perhaps both, of these packing modes contains non-native contacts. Copyright 2001 Academic Press.
Li, Xiang-Hong; Wang, Jin-Yan; Gao, Ge; Chang, Jing-Yu; Woodward, Donald J; Luo, Fei
2010-05-15
Deep brain stimulation (DBS) has been used in the clinic to treat Parkinson's disease (PD) and other neuropsychiatric disorders. Our previous work has shown that DBS in the subthalamic nucleus (STN) can improve major motor deficits and induce a variety of neural responses in rats with unilateral dopamine (DA) lesions. In the present study, we examined the effect of STN DBS on reaction time (RT) performance and parallel changes in neural activity in the cortico-basal ganglia regions of partially bilaterally DA-lesioned rats. We recorded neural activity with a multiple-channel single-unit electrode system in the primary motor cortex (MI), the STN, and the substantia nigra pars reticulata (SNr) during the RT test. RT performance was severely impaired following bilateral injection of 6-OHDA into the dorsolateral part of the striatum. In parallel with these behavioral impairments, the number of neurons responsive to different behavioral events was remarkably decreased after DA lesion. Bilateral STN DBS improved RT performance in 6-OHDA-lesioned rats and restored operational behavior-related neural responses in cortico-basal ganglia regions. These behavioral and electrophysiological effects of DBS lasted nearly an hour after DBS termination. These results demonstrate that a partial DA lesion-induced impairment of RT performance is associated with changes in neural activity in the cortico-basal ganglia circuit. Furthermore, STN DBS can reverse changes in behavior and neural activity caused by partial DA depletion. The observed long-lasting beneficial effect of STN DBS suggests the involvement of neural plasticity mechanisms in modulating cortico-basal ganglia circuits. (c) 2009 Wiley-Liss, Inc.
A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications
NASA Technical Reports Server (NTRS)
Povitsky, Alex; Morris, Philip J.
1999-01-01
In this study we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (partial differential equations) in a space-time domain. For this numerical integration most of the computer time is spent in the computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm, processors are idle due to the forward and backward recurrences of the algorithm. To utilize processors during this time, we propose to use them for either non-local data-independent computations, solving lines in the next spatial direction, or local data-dependent computations by the Runge-Kutta method. To achieve this goal, control of processor communication and computations by a static schedule is adopted. Thus, our parallel code is driven by a communication and computation schedule instead of the usual "creative programming" approach. The parallelization speed-up obtained with the novel algorithm is about twice that of the standard pipelined algorithm and close to that of the explicit DRP algorithm.
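The serial kernel the abstract refers to can be sketched as follows; this is a generic illustration of the Thomas algorithm, not the authors' pipelined parallel code. The forward and backward recurrences visible in the two loops are exactly the data dependencies that leave processors idle in a naive pipeline.

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c, and right-hand side d (a[0] and c[-1] unused).
    The forward and backward recurrences below are the sequential
    dependencies that make pipelined parallelization non-trivial."""
    n = len(d)
    cp = [0.0] * n          # modified super-diagonal
    dp = [0.0] * n          # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # forward elimination (each step needs the previous one)
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # backward substitution (second recurrence, in reverse)
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The proposed schedule fills the idle time of these recurrences with lines from the next spatial direction or with local Runge-Kutta work.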
Partial volume segmentation in 3D of lesions and tissues in magnetic resonance images
NASA Astrophysics Data System (ADS)
Johnston, Brian; Atkins, M. Stella; Booth, Kellogg S.
1994-05-01
An important first step in diagnosis and treatment planning using tomographic imaging is differentiating and quantifying diseased as well as healthy tissue. One of the difficulties encountered in solving this problem to date has been distinguishing the partial volume constituents of each voxel in the image volume. Most proposed solutions to this problem involve analysis of planar images, in sequence, in two dimensions only. We have extended a model-based method of image segmentation which applies the technique of iterated conditional modes in three dimensions. A minimum of user intervention is required to train the algorithm. Partial volume estimates for each voxel in the image are obtained, yielding fractional compositions of multiple tissue types for individual voxels. A multispectral approach is applied where spatially registered data sets are available. The algorithm is simple and has been parallelized using a dataflow programming environment to reduce the computational burden. The algorithm has been used to segment dual-echo MRI data sets of multiple sclerosis patients using lesions, gray matter, white matter, and cerebrospinal fluid as the partial volume constituents. The results of applying the algorithm to these datasets are presented and compared to the manual lesion segmentation of the same data.
Accuracy of different impression materials in parallel and nonparallel implants
Vojdani, Mahroo; Torabi, Kianoosh; Ansarifard, Elham
2015-01-01
Background: A precise impression is mandatory to obtain passive fit in implant-supported prostheses. The aim of this study was to compare the accuracy of three impression materials in both parallel and nonparallel implant positions. Materials and Methods: In this experimental study, two partially dentate maxillary acrylic models with four implant analogues in the canine and lateral incisor areas were used. One model simulated the parallel condition and the other the nonparallel one, in which implants were tilted 30° buccally and 20° in either mesial or distal directions. Thirty stone casts were made from each model using polyether (Impregum), addition silicone (Monopren) and vinyl siloxanether (Identium), with the open-tray technique. The distortion values in three dimensions (X-, Y- and Z-axes) were measured by a coordinate measuring machine. Two-way analysis of variance (ANOVA), one-way ANOVA and Tukey tests were used for data analysis (α = 0.05). Results: Under the parallel condition, all the materials showed comparable, accurate casts (P = 0.74). In the presence of angulated implants, while Monopren showed more accurate results compared to Impregum (P = 0.01), Identium yielded almost similar results to those produced by Impregum (P = 0.27) and Monopren (P = 0.26). Conclusion: Within the limitations of this study, in parallel conditions the type of impression material does not affect the accuracy of the implant impressions; however, in nonparallel conditions, polyvinyl siloxane is shown to be a better choice, followed by vinyl siloxanether and polyether, respectively. PMID:26288620
Ding, Fan; Yao, Jia; Zhao, Liqin; Mao, Zisu; Chen, Shuhua; Brinton, Roberta Diaz
2013-01-01
Previously, we demonstrated that reproductive senescence in female triple transgenic Alzheimer's (3×TgAD) mice was paralleled by a shift towards a ketogenic profile with a concomitant decline in mitochondrial activity in brain, suggesting a potential association between ovarian hormone loss and alteration in the bioenergetic profile of the brain. In the present study, we investigated the impact of ovariectomy and 17β-estradiol replacement on brain energy substrate availability and metabolism in a mouse model of familial Alzheimer's (3×TgAD). Results of these analyses indicated that ovarian hormone deprivation by ovariectomy (OVX) induced a significant decrease in brain glucose uptake, indicated by a decline in 2-[(18)F]fluoro-2-deoxy-D-glucose uptake measured by microPET imaging. Mechanistically, OVX induced a significant decline in blood-brain-barrier-specific glucose transporter expression, hexokinase expression and activity. The decline in glucose availability was accompanied by a significant rise in glial LDH5 expression and LDH5/LDH1 ratio indicative of lactate generation and utilization. In parallel, a significant rise in ketone body concentration in serum occurred, which was coupled to an increase in neuronal MCT2 expression and 3-oxoacid-CoA transferase (SCOT) required for conversion of ketone bodies to acetyl-CoA. In addition, the OVX-induced decline in glucose metabolism was paralleled by a significant increase in Aβ oligomer levels. 17β-estradiol preserved brain glucose-driven metabolic capacity and partially prevented the OVX-induced shift in bioenergetic substrate as evidenced by glucose uptake, glucose transporter expression and gene expression associated with aerobic glycolysis. 17β-estradiol also partially prevented the OVX-induced increase in Aβ oligomer levels.
Collectively, these data indicate that ovarian hormone loss in a preclinical model of Alzheimer's was paralleled by a shift towards the metabolic pathway required for metabolism of alternative fuels in brain with a concomitant decline in brain glucose transport and metabolism. These findings also indicate that estrogen plays a critical role in sustaining brain bioenergetic capacity through preservation of glucose metabolism.
1990-09-01
Naval Postgraduate School. Lieutenant, United States Navy; B.A., Ithaca College, 1975. Submitted in partial fulfillment of the requirements for the degree of Master of Science.
Free-electron laser simulations on the MPP
NASA Technical Reports Server (NTRS)
Vonlaven, Scott A.; Liebrock, Lorie M.
1987-01-01
Free electron lasers (FELs) are of interest because they provide high power, high efficiency, and broad tunability. FEL simulations can make efficient use of computers of the Massively Parallel Processor (MPP) class because most of the processing consists of applying a simple equation to a set of identical particles. A test version of the KMS Fusion FEL simulation, which resides mainly in the MPP's host computer and only partially in the MPP, has run successfully.
A Higher-Order Trapezoidal Vector Vortex Panel for Subsonic Flow.
1980-12-01
Presented to the Faculty of the School of Engineering of the Air Force Institute of Technology, Air University, in partial fulfillment of the requirements for the degree of Master of Science, by Ronald E. Luther, B.S., Capt, USAF, Graduate Aeronautical Engineering, December 1980. Approved for public release. ... The method also permits analysis of cranked leading and/or trailing edges. The root edge, tip edge, and all chordwise boundaries are parallel to the x-axis
NonLinear Optical Spectroscopy of Polymers
1989-01-01
temperature is reduced by a negligible amount, although at higher temperatures the relaxation occurs more rapidly. ... The sample in 23a was poled with a needle electrode, while the sample in 23b was poled by parallel wire electrodes. Mortazavi et al. (104) conducted ... when an inhomogeneous electric field causes a partial breakdown in a gas between the electrodes. Two electrode configurations were tested: a needle
Trench-parallel flow beneath the nazca plate from seismic anisotropy.
Russo, R M; Silver, P G
1994-02-25
Shear-wave splitting of S and SKS phases reveals the anisotropy and strain field of the mantle beneath the subducting Nazca plate, Cocos plate, and the Caribbean region. These observations can be used to test models of mantle flow. Two-dimensional entrained mantle flow beneath the subducting Nazca slab is not consistent with the data. Rather, there is evidence for horizontal trench-parallel flow in the mantle beneath the Nazca plate along much of the Andean subduction zone. Trench-parallel flow is attributable to retrograde motion of the slab, the decoupling of the slab and underlying mantle, and a partial barrier to flow at depth, resulting in lateral mantle flow beneath the slab. Such flow facilitates the transfer of material from the shrinking mantle reservoir beneath the Pacific basin to the growing mantle reservoir beneath the Atlantic basin. Trench-parallel flow may explain the eastward motions of the Caribbean and Scotia sea plates, the anomalously shallow bathymetry of the eastern Nazca plate, and the long-wavelength geoid high over western South America, and it may contribute to the high elevation and intense deformation of the central Andes.
Energy Dependence of Electron-Scale Currents and Dissipation During Magnetopause Reconnection
NASA Astrophysics Data System (ADS)
Shuster, J. R.; Gershman, D. J.; Giles, B. L.; Dorelli, J.; Avanov, L. A.; Chen, L. J.; Wang, S.; Bessho, N.; Torbert, R. B.; Farrugia, C. J.; Argall, M. R.; Strangeway, R. J.; Schwartz, S. J.
2017-12-01
We investigate the electron-scale physics of reconnecting current structures observed at the magnetopause during Phase 1B of the Magnetospheric Multiscale (MMS) mission when the spacecraft separation was less than 10 km. Using single-spacecraft measurements of the current density vector Jplasma = en(vi - ve) enabled by the accuracy of the Fast Plasma Investigation (FPI) electron moments as demonstrated by Phan et al. [2016], we consider perpendicular (J⊥1 and J⊥2) and parallel (J//) currents and their corresponding kinetic electron signatures. These currents can correspond to a variety of structures in the electron velocity distribution functions measured by FPI, including perpendicular and parallel crescents like those first reported by Burch et al. [2016], parallel electron beams, counter-streaming electron populations, or sometimes simply a bulk velocity shift. By integrating the distribution function over only its angular dimensions, we compute energy-dependent 'partial' moments and employ them to characterize the energy dependence of velocities, currents, and dissipation associated with magnetic reconnection diffusion regions caught by MMS. Our technique aids in visualizing and elucidating the plasma energization mechanisms that operate during collisionless reconnection.
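The energy-dependent "partial" moments described above can be illustrated schematically: integrate the distribution function over its angular dimensions only, leaving the energy index free, so that summing the result over energy recovers the full moment. The toy distribution and solid-angle weights below are assumptions for illustration; they are not FPI's calibrated bin geometry.

```python
import math

def partial_density(f, weights):
    """Energy-resolved partial number density: sum the distribution
    f[k][j] (energy index k, angular index j) over the angular index
    only, leaving an energy spectrum."""
    return [sum(fk[j] * wk[j] for j in range(len(fk)))
            for fk, wk in zip(f, weights)]

f = [[1.0, 2.0, 2.0, 1.0],     # toy distribution: 3 energy bins x 4 pitch angles
     [0.5, 1.0, 1.0, 0.5],
     [0.1, 0.2, 0.2, 0.1]]
# schematic solid-angle weights sin(theta)*dtheta*dphi at 4 pitch angles
thetas = [math.pi * (j + 0.5) / 4 for j in range(4)]
w_row = [math.sin(t) * (math.pi / 4) * (2 * math.pi) for t in thetas]
weights = [w_row] * 3          # same angular weights at every energy

n_of_E = partial_density(f, weights)   # energy-resolved partial density
n_total = sum(n_of_E)                  # summing over energy gives the full moment
```

The same angular reduction applied to velocity-weighted integrands yields energy-resolved bulk velocities and currents, which is how the diffusion-region currents are decomposed by energy.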
Super-resolved Parallel MRI by Spatiotemporal Encoding
Schmidt, Rita; Baishya, Bikash; Ben-Eliezer, Noam; Seginer, Amir; Frydman, Lucio
2016-01-01
Recent studies described an alternative "ultrafast" scanning method based on spatiotemporal encoding (SPEN) principles. SPEN demonstrates numerous potential advantages over EPI-based alternatives, at no additional expense in experimental complexity. An important aspect that SPEN still needs to achieve to provide a competitive acquisition alternative entails exploiting parallel imaging algorithms, without compromising its proven capabilities. The present work introduces a combination of multi-band frequency-swept pulses simultaneously encoding multiple partial fields-of-view, together with a new algorithm merging super-resolved SPEN image reconstruction with SENSE multiple-receiver methods. The ensuing approach enables one to reduce both the excitation and acquisition times of ultrafast SPEN acquisitions by the customary acceleration factor R, without compromising the ensuing spatial resolution, SAR deposition, or the capability to operate in multi-slice mode. The performance of these new single-shot imaging sequences and their ancillary algorithms was explored on phantoms and human volunteers at 3T. The gains of the parallelized approach were particularly evident when dealing with heterogeneous systems subject to major T2/T2* effects, as is the case upon single-scan imaging near tissue/air interfaces. PMID:24120293
Summer Proceedings 2016: The Center for Computing Research at Sandia National Laboratories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carleton, James Brian; Parks, Michael L.
Solving sparse linear systems from the discretization of elliptic partial differential equations (PDEs) is an important building block in many engineering applications. Sparse direct solvers can solve general linear systems, but are usually slower and use much more memory than effective iterative solvers. To overcome these two disadvantages, a hierarchical solver (LoRaSp) based on H2-matrices was introduced in [22]. Here, we have developed a parallel version of the algorithm in LoRaSp to solve large sparse matrices on distributed memory machines. On a single processor, the factorization time of our parallel solver scales almost linearly with the problem size for three-dimensional problems, as opposed to the quadratic scalability of many existing sparse direct solvers. Moreover, our solver leads to almost constant numbers of iterations when used as a preconditioner for Poisson problems. On more than one processor, our algorithm has significant speedups compared to sequential runs. With this parallel algorithm, we are able to solve large problems much faster than many existing packages, as demonstrated by the numerical experiments.
NASA Astrophysics Data System (ADS)
Petrović, Suzana; Peruško, D.; Kovač, J.; Panjan, P.; Mitrić, M.; Pjević, D.; Kovačević, A.; Jelenković, B.
2017-09-01
Formation of periodic nanostructures on Ti/5x(Al/Ti)/Si multilayers induced by picosecond laser pulses is studied in order to better understand the formation of laser-induced periodic surface structures (LIPSS). At a fluence slightly below the ablation threshold, the formation of low-spatial-frequency LIPSS (LSFL) oriented perpendicular to the direction of the laser polarization is observed on the irradiated area. Prolonged irradiation while scanning results in the formation of high-spatial-frequency LIPSS (HSFL) on top of the LSFL, creating co-existing parallel periodic structures. The HSFL were oriented parallel to the incident laser polarization. Intermixing between the Al and Ti layers, with the formation of Al-Ti intermetallic compounds, was achieved during the irradiation. The intermetallic region was formed mostly within the heat-affected zone of the sample. Surface segregation of aluminium with partial ablation of the top titanium layer was followed by the formation of an ultra-thin Al2O3 film on the surface of the multilayered structure.
Kranc: a Mathematica package to generate numerical codes for tensorial evolution equations
NASA Astrophysics Data System (ADS)
Husa, Sascha; Hinder, Ian; Lechner, Christiane
2006-06-01
We present a suite of Mathematica-based computer-algebra packages, termed "Kranc", which comprise a toolbox to convert certain (tensorial) systems of partial differential evolution equations to parallelized C or Fortran code for solving initial boundary value problems. Kranc can be used as a "rapid prototyping" system for physicists or mathematicians handling very complicated systems of partial differential equations, but through integration into the Cactus computational toolkit we can also produce efficient parallelized production codes. Our work is motivated by the field of numerical relativity, where Kranc is used as a research tool by the authors. In this paper we describe the design and implementation of both the Mathematica packages and the resulting code, we discuss some example applications, and provide results on the performance of an example numerical code for the Einstein equations. Program summary: Title of program: Kranc. Catalogue identifier: ADXS_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXS_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Distribution format: tar.gz. Computer for which the program is designed and others on which it has been tested: general computers which run Mathematica (for code generation) and Cactus (for numerical simulations); tested under Linux. Programming language used: Mathematica, C, Fortran 90. Memory required to execute with typical data: this depends on the number of variables and grid size; the included ADM example requires 4308 KB. Has the code been vectorized or parallelized: the code is parallelized based on the Cactus framework. Number of bytes in distributed program, including test data, etc.: 1 578 142. Number of lines in distributed program, including test data, etc.: 11 711. Nature of physical problem: solution of partial differential equations in three space dimensions, which are formulated as an initial value problem.
In particular, the program is geared towards handling very complex tensorial equations as they appear, e.g., in numerical relativity. The worked-out examples comprise the Klein-Gordon equation, the Maxwell equations, and the ADM formulation of the Einstein equations. Method of solution: finite differencing and method-of-lines time integration; the numerical code is generated through a high-level Mathematica interface. Restrictions on the complexity of the program: typical numerical relativity applications will contain up to several dozen evolution variables and thousands of source terms; Cactus applications have shown scaling up to several thousand processors and grid sizes exceeding 500³. Typical running time: this depends on the number of variables and the grid size; the included ADM example takes approximately 100 seconds on a 1600 MHz Intel Pentium M processor. Unusual features of the program: based on Mathematica and Cactus.
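As a hand-written illustration (not Kranc-generated code) of the finite-differencing, method-of-lines scheme described above, the sketch below evolves the 1-D Klein-Gordon equation u_tt = u_xx - m²u with centered differences and classical RK4; with m² = 0 it reduces to the wave equation, for which the standing wave sin(x)·cos(t) returns to its initial data after one period.

```python
import math

# 1-D Klein-Gordon reduced to first order (u_t = p, p_t = u_xx - m^2 u),
# centered finite differences on a periodic grid, RK4 time integration.
N = 64
dx = 2 * math.pi / N          # domain [0, 2*pi)
dt = 2 * math.pi / 320        # 320 steps cover exactly one period
m2 = 0.0                      # m^2 = 0: reduces to the wave equation

def rhs(u, p):
    du = p[:]
    dp = [(u[(i + 1) % N] - 2 * u[i] + u[(i - 1) % N]) / dx ** 2 - m2 * u[i]
          for i in range(N)]
    return du, dp

def rk4_step(u, p):
    k1u, k1p = rhs(u, p)
    k2u, k2p = rhs([x + dt / 2 * k for x, k in zip(u, k1u)],
                   [x + dt / 2 * k for x, k in zip(p, k1p)])
    k3u, k3p = rhs([x + dt / 2 * k for x, k in zip(u, k2u)],
                   [x + dt / 2 * k for x, k in zip(p, k2p)])
    k4u, k4p = rhs([x + dt * k for x, k in zip(u, k3u)],
                   [x + dt * k for x, k in zip(p, k3p)])
    u = [x + dt / 6 * (a + 2 * b + 2 * c + d)
         for x, a, b, c, d in zip(u, k1u, k2u, k3u, k4u)]
    p = [x + dt / 6 * (a + 2 * b + 2 * c + d)
         for x, a, b, c, d in zip(p, k1p, k2p, k3p, k4p)]
    return u, p

u = [math.sin(i * dx) for i in range(N)]   # standing wave sin(x)cos(t)
u0, p = u[:], [0.0] * N
for _ in range(320):                       # evolve one full period
    u, p = rk4_step(u, p)
err = max(abs(a - b) for a, b in zip(u, u0))
```

Kranc's role is to generate exactly this kind of right-hand-side and update code, in C or Fortran, from the tensorial equations written in Mathematica.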
TURBULENCE-GENERATED PROTON-SCALE STRUCTURES IN THE TERRESTRIAL MAGNETOSHEATH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vörös, Zoltán; Narita, Yasuhito; Yordanova, Emiliya
2016-03-01
Recent results of numerical magnetohydrodynamic simulations suggest that in collisionless space plasmas, turbulence can spontaneously generate thin current sheets. These coherent structures can partially explain the intermittency and the non-homogeneous distribution of localized plasma heating in turbulence. In this Letter, Cluster multi-point observations are used to investigate the distribution of magnetic field discontinuities and the associated small-scale current sheets in the terrestrial magnetosheath downstream of a quasi-parallel bow shock. It is shown experimentally, for the first time, that the strongest turbulence-generated current sheets occupy the long tails of probability distribution functions associated with extremal values of magnetic field partial derivatives. During the analyzed one-hour time interval, about a hundred strong discontinuities, possibly proton-scale current sheets, were observed.
PyPWA: A partial-wave/amplitude analysis software framework
NASA Astrophysics Data System (ADS)
Salgado, Carlos
2016-05-01
The PyPWA project aims to develop a software framework for Partial Wave and Amplitude Analysis of data, providing the user with software tools to identify resonances from multi-particle final states in photoproduction. Most of the code is written in Python. The software is divided into two main branches: one general shell where amplitude parameters (or any parametric model) are estimated from the data. This branch also includes software to produce simulated data sets using the fitted amplitudes. A second branch contains a specific realization of the isobar model (with room to include Deck-type and other isobar-model extensions) to perform PWA with an interface into the computer resources at Jefferson Lab. We are currently implementing parallelism and vectorization using the Intel Xeon Phi family of coprocessors.
Shlygin, V V; Tiuliaev, A P; Ioĭleva, E E; Maksimov, G V
2004-01-01
An approach to the choice of the parameters of physiotherapeutic and biophysical influence on the optic nerve was proposed. The approach is based on parallel photo- and magnetostimulation of excitable fibers, in which the morphological and electrophysiological properties of the fibers and some parameters of the pathological processes associated with partial atrophy and ischemia are taken into account. A method was developed for correlating photostimulation by light flashes (intensity 65 mW at an emission wavelength of 660 nm) of a portion of the retina with the choice of the parameters of magnetic influence on the optic nerve (amplitude 73 mT, wave-front duration of 40 ms, and pulse-repetition frequency of about 1 Hz).
A Technique to Facilitate Tooth Modification for Removable Partial Denture Prosthesis Guide Planes.
Haeberle, C Brent; Abreu, Amara; Metzler, Kurt
2016-07-01
The technique in this article was developed to provide a means to create prepared guide planes of proper dimension to ensure a more stable and retentive removable partial denture prosthesis (RPDP) framework when providing this service for a patient. Using commonly found clinical materials, a paralleling device can be fabricated from the modified diagnostic cast of the patient's dental arch requiring an RPDP. Polymethyl methacrylate or composite added to an altered thermoplastic form can be positioned intraorally and used as a guide to predictably adjust tooth structure for guide planes. Since it can potentially minimize the number of impressions and diagnostic casts made during the procedure, this can help achieve the desired result more efficiently and quickly for the patient. © 2015 by the American College of Prosthodontists.
Buchheit, R G; Schreiner, H R; Doebbler, G F
1966-02-01
Buchheit, R. G. (Union Carbide Corp., Tonawanda, N.Y.), H. R. Schreiner, and G. F. Doebbler. Growth responses of Neurospora crassa to increased partial pressures of the noble gases and nitrogen. J. Bacteriol. 91:622-627. 1966.-Growth rate of the fungus Neurospora crassa depends in part on the nature of metabolically "inert gas" present in its environment. At high partial pressures, the noble gas elements (helium, neon, argon, krypton, and xenon) inhibit growth in the order: Xe > Kr> Ar > Ne > He. Nitrogen (N(2)) closely resembles He in inhibitory effectiveness. Partial pressures required for 50% inhibition of growth were: Xe (0.8 atm), Kr (1.6 atm), Ar (3.8 atm), Ne (35 atm), and He ( approximately 300 atm). With respect to inhibition of growth, the noble gases and N(2) differ qualitatively and quantitatively from the order of effectiveness found with other biological effects, i.e., narcosis, inhibition of insect development, depression of O(2)-dependent radiation sensitivity, and effects on tissue-slice glycolysis and respiration. Partial pressures giving 50% inhibition of N. crassa growth parallel various physical properties (i.e., solubilities, solubility ratios, etc.) of the noble gases. Linear correlation of 50% inhibition pressures to the polarizability and of the logarithm of pressure to the first and second ionization potentials suggests the involvement of weak intermolecular interactions or charge-transfer in the biological activity of the noble gases.
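The reported correlation can be reproduced in outline from the 50% inhibition pressures quoted above. The static dipole polarizabilities are assumed here from standard tables, and the helium pressure is only the abstract's approximate ~300 atm figure, so the resulting Pearson coefficient is illustrative.

```python
import math

# 50% growth-inhibition pressures (atm) from the abstract, paired with
# static dipole polarizabilities in cubic angstroms (alpha values are
# assumptions taken from standard tables, not from the paper).
p50   = {"He": 300.0, "Ne": 35.0, "Ar": 3.8, "Kr": 1.6, "Xe": 0.8}
alpha = {"He": 0.205, "Ne": 0.396, "Ar": 1.641, "Kr": 2.484, "Xe": 4.044}

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

gases = ["He", "Ne", "Ar", "Kr", "Xe"]
# more polarizable (heavier) gases inhibit growth at lower pressures,
# so log10(p50) anti-correlates with polarizability
r = pearson_r([alpha[g] for g in gases],
              [math.log10(p50[g]) for g in gases])
```

The strongly negative coefficient mirrors the Xe > Kr > Ar > Ne > He potency ordering in the abstract, consistent with weak intermolecular interactions driving the biological activity.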
MOOSE: A PARALLEL COMPUTATIONAL FRAMEWORK FOR COUPLED SYSTEMS OF NONLINEAR EQUATIONS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
G. Hansen; C. Newman; D. Gaston
Systems of coupled, nonlinear partial differential equations often arise in simulation of nuclear processes. MOOSE: Multiphysics Object Oriented Simulation Environment, a parallel computational framework targeted at solving these systems, is presented. As opposed to traditional data-flow oriented computational frameworks, MOOSE is instead founded on mathematics based on Jacobian-free Newton-Krylov (JFNK). Utilizing the mathematical structure present in JFNK, physics are modularized into "Kernels", allowing for rapid production of new simulation tools. In addition, systems are solved fully coupled and fully implicit, employing physics-based preconditioning, which allows for a large amount of flexibility even with large variance in time scales. Background on the mathematics, an inspection of the structure of MOOSE, and several representative solutions from applications built on the framework are presented.
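The JFNK idea the abstract builds on can be shown in a few lines: a Krylov method needs only Jacobian-vector products, and a finite difference of the residual supplies them without ever assembling the Jacobian. The residual below is a hypothetical toy system, not a MOOSE kernel.

```python
# Minimal sketch of the matrix-free product at the heart of JFNK.

def F(u):
    # toy nonlinear residual (hypothetical, for illustration only)
    x, y = u
    return [x * x + y - 2.0, x + y * y - 2.0]

def jac_vec(F, u, v, eps=1e-7):
    """Approximate J(u) @ v by (F(u + eps*v) - F(u)) / eps,
    at the cost of one extra residual evaluation."""
    fu = F(u)
    fv = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(fv, fu)]

u = [1.5, 0.5]
v = [1.0, -2.0]
jv = jac_vec(F, u, v)
# analytic Jacobian at u is [[2x, 1], [1, 2y]] = [[3, 1], [1, 1]],
# so the exact product is [1.0, -1.0]
```

Inside a Newton iteration, a Krylov solver such as GMRES calls this product repeatedly to solve each linear step, which is what lets new physics "Kernels" plug in without Jacobian code.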
Final Report, DE-FG01-06ER25718 Domain Decomposition and Parallel Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widlund, Olof B.
2015-06-09
The goal of this project is to develop and improve domain decomposition algorithms for a variety of partial differential equations such as those of linear elasticity and electromagnetics. These iterative methods are designed for massively parallel computing systems and allow the fast solution of the very large systems of algebraic equations that arise in large scale and complicated simulations. A special emphasis is placed on problems arising from Maxwell's equations. The approximate solvers, the preconditioners, are combined with the conjugate gradient method and must always include a solver of a coarse model in order to have a performance which is independent of the number of processors used in the computer simulation. A recent development allows for an adaptive construction of this coarse component of the preconditioner.
NASA Technical Reports Server (NTRS)
Korzennik, Sylvain
1997-01-01
Under the direction of Dr. Rhodes, and the technical supervision of Dr. Korzennik, the data assimilation of high spatial resolution solar dopplergrams has been carried out throughout the program on the Intel Delta Touchstone supercomputer. With the help of a research assistant, partially supported by this grant, and under the supervision of Dr. Korzennik, code development was carried out at SAO, using various available resources. To ensure cross-platform portability, PVM was selected as the message passing library. A parallel implementation of power spectra computation for helioseismology data reduction, using PVM was successfully completed. It was successfully ported to SMP architectures (i.e. SUN), and to some MPP architectures (i.e. the CM5). Due to limitation of the implementation of PVM on the Cray T3D, the port to that architecture was not completed at the time.
Planning in subsumption architectures
NASA Technical Reports Server (NTRS)
Chalfant, Eugene C.
1994-01-01
A subsumption planner using a parallel distributed computational paradigm based on the subsumption architecture for control of real-world capable robots is described. Virtual sensor state space is used as a planning tool to visualize the robot's anticipated effect on its environment. Decision sequences are generated based on the environmental situation expected at the time the robot must commit to a decision. Between decision points, the robot performs in a preprogrammed manner. A rudimentary, domain-specific partial world model contains enough information to extrapolate the end results of the rote behavior between decision points. A collective network of predictors operates in parallel with the reactive network, forming a recurrent network which generates plans as a hierarchy. Details of a plan segment are generated only when its execution is imminent. The use of the subsumption planner is demonstrated by a simple maze navigation problem.
A review on quantum search algorithms
NASA Astrophysics Data System (ADS)
Giri, Pulak Ranjan; Korepin, Vladimir E.
2017-12-01
The use of superposition of states in quantum computation, known as quantum parallelism, has a significant advantage in terms of speed over classical computation. This is evident from early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up the database search algorithm, which is important in computer science because it appears as a subroutine in many important algorithms. Grover's quantum database search achieves the task of finding the target element in an unsorted database in a time quadratically faster than a classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
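The Grover iteration reviewed above (an oracle phase flip followed by inversion about the mean) can be simulated classically for small databases. The sketch below is our own illustration in plain Python, not code from the review; all names are invented:

```python
import math

def grover_search(n_items, target, iterations=None):
    """Classical state-vector simulation of Grover amplitude amplification."""
    if iterations is None:
        # Optimal iteration count is approximately (pi/4) * sqrt(N).
        iterations = int(math.pi / 4 * math.sqrt(n_items))
    amp = [1.0 / math.sqrt(n_items)] * n_items  # uniform superposition
    for _ in range(iterations):
        amp[target] = -amp[target]              # oracle: phase flip on target
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]       # diffusion: invert about mean
    return [a * a for a in amp]                 # measurement probabilities

probs = grover_search(64, target=5)
print(probs[5])  # close to 1 after ~(pi/4)*sqrt(64) = 6 iterations
```

After six iterations on a 64-item database the target probability exceeds 0.99, illustrating the quadratic speedup over the ~32 queries a classical search would need on average.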
MOOSE: A parallel computational framework for coupled systems of nonlinear equations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derek Gaston; Chris Newman; Glen Hansen
Systems of coupled, nonlinear partial differential equations (PDEs) often arise in simulation of nuclear processes. MOOSE: Multiphysics Object Oriented Simulation Environment, a parallel computational framework targeted at the solution of such systems, is presented. As opposed to traditional data-flow oriented computational frameworks, MOOSE is instead founded on the mathematical principle of Jacobian-free Newton-Krylov (JFNK) solution methods. Utilizing the mathematical structure present in JFNK, physics expressions are modularized into "Kernels," allowing for rapid production of new simulation tools. In addition, systems are solved implicitly and fully coupled, employing physics-based preconditioning, which provides great flexibility even with large variance in time scales. A summary of the mathematics, an overview of the structure of MOOSE, and several representative solutions from applications built on the framework are presented.
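The core idea JFNK frameworks build on is that a Krylov solver only ever needs Jacobian-vector products, which can be approximated by finite differences of the residual without ever forming the Jacobian. A minimal sketch (the residual F below is a made-up two-equation example, not MOOSE code):

```python
def F(u):
    """Hypothetical nonlinear residual: F1 = u0^2 + u1 - 3, F2 = u0 + u1^2 - 5."""
    return [u[0] ** 2 + u[1] - 3.0, u[0] + u[1] ** 2 - 5.0]

def jfnk_matvec(F, u, v, eps=1e-7):
    """Approximate J(u) @ v without forming J, via a forward difference:
    J(u) v ~= (F(u + eps*v) - F(u)) / eps."""
    Fu = F(u)
    Fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(fp - f0) / eps for fp, f0 in zip(Fp, Fu)]

# Analytic Jacobian at u = (1, 2) is [[2, 1], [1, 4]]; probing with the
# first unit vector should recover its first column.
print(jfnk_matvec(F, [1.0, 2.0], [1.0, 0.0]))  # approximately [2.0, 1.0]
```

Because only residual evaluations are needed, new physics "Kernels" contribute terms to F without any Jacobian bookkeeping, which is what makes the modularization described in the abstract possible.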
The Threshold Shortest Path Interdiction Problem for Critical Infrastructure Resilience Analysis
2017-09-01
being pushed over the minimum designated threshold. 1.4 Motivation A simple setting to motivate this research is the "30 minutes or it's free" guarantee...parallel network structure in Fig. 4.4 is simple in design, yet shows a relatively high resilience when compared to the other networks in general. The high...United States Naval Academy, 2002 Submitted in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE IN OPERATIONS RESEARCH
Feasibility study: Liquid hydrogen plant, 30 tons per day
NASA Technical Reports Server (NTRS)
1975-01-01
The design considerations of the plant are discussed in detail along with management planning, objective schedules, and cost estimates. The processing scheme is aimed at ultimate use of coal as the basic raw material. For back-up, and to provide assurance of a dependable and steady supply of hydrogen, a parallel and redundant facility for gasifying heavy residual oil will be installed. Both the coal and residual oil gasifiers will use the partial oxidation process.
Exploiting Data Sparsity in Parallel Matrix Powers Computations
2013-05-03
Matrices of the form A = D + USV^H, where D is sparse and USV^H has low rank but may be dense. Matrices of this form arise in many practical applications...methods, numerical partial differential equation solvers, and preconditioned iterative methods. If A has this form, our algorithm enables a communication
A software tool for dataflow graph scheduling
NASA Technical Reports Server (NTRS)
Jones, Robert L., III
1994-01-01
A graph-theoretic design process and software tool is presented for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described using a dataflow graph and are intended to be executed repetitively on multiple processors. The dataflow paradigm is very useful in exposing the parallelism inherent in algorithms. It provides a graphical and mathematical model which describes a partial ordering of algorithm tasks based on data precedence.
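The partial ordering of tasks that a dataflow graph imposes can be made concrete with a level-scheduling pass: each task is placed one level after its deepest data dependency, and all tasks on the same level may execute in parallel. This is our own minimal illustration, not the tool described above; the diamond graph is invented:

```python
from collections import defaultdict, deque

def schedule_levels(edges, nodes):
    """Assign each task in an acyclic dataflow graph to a parallel level:
    level(n) = 1 + max(level of n's data predecessors), 0 for sources."""
    indeg = {n: 0 for n in nodes}
    succ = defaultdict(list)
    for a, b in edges:          # edge (a, b): a produces data consumed by b
        succ[a].append(b)
        indeg[b] += 1
    level = {n: 0 for n in nodes}
    ready = deque(n for n in nodes if indeg[n] == 0)
    while ready:                # standard Kahn topological traversal
        n = ready.popleft()
        for m in succ[n]:
            level[m] = max(level[m], level[n] + 1)
            indeg[m] -= 1
            if indeg[m] == 0:
                ready.append(m)
    return level

# Diamond dataflow graph: A feeds B and C, which both feed D.
lv = schedule_levels([("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")], "ABCD")
print(lv)  # {'A': 0, 'B': 1, 'C': 1, 'D': 2} -- B and C can run in parallel
```

For repetitive execution on multiple processors, the level map directly exposes which tasks can be co-scheduled in each step.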
Simulation of Locking Space Truss Deployments for a Large Deployable Sparse Aperture Reflector
2015-03-01
Dr. Alan Jennings, for his unending patience with my struggles through this entire process. Without his expertise, guidance, and trust I would have...engineer since they are not automatically meshed. Fortunately, the mesh process is quite swift. Figure 13 shows both a linear hexahedral element as well...less than that of the serial process. Therefore, COMSOL's partially parallelized algorithms will not be sped up as a function of cores added and is
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haslam, J J; Wall, M A; Johnson, D L
We have measured and modeled the change in electrical resistivity due to partial transformation to the martensitic α′-phase in a δ-phase Pu-Ga matrix. The primary objective is to relate the change in resistance, measured with a 4-probe technique during the transformation, to the volume fraction of the α′ phase created in the microstructure. Analysis by finite element methods suggests that considerable differences in the resistivity may be anticipated depending on the orientational and morphological configurations of the α′ particles. Finite element analysis of the computed resistance of an assembly of lenticular shaped particles indicates that series resistor or parallel resistor approximations are inaccurate and can lead to an underestimation of the predicted amount of α′ in the sample by 15% or more. Comparison of the resistivity of a simulated network of partially transformed grains or portions of grains suggests that a correction to the measured resistivity allows quantification of the amount of α′ phase in the microstructure with minimal consideration of how the α′ morphology may evolve. It is found that the average of the series and parallel resistor approximations provides the most accurate relationship between the measured resistivity and the amount of α′ phase. The methods described here are applicable to any evolving two-phase microstructure in which the resistance difference between the two phases is measurable.
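The series and parallel resistor approximations referred to above are the classical bounds for the effective resistivity of a two-phase mixture, and the abstract's finding is that their average tracks the true value best. A sketch with illustrative (not measured) resistivity values:

```python
def series_resistivity(rho1, rho2, f2):
    """Series bound (layers normal to the current): resistivities add,
    weighted by volume fraction f2 of phase 2."""
    return (1 - f2) * rho1 + f2 * rho2

def parallel_resistivity(rho1, rho2, f2):
    """Parallel bound (layers along the current): conductivities add,
    weighted by volume fraction."""
    return 1.0 / ((1 - f2) / rho1 + f2 / rho2)

def mixed_estimate(rho1, rho2, f2):
    """Average of the two bounds, the relationship the abstract reports
    as most accurate for relating resistivity to transformed fraction."""
    return 0.5 * (series_resistivity(rho1, rho2, f2)
                  + parallel_resistivity(rho1, rho2, f2))

# Hypothetical phase resistivities and a 20% transformed volume fraction:
print(mixed_estimate(100.0, 150.0, 0.20))  # between the two bounds
```

The true effective resistivity of any isotropic two-phase microstructure must lie between the parallel (lower) and series (upper) bounds, so the average is a reasonable single-number estimate when the morphology is unknown.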
Alarcón, Francis; Báez, María E; Bravo, Manuel; Richter, Pablo; Escandar, Graciela M; Olivieri, Alejandro C; Fuentes, Edwar
2013-01-15
The possibility of simultaneously determining seven heavy polycyclic aromatic hydrocarbons (PAHs) of concern from the US-EPA priority pollutant list, in extra virgin olive and sunflower oils, was examined using unfolded partial least-squares with residual bilinearization (U-PLS/RBL) and parallel factor analysis (PARAFAC). Both of these methods were applied to fluorescence excitation-emission matrices. The compounds studied were benzo[a]anthracene, benzo[b]fluoranthene, benzo[k]fluoranthene, benzo[a]pyrene, dibenz[a,h]anthracene, benzo[g,h,i]perylene and indeno[1,2,3-c,d]pyrene. The analysis was performed using fluorescence spectroscopy after a microwave-assisted liquid-liquid extraction and solid-phase extraction on silica. The U-PLS/RBL algorithm exhibited the best performance for resolving the heavy PAH mixture in the presence of both the highly complex oil matrix and other unpredicted PAHs of the US-EPA list. The limit of detection for the proposed method ranged from 0.07 to 2 μg kg(-1). The predicted U-PLS/RBL concentrations compared satisfactorily with those obtained using high-performance liquid chromatography with fluorescence detection. Simple analysis and a considerable reduction in time and solvent consumption in comparison with chromatography are the principal advantages of the proposed method. Copyright © 2012 Elsevier B.V. All rights reserved.
Lamotrigine add-on for drug-resistant partial epilepsy.
Ramaratnam, S; Marson, A G; Baker, G A
2001-01-01
Epilepsy is a common neurological disorder, affecting almost 0.5 to 1% of the population. Nearly 30% of patients with epilepsy are refractory to currently available drugs. Lamotrigine is one of the newer antiepileptic drugs and is the topic of this review. To examine the effects of lamotrigine on seizures, side effects, cognition and quality of life, when used as an add-on treatment for patients with drug-resistant partial epilepsy. We searched the Cochrane Epilepsy Group trials register, the Cochrane Controlled Trials Register (Cochrane Library Issue 2, 2001), MEDLINE (January 1966 to April 2001) and reference lists of articles. We also contacted the manufacturers of lamotrigine (Glaxo-Wellcome). Randomized placebo controlled trials, of patients with drug-resistant partial epilepsy of any age, in which an adequate method of concealment of randomization was used. The studies may be double, single or unblinded. For crossover studies, the first treatment period was treated as a parallel trial. Two reviewers independently assessed the trials for inclusion and extracted data. Primary analyses were by intention to treat. Outcomes included 50% or greater reduction in seizure frequency, treatment withdrawal (any reason), side effects, effects on cognition, and quality of life. We found three parallel add-on studies and eight cross-over studies, which included 1243 patients (199 children and 1044 adults). The overall Peto's Odds Ratio (OR) and 95% confidence intervals (CIs) across all studies for 50% or greater reduction in seizure frequency was 2.71 (1.87, 3.91) indicating that lamotrigine is significantly more effective than placebo in reducing seizure frequency. The overall OR (95%CI) for treatment withdrawal (for any reason) is 1.12 (0.78, 1.61). The 99% CIs for ataxia, dizziness, nausea, and diplopia do not include unity, indicating that they are significantly associated with lamotrigine. 
The limited data available preclude any conclusions about effects on cognition and quality of life, though there may be minor benefits in affect balance (happiness) and mastery. Lamotrigine add-on therapy is effective in reducing seizure frequency in patients with drug-resistant partial epilepsy. Further trials are needed to assess the long-term effects of lamotrigine, and to compare it with other add-on drugs.
Lamotrigine add-on for drug-resistant partial epilepsy.
Ramaratnam, S; Marson, A G; Baker, G A
2000-01-01
Epilepsy is a common neurological disorder, affecting almost 0.5 to 1% of the population. Nearly 30% of patients with epilepsy are refractory to currently available drugs. Lamotrigine is one of the newer antiepileptic drugs and is the topic of this review. To examine the effects of lamotrigine on seizures, side effects, cognition and quality of life, when used as an add-on treatment for patients with drug-resistant partial epilepsy. We searched the Cochrane Epilepsy Group trials register, the Cochrane Controlled Trials Register (Cochrane Library Issue 1, 2000), MEDLINE (January 1966 to December 1999) and reference lists of articles. We also contacted the manufacturers of lamotrigine (Glaxo-Wellcome). Randomized placebo controlled trials, of patients with drug-resistant partial epilepsy of any age, in which an adequate method of concealment of randomization was used. The studies may be double, single or unblinded. For crossover studies, the first treatment period was treated as a parallel trial. Two reviewers independently assessed the trials for inclusion and extracted data. Primary analyses were by intention to treat. Outcomes included 50% or greater reduction in seizure frequency, treatment withdrawal (any reason), side effects, effects on cognition, and quality of life. We found three parallel add-on studies and eight cross-over studies, which included 1243 patients (199 children and 1044 adults). The overall Peto's Odds Ratio (OR) and 95% confidence intervals (CIs) across all studies for 50% or greater reduction in seizure frequency was 2.71 (1.87, 3.91) indicating that lamotrigine is significantly more effective than placebo in reducing seizure frequency. The overall OR (95%CI) for treatment withdrawal (for any reason) is 1.12 (0.78, 1.61). The 99% CIs for ataxia, dizziness, nausea, and diplopia do not include unity, indicating that they are significantly associated with lamotrigine. 
The limited data available preclude any conclusions about effects on cognition and quality of life, though there may be minor benefits in affect balance (happiness) and mastery. Lamotrigine add-on therapy is effective in reducing the seizure frequency, in patients with drug-resistant partial epilepsy. Further trials are needed to assess the long term effects of lamotrigine, and to compare it with other add-on drugs.
NASA Astrophysics Data System (ADS)
Cappa, Paolo; Sciuto, Salvatore Andrea; Silvestri, Sergio
2002-06-01
A patient active simulator is proposed which is capable of reproducing values of the parameters of pulmonary mechanics of healthy newborns and preterm pathological infants. The implemented prototype is able to: (a) let the operator choose the respiratory pattern, times of apnea, episodes of cough, sobs, etc., (b) continuously regulate and control the parameters characterizing the pulmonary system; and, finally, (c) reproduce the attempt of breathing of a preterm infant. Taking into account both the limitation due to the chosen application field and the preliminary autocalibration phase automatically carried out by the proposed device, accuracy and reliability on the order of 1% is estimated. The previously indicated value has to be considered satisfactory in light of the field of application and the small values of the simulated parameters. Finally, the achieved metrological characteristics allow the described neonatal simulator to be adopted as a reference device to test performances of neonatal ventilators and, more specifically, to measure the time elapsed between the occurrence of a potentially dangerous condition to the patient and the activation of the corresponding alarm of the tested ventilator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan
2010-01-01
Calibration of groundwater models involves hundreds to thousands of forward solutions, each of which may solve many transient coupled nonlinear partial differential equations, resulting in a computationally intensive problem. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multi-core computers. HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for direct solutions for a reactive transport model application, and a field-scale coupled flow and transport model application. In the reactive transport model, a single parallelizable loop is identified to account for over 97% of the total computational time using GPROF. Addition of a few lines of OpenMP compiler directives to the loop yields a speedup of about 10 on a 16-core compute node. For the field-scale model, parallelizable loops in 14 of 174 HGC5 subroutines that require 99% of the execution time are identified. As these loops are parallelized incrementally, the scalability is found to be limited by a loop where Cray PAT detects over 90% cache miss rates. With this loop rewritten, a similar speedup as in the first application is achieved. The OpenMP-parallelized code can be run efficiently on multiple workstations in a network or multiple compute nodes on a cluster as slaves using parallel PEST to speed up model calibration. To run calibration on clusters as a single task, the Levenberg-Marquardt algorithm is added to HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, 100-200 compute cores are used to reduce the calibration time from weeks to a few hours for these two applications. This approach is applicable to most existing groundwater model codes for many applications.
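The reported numbers (a loop covering 97% of runtime yielding roughly 10x on 16 cores) are consistent with Amdahl's law, which bounds the speedup achievable when only a fraction of the work is parallelizable. A one-function sketch of the bound:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: upper bound on speedup when a fraction p of the
    runtime is perfectly parallelizable across the given core count."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# 97% parallelizable on 16 cores:
print(round(amdahl_speedup(0.97, 16), 1))  # ~11.0, an upper bound
                                           # consistent with the observed ~10x
```

The 3% serial remainder, not the core count, dominates the bound: even with unlimited cores the speedup could never exceed 1/0.03 ≈ 33x, which is why the field-scale case needed 14 separate loops parallelized to reach 99% coverage.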
Lu, Han; Liu, Xiaochang; Zhang, Yuemei; Wang, Hang; Luo, Yongkang
2015-12-01
To investigate the effects of chilling and partial freezing on rigor mortis changes in bighead carp (Aristichthys nobilis), pH, cathepsin B, cathepsin B+L activities, SDS-PAGE of sarcoplasmic and myofibrillar proteins, texture, and changes in microstructure of fillets at 4 °C and -3 °C were determined at 0, 2, 4, 8, 12, 24, 48, and 72 h after slaughter. The results indicated that pH of fillets (6.50 to 6.80) was appropriate for cathepsin function during the rigor mortis. For fillets that were chilled and partially frozen, the cathepsin activity in lysosome increased consistently during the first 12 h, followed by a decrease from the 12 to 24 h, which paralleled an increase in activity in heavy mitochondria, myofibrils and sarcoplasm. There was no significant difference in cathepsin activity in lysosomes between fillets at 4 °C and -3 °C (P > 0.05). Partially frozen fillets had greater cathepsin activity in heavy mitochondria than chilled samples from the 48 to 72 h. In addition, partially frozen fillets showed higher cathepsin activity in sarcoplasm and lower cathepsin activity in myofibrils compared with chilled fillets. Correspondingly, we observed degradation of α-actinin (105 kDa) by cathepsin L in chilled fillets and degradation of creatine kinase (41 kDa) by cathepsin B in partially frozen fillets during the rigor mortis. The decline of hardness for both fillets might be attributed to the accumulation of cathepsin in myofibrils from the 8 to 24 h. The lower cathepsin activity in myofibrils for fillets that were partially frozen might induce a more intact cytoskeletal structure than fillets that were chilled. © 2015 Institute of Food Technologists®
Increased Energy Delivery for Parallel Battery Packs with No Regulated Bus
NASA Astrophysics Data System (ADS)
Hsu, Chung-Ti
In this dissertation, a new approach to paralleling different battery types is presented. A method for controlling charging/discharging of different battery packs by using low-cost bi-directional switches instead of DC-DC converters is proposed. The proposed system architecture, algorithms, and control techniques allow batteries with different chemistry, voltage, and SOC to be properly charged and discharged in parallel without causing safety problems. The physical design and cost for the energy management system is substantially reduced. Additionally, specific types of failures in the maximum power point tracking (MPPT) in a photovoltaic (PV) system when tracking only the load current of a DC-DC converter are analyzed. The periodic nonlinear load current will lead MPPT realized by the conventional perturb and observe (P&O) algorithm to be problematic. A modified MPPT algorithm is proposed and it still only requires typically measured signals, yet is suitable for both linear and periodic nonlinear loads. Moreover, for a modular DC-DC converter using several converters in parallel, the input power from PV panels is processed and distributed at the module level. Methods for properly implementing distributed MPPT are studied. A new approach to efficient MPPT under partial shading conditions is presented. The power stage architecture achieves fast input current change rate by combining a current-adjustable converter with a few converters operating at a constant current.
NASA Astrophysics Data System (ADS)
Bin-Mohsin, Bandar; Ahmed, Naveed; Adnan; Khan, Umar; Tauseef Mohyud-Din, Syed
2017-04-01
This article deals with the bioconvection flow in a parallel-plate channel. The plates are parallel and the flowing fluid is saturated with nanoparticles; water is considered as the base fluid because microorganisms can survive only in water. A highly nonlinear and coupled system of partial differential equations presenting the model of bioconvection flow between parallel plates is reduced to a nonlinear and coupled system (nondimensional bioconvection flow model) of ordinary differential equations with the help of feasible nondimensional variables. In order to find the convergent solution of the system, a semi-analytical technique called the variation of parameters method (VPM) is utilized. A numerical solution is also computed, employing the fourth-order Runge-Kutta scheme for this purpose. Comparison between these solutions has been made on the domain of interest and they are found to be in excellent agreement. Also, the influence of various parameters has been discussed for the nondimensional velocity, temperature, concentration and density of the motile microorganisms, both for suction and injection cases. An almost inconsequential influence of the thermophoretic and Brownian motion parameters on the temperature field is observed. Interesting variations are observed in the density of the motile microorganisms due to the varying bioconvection parameter in suction and injection cases. At the end, we make some concluding remarks in the light of this article.
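The fourth-order Runge-Kutta scheme used for the numerical comparison above is standard; a minimal sketch on a test problem with a known exact solution (our own illustration, not the bioconvection system itself):

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t1, steps):
    """Integrate y' = f(t, y) from t0 to t1 with fixed-size RK4 steps."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# y' = y with y(0) = 1 has exact solution e^t; check the error at t = 1:
y1 = integrate(lambda t, y: y, 0.0, 1.0, 1.0, 100)
print(abs(y1 - math.e))  # roughly 1e-10 for 100 steps (4th-order accuracy)
```

The O(h^4) global accuracy is what makes RK4 a trustworthy benchmark against which a semi-analytical method like VPM can be validated.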
The parallel-sequential field subtraction technique for coherent nonlinear ultrasonic imaging
NASA Astrophysics Data System (ADS)
Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.
2018-06-01
Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage than was previously possible and have sensitivity to partially closed defects. This study explores a coherent imaging technique based on the subtraction of two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In parallel focusing a high intensity ultrasonic beam is formed in the specimen at the focal point. However, in sequential focusing only low intensity signals from individual elements enter the sample, and the full matrix of transmit-receive signals is recorded and post-processed to form an image. Under linear elastic assumptions, both parallel and sequential images are expected to be identical. Here we measure the difference between these images and use this to characterise the nonlinearity of small closed fatigue cracks. In particular we monitor the change in relative phase and amplitude at the fundamental frequencies for each focal point and use this nonlinear coherent imaging metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g. back wall or large scatterers) effectively when instrumentation noise compensation is applied, thereby allowing damage to be detected at an early stage (c. 15% of fatigue life) and reliably quantified in later fatigue life.
Parallel block schemes for large scale least squares computations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golub, G.H.; Plemmons, R.J.; Sameh, A.
1986-04-01
Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.
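The property that makes block angular systems attractive for parallelism is that each diagonal block can be factored independently. The toy sketch below drops the coupling columns entirely and solves invented single-unknown blocks, so it illustrates only the embarrassingly parallel part of the computation, not the paper's full factorization scheme:

```python
def block_least_squares(blocks):
    """Solve independent single-unknown least-squares blocks.  For each
    block, minimize ||a*x - b||^2, whose closed form is x = (a.b)/(a.a).
    In a block angular matrix the diagonal blocks can be processed in
    parallel exactly like this loop; the coupling columns (omitted here)
    are what ties the blocks together in the real problem."""
    solutions = []
    for a, b in blocks:
        x = sum(ai * bi for ai, bi in zip(a, b)) / sum(ai * ai for ai in a)
        solutions.append(x)
    return solutions

# Two invented overdetermined blocks (3 and 2 observations respectively):
blocks = [([1.0, 2.0, 3.0], [2.1, 3.9, 6.0]),
          ([1.0, 1.0], [5.0, 5.2])]
print(block_least_squares(blocks))  # one fitted unknown per block
```

In the geodetic problem each of the 161 blocks holds thousands of unknowns rather than one, so the per-block step is an orthogonal (QR) factorization instead of this scalar formula, but the independence between blocks is the same.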
Detection of partial-thickness tears in ligaments and tendons by Stokes-polarimetry imaging
NASA Astrophysics Data System (ADS)
Kim, Jihoon; John, Raheel; Walsh, Joseph T.
2008-02-01
A Stokes polarimetry imaging (SPI) system utilizes an algorithm developed to construct degree of polarization (DoP) image maps from linearly polarized light illumination. Partial-thickness tears of turkey tendons were imaged by the SPI system in order to examine the feasibility of the system to detect partial-thickness rotator cuff tears or general tendon pathology. The rotating incident polarization angle (IPA) for the linearly polarized light provides a way to analyze different tissue types which may be sensitive to IPA variations. Degree of linear polarization (DoLP) images revealed collagen fiber structure, related to partial-thickness tears, better than standard intensity images. DoLP images also revealed structural changes in tears that are related to the tendon load. DoLP images with red-wavelength-filtered incident light may show tears and related organization of collagen fiber structure at a greater depth from the tendon surface. Degree of circular polarization (DoCP) images clearly exhibited the horizontal fiber orientation that is not parallel to the vertically aligned collagen fibers of the tendon. The SPI system's DoLP images reveal alterations in tendons and ligaments, which have a tissue matrix consisting largely of collagen, better than intensity images. All polarized images showed modulated intensity as the IPA was varied. Optimal detection of the partial-thickness tendon tears was observed at a certain IPA. The SPI system with varying IPA and spectral information can improve the detection of partial-thickness rotator cuff tears through higher visibility of fiber orientations and thereby improve diagnosis and treatment of tendon-related injuries.
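The DoP, DoLP and DoCP maps described above derive pixel-wise from the Stokes vector (S0, S1, S2, S3). The standard definitions are shown below as a sketch of our own, not the SPI system's code:

```python
import math

def polarization_metrics(s0, s1, s2, s3):
    """Degree of total, linear and circular polarization from the Stokes
    parameters of a single pixel (s0 is the total intensity)."""
    dop  = math.sqrt(s1**2 + s2**2 + s3**2) / s0  # total DoP
    dolp = math.sqrt(s1**2 + s2**2) / s0          # linear part (DoLP)
    docp = abs(s3) / s0                           # circular part (DoCP)
    return dop, dolp, docp

# Fully linearly polarized light at 0 degrees has S = (1, 1, 0, 0):
print(polarization_metrics(1.0, 1.0, 0.0, 0.0))  # (1.0, 1.0, 0.0)
```

Applying this per pixel to the four measured Stokes images yields the DoP/DoLP/DoCP maps; values range from 0 (unpolarized) to 1 (fully polarized), which is why collagen-aligned regions stand out against depolarizing tissue.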
McNicholl, Janet M.
2016-01-01
Biomedical preventions for HIV, such as vaccines, microbicides or pre-exposure prophylaxis (PrEP) with antiretroviral drugs, can each only partially prevent HIV-1 infection in most human trials. Oral PrEP is now FDA approved for HIV-prevention in high risk groups, but partial adherence reduces efficacy. If combined as biomedical preventions (CBP) an HIV vaccine could provide protection when PrEP adherence is low and PrEP could prevent vaccine breakthroughs. Other types of PrEP or microbicides may also be partially protective. When licensed, first generation HIV vaccines are likely to be partially effective. Individuals at risk for HIV may receive an HIV vaccine combined with other biomedical preventions, in series or in parallel, in clinical trials or as part of standard of care, with the goal of maximally increasing HIV prevention. In human studies, it is challenging to determine which preventions are best combined, how they interact and how effective they are. Animal models can determine CBP efficacy, whether additive or synergistic, the efficacy of different products and combinations, dose, timing and mechanisms. CBP studies in macaques have shown that partially or minimally effective candidate HIV vaccines combined with partially effective oral PrEP, vaginal PrEP or microbicide generally provided greater protection than either prevention alone against SIV or SHIV challenges. Since human CBP trials will be complex, animal models can guide their design, sample size, endpoints, correlates and surrogates of protection. This review focuses on animal studies and human models of CBP and discusses implications for HIV prevention. PMID:27679928
McNicholl, Janet M
2016-12-01
Biomedical preventions for HIV, such as vaccines, microbicides or pre-exposure prophylaxis (PrEP) with antiretroviral drugs, can each only partially prevent HIV-1 infection in most human trials. Oral PrEP is now FDA approved for HIV-prevention in high risk groups, but partial adherence reduces efficacy. If combined as biomedical preventions (CBP) an HIV vaccine could provide protection when PrEP adherence is low and PrEP could prevent vaccine breakthroughs. Other types of PrEP or microbicides may also be partially protective. When licensed, first generation HIV vaccines are likely to be partially effective. Individuals at risk for HIV may receive an HIV vaccine combined with other biomedical preventions, in series or in parallel, in clinical trials or as part of standard of care, with the goal of maximally increasing HIV prevention. In human studies, it is challenging to determine which preventions are best combined, how they interact and how effective they are. Animal models can determine CBP efficacy, whether additive or synergistic, the efficacy of different products and combinations, dose, timing and mechanisms. CBP studies in macaques have shown that partially or minimally effective candidate HIV vaccines combined with partially effective oral PrEP, vaginal PrEP or microbicide generally provided greater protection than either prevention alone against SIV or SHIV challenges. Since human CBP trials will be complex, animal models can guide their design, sample size, endpoints, correlates and surrogates of protection. This review focuses on animal studies and human models of CBP and discusses implications for HIV prevention.
A system for environmental model coupling and code reuse: The Great Rivers Project
NASA Astrophysics Data System (ADS)
Eckman, B.; Rice, J.; Treinish, L.; Barford, C.
2008-12-01
As part of the Great Rivers Project, IBM is collaborating with The Nature Conservancy and the Center for Sustainability and the Global Environment (SAGE) at the University of Wisconsin, Madison to build a Modeling Framework and Decision Support System (DSS) designed to help policy makers and a variety of stakeholders (farmers, fish & wildlife managers, hydropower operators, et al.) to assess, come to consensus, and act on land use decisions representing effective compromises between human use and ecosystem preservation/restoration. Initially focused on Brazil's Paraguay-Parana, China's Yangtze, and the Mississippi Basin in the US, the DSS integrates data and models from a wide variety of environmental sectors, including water balance, water quality, carbon balance, crop production, hydropower, and biodiversity. In this presentation we focus on the modeling framework aspect of this project. In our approach to these and other environmental modeling projects, we see a flexible, extensible modeling framework infrastructure for defining and running multi-step analytic simulations as critical. In this framework, we divide monolithic models into atomic components with clearly defined semantics, encoded via a rich metadata representation. Once models and their semantics and composition rules have been registered with the system by their authors or other experts, non-expert users may construct simulations as workflows of these atomic model components. A model composition engine enforces rules/constraints for composing model components into simulations, to avoid the creation of "Frankenmodels": models that execute but produce scientifically invalid results. A common software environment and common representations of data and models are required, as well as an adapter strategy for code written in, e.g., Fortran or Python, that still enables efficient simulation runs, including parallelization.
Since each new simulation, as a new composition of model components, requires calibration of parameters (fudge factors) to produce scientifically valid results, we are also developing an autocalibration engine. Finally, visualization is a key element of this modeling framework strategy, both to convey complex scientific data effectively, and also to enable non-expert users to make full use of the relevant features of the framework. We are developing a visualization environment with a strong data model, to enable visualizations, model results, and data all to be handled similarly.
Discussion summary: Fictitious domain methods
NASA Technical Reports Server (NTRS)
Glowinski, Rowland; Rodrigue, Garry
1991-01-01
Fictitious Domain methods are constructed in the following manner: Suppose a partial differential equation is to be solved on an open bounded set, Omega, in 2-D or 3-D. Let R be a rectangular domain containing the closure of Omega. The partial differential equation is first solved on R. Using the solution on R, the solution of the equation on Omega is then recovered by some procedure. The advantage of the fictitious domain method is that in many cases the solution of a partial differential equation on a rectangular region is easier to compute than on a nonrectangular region. Fictitious domain methods for solving elliptic PDEs on general regions are also very efficient when used on a parallel computer. The reason is that one can use the many domain decomposition methods that are available for solving the PDE on the fictitious rectangular region. The discussion on fictitious domain methods began with a talk by R. Glowinski in which he gave some examples of a variational approach to fictitious domain methods for solving the Helmholtz and Navier-Stokes equations.
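A toy illustration of the embedding idea (a simple volume-penalization variant, not the variational approach from the talk): an irregular domain Omega, here a disc, is embedded in a rectangle R, and the equation is solved on all of R with a penalty term that forces the solution toward zero outside Omega. Grid size, penalty strength, and the plain Jacobi solver are illustrative choices.

```python
import numpy as np

# Solve -Laplace(u) + (1/eps)*chi_outside*u = f on the unit square R.
# The large penalty outside Omega approximately enforces u = 0 there,
# so the restriction of u to Omega approximates the solution on Omega.
n, h, eps = 65, 1.0 / 64, 1e-6
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
inside = (X - 0.5) ** 2 + (Y - 0.5) ** 2 < 0.16   # Omega: disc of radius 0.4
f = np.ones((n, n))                               # right-hand side on R
penalty = np.where(inside, 0.0, 1.0 / eps)
u = np.zeros((n, n))
for _ in range(4000):                             # Jacobi iteration on all of R
    u_new = u.copy()
    u_new[1:-1, 1:-1] = (
        u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
        + h * h * f[1:-1, 1:-1]
    ) / (4.0 + h * h * penalty[1:-1, 1:-1])
    u = u_new
# Inside Omega the solution is O(0.04); outside it is forced near zero.
```

In practice the point of the rectangular embedding is exactly what the summary states: fast solvers and domain decomposition methods designed for rectangles apply unchanged.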
Assembly planning based on subassembly extraction
NASA Technical Reports Server (NTRS)
Lee, Sukhan; Shin, Yeong Gil
1990-01-01
A method is presented for the automatic determination of assembly partial orders from a liaison graph representation of an assembly through the extraction of preferred subassemblies. In particular, the authors show how to select a set of tentative subassemblies by decomposing a liaison graph into a set of subgraphs based on feasibility and difficulty of disassembly, how to evaluate each of the tentative subassemblies in terms of assembly cost using the subassembly selection indices, and how to construct a hierarchical partial order graph (HPOG) as an assembly plan. The method provides an approach to assembly planning by identifying spatial parallelism in assembly as a means of constructing temporal relationships among assembly operations and solves the problem of finding a cost-effective assembly plan in a flexible environment. A case study of the assembly planning of a mechanical assembly is presented.
Stowage and Deployment of Slit Tube Booms
NASA Technical Reports Server (NTRS)
Adams, Larry (Inventor); Turse, Dana (Inventor); Richardson, Doug (Inventor)
2016-01-01
A system comprising a boom having a first end, a longitudinal length, and a slit that extends along the longitudinal length of the boom; a drum having an elliptic cross section and a longitudinal length; an attachment mechanism coupled with the first end of the boom and the drum such that the boom and the drum are substantially perpendicular relative to one another; an inner shaft having a longitudinal length, the inner shaft disposed within the drum, the longitudinal length of the inner shaft is aligned substantially parallel with the longitudinal length of the drum, the inner shaft at least partially rotatable relative to the drum, and the inner shaft is at least partially rotatable with the drum; and at least two cords coupled with the inner shaft and portions of the boom near the first end of the boom.
NASA Astrophysics Data System (ADS)
Jackson-Blake, Leah; Helliwell, Rachel
2015-04-01
Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, spanning all hydrochemical conditions. However, regulatory agencies and research organisations generally only sample at a fortnightly or monthly frequency, even in well-studied catchments, often missing peak flow events. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by a process-based, semi-distributed catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the Markov Chain Monte Carlo - DiffeRential Evolution Adaptive Metropolis (MCMC-DREAM) algorithm. Calibration to daily data resulted in improved simulation of peak TDP concentrations and improved model performance statistics. Parameter-related uncertainty in simulated TDP was large when fortnightly data was used for calibration, with a 95% credible interval of 26 μg/l. This uncertainty is comparable in size to the difference between Water Framework Directive (WFD) chemical status classes, and would therefore make it difficult to use this calibration to predict shifts in WFD status. The 95% credible interval reduced markedly with the higher frequency monitoring data, to 6 μg/l. 
The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, with a physically unrealistic TDP simulation being produced when too many parameters were allowed to vary during model calibration. Parameters should not therefore be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. This study highlights the potential pitfalls of using low-frequency time series of observed water quality to calibrate complex process-based models. For reliable model calibrations to be produced, monitoring programmes need to be designed which capture system variability, in particular nutrient dynamics during high flow events. In addition, there is a need for simpler models, so that all model parameters can be included in auto-calibration and uncertainty analysis, and to reduce the data needs during calibration.
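The daily-versus-fortnightly effect on parameter uncertainty can be reproduced in miniature. The sketch below is not INCA-P or MCMC-DREAM: it calibrates a single decay-rate parameter of a synthetic exponential "concentration" model with a plain random-walk Metropolis sampler, once against daily observations and once against every fourteenth observation. All numbers are synthetic and chosen only to show that sparser data widens the posterior credible interval.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 540.0)                 # ~18 months of daily time steps
true_k, sigma = 0.01, 0.05                # true decay rate, observation noise
model = lambda k: np.exp(-k * t)
daily = model(true_k) + rng.normal(0.0, sigma, t.size)

def posterior_width(t_obs, y_obs, n_iter=20000):
    """95% credible-interval width for k from random-walk Metropolis."""
    def loglik(k):
        return -np.sum((y_obs - np.exp(-k * t_obs)) ** 2) / (2 * sigma**2)
    k, chain = 0.02, []
    ll = loglik(k)
    for _ in range(n_iter):
        prop = k + rng.normal(0.0, 0.002)  # random-walk proposal
        if prop > 0:
            ll_prop = loglik(prop)
            if np.log(rng.random()) < ll_prop - ll:   # Metropolis accept
                k, ll = prop, ll_prop
        chain.append(k)
    lo, hi = np.percentile(chain[n_iter // 2:], [2.5, 97.5])  # drop burn-in
    return hi - lo

w_daily = posterior_width(t, daily)
w_fortnightly = posterior_width(t[::14], daily[::14])
# Fewer observations -> wider parameter-related uncertainty.
```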
CosApps: Simulate gravitational lensing through ray tracing and shear calculation
NASA Astrophysics Data System (ADS)
Coss, David
2017-12-01
Cosmology Applications (CosApps) provides tools to simulate gravitational lensing using two different techniques, ray tracing and shear calculation. The tool ray_trace_ellipse calculates deflection angles on a grid for light passing a deflecting mass distribution. Using MPI, ray_trace_ellipse may calculate deflection in parallel across network-connected computers, such as a cluster. The program physcalc calculates the gravitational lensing shear using the relationship of convergence and shear, described by a set of coupled partial differential equations.
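The grid-based deflection-angle calculation can be sketched for the simplest lens, a point mass, where the scaled deflection is alpha = theta_E^2 * b / |b|^2. The Einstein radius and grid extent below are illustrative; CosApps itself handles general mass distributions and parallelizes the ray tracing with MPI.

```python
import numpy as np

theta_E = 1.0                                   # Einstein radius (arbitrary units)
xs = np.linspace(-2.0, 2.0, 401)
X, Y = np.meshgrid(xs, xs, indexing="ij")
B2 = X**2 + Y**2                                # squared impact parameter
B2[B2 == 0] = np.inf                            # avoid the central singularity
alpha_x = theta_E**2 * X / B2                   # deflection-angle components
alpha_y = theta_E**2 * Y / B2
# Lens equation: a ray observed at (X, Y) originated at (X - alpha_x, Y - alpha_y).
src_x, src_y = X - alpha_x, Y - alpha_y
```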
2014-01-01
… (2013b), increase expression of deafness genes (Valiyaveettil et al., 2012), and alter cochlear blood flow (Chen et al., 2013b), as well as result in … Intense noise exposure has been shown to reduce partial oxygen pressure and cochlear blood flow (Scheibe et al., 1992, 1993; Lamm and Arnold, 1999) … found in the cochlear microvasculature and spiral ganglia (Gosepath, 1997; Franz, 1996) and has been shown to maintain cerebral blood flow and blood …
Robust synchronization of spin-torque oscillators with an LCR load.
Pikovsky, Arkady
2013-09-01
We study the dynamics of a serial array of spin-torque oscillators with a parallel inductor-capacitor-resistor (LCR) load. In a large range of parameters the fully synchronous regime, where all the oscillators have the same state and the output field is maximal, is shown to be stable. However, such robust complete synchronization does not always develop from a random initial state; in many cases nontrivial clustering is observed, with partial synchronization resulting in quasiperiodic or chaotic mean-field dynamics.
Introduction to Numerical Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schoonover, Joseph A.
2016-06-14
These are slides for a lecture for the Parallel Computing Summer Research Internship at the National Security Education Center. This gives an introduction to numerical methods. Repetitive algorithms are used to obtain approximate solutions to mathematical problems, covering sorting, searching, root finding, optimization, interpolation, extrapolation, least squares regression, eigenvalue problems, ordinary differential equations, and partial differential equations. Many equations are shown. Discretizations allow us to approximate solutions to mathematical models of physical systems using a repetitive algorithm and introduce errors that can lead to numerical instabilities if we are not careful.
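The "repetitive algorithms" surveyed in such lectures can be as simple as bisection root finding: repeatedly halve an interval known to bracket a root until the error is below tolerance. The function and tolerance here are illustrative.

```python
def bisect(f, a, b, tol=1e-10):
    """Find x with f(x) = 0, assuming f(a) and f(b) have opposite signs."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:         # sign change in [a, m]: root lies there
            b = m
        else:                    # otherwise the root lies in [m, b]
            a, fa = m, fm
    return 0.5 * (a + b)

root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)   # approximates sqrt(2)
```

Each iteration halves the interval, so the error after n steps is (b - a) / 2**n: a concrete example of a discretized, repetitive algorithm with a controllable approximation error.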
Linear and nonlinear stability of the Blasius boundary layer
NASA Technical Reports Server (NTRS)
Bertolotti, F. P.; Herbert, TH.; Spalart, P. R.
1992-01-01
Two new techniques for the study of the linear and nonlinear instability in growing boundary layers are presented. The first technique employs partial differential equations of parabolic type exploiting the slow change of the mean flow, disturbance velocity profiles, wavelengths, and growth rates in the streamwise direction. The second technique solves the Navier-Stokes equation for spatially evolving disturbances using buffer zones adjacent to the inflow and outflow boundaries. Results of both techniques are in excellent agreement. The linear and nonlinear development of Tollmien-Schlichting (TS) waves in the Blasius boundary layer is investigated with both techniques and with a local procedure based on a system of ordinary differential equations. The results are compared with previous work and the effects of non-parallelism and nonlinearity are clarified. The effect of nonparallelism is confirmed to be weak and, consequently, not responsible for the discrepancies between measurements and theoretical results for parallel flow.
Sparse Partial Equilibrium Tables in Chemically Resolved Reactive Flow
NASA Astrophysics Data System (ADS)
Vitello, Peter; Fried, Laurence E.; Pudliner, Brian; McAbee, Tom
2004-07-01
The detonation of an energetic material is the result of a complex interaction between kinetic chemical reactions and hydrodynamics. Unfortunately, little is known concerning the detailed chemical kinetics of detonations in energetic materials. CHEETAH uses rate laws to treat species with the slowest chemical reactions, while assuming other chemical species are in equilibrium. CHEETAH supports a wide range of elements and condensed detonation products and can also be applied to gas detonations. A sparse hash table of equation of state values is used in CHEETAH to enhance the efficiency of kinetic reaction calculations. For large-scale parallel hydrodynamic calculations, CHEETAH uses parallel communication to update the cache. We present here details of the sparse caching model used in CHEETAH coupled to an ALE hydrocode. To demonstrate the efficiency of modeling using a sparse cache model, we consider detonations in energetic materials.
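The sparse-caching idea can be sketched in a few lines: expensive equation-of-state (EOS) evaluations are stored in a hash table keyed by the quantized thermodynamic state, so repeated queries during kinetic-rate updates hit the cache instead of recomputing. The EOS function and quantization step below are stand-ins, not CHEETAH's actual model.

```python
import math

eos_cache = {}
calls = {"n": 0}                  # counts how often the "costly solve" runs

def eos_pressure(density, energy, step=1e-3):
    # Quantizing the state yields a sparse integer key: only visited cells
    # of (density, energy) space ever occupy memory.
    key = (round(density / step), round(energy / step))
    if key not in eos_cache:
        calls["n"] += 1           # stand-in for an expensive EOS evaluation
        eos_cache[key] = density * energy * (1.0 + 0.1 * math.tanh(energy))
    return eos_cache[key]

p1 = eos_pressure(1.60, 4.20)
p2 = eos_pressure(1.6000004, 4.2000004)   # same cell: served from the cache
```

In a parallel hydrocode, each rank would additionally exchange newly computed cache entries with its neighbors, which is the "parallel communication to update the cache" the abstract refers to.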
Unweighted least squares phase unwrapping by means of multigrid techniques
NASA Astrophysics Data System (ADS)
Pritt, Mark D.
1995-11-01
We present a multigrid algorithm for unweighted least squares phase unwrapping. This algorithm applies Gauss-Seidel relaxation schemes to solve the Poisson equation on smaller, coarser grids and transfers the intermediate results to the finer grids. This approach forms the basis of our multigrid algorithm for weighted least squares phase unwrapping, which is described in a separate paper. The key idea of our multigrid approach is to maintain the partial derivatives of the phase data in separate arrays and to correct these derivatives at the boundaries of the coarser grids. This maintains the boundary conditions necessary for rapid convergence to the correct solution. Although the multigrid algorithm is an iterative algorithm, we demonstrate that it is nearly as fast as the direct Fourier-based method. We also describe how to parallelize the algorithm for execution on a distributed-memory parallel processor computer or a network-cluster of workstations.
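A single-grid sketch of the least-squares step the paper accelerates with multigrid: the unwrapped phase solves a discrete Poisson equation whose right-hand side is the divergence of the wrapped phase differences (the partial derivatives kept "in separate arrays" above). Plain Gauss-Seidel relaxation stands in for the multigrid cycle; the array sizes and iteration count are illustrative.

```python
import numpy as np

def wrap(a):
    return (a + np.pi) % (2 * np.pi) - np.pi

def unwrap_ls(psi, n_iter=2000):
    ny, nx = psi.shape
    dx = wrap(np.diff(psi, axis=1))          # wrapped x-derivatives
    dy = wrap(np.diff(psi, axis=0))          # wrapped y-derivatives
    rho = np.zeros((ny, nx))                 # divergence of the derivatives
    rho[:, :-1] += dx; rho[:, 1:] -= dx
    rho[:-1, :] += dy; rho[1:, :] -= dy
    phi = np.zeros((ny, nx))
    for _ in range(n_iter):                  # Gauss-Seidel relaxation sweeps
        for j in range(ny):
            for i in range(nx):
                s, w = 0.0, 0                # sum and count of neighbors
                if i > 0:      s += phi[j, i-1]; w += 1
                if i < nx-1:   s += phi[j, i+1]; w += 1
                if j > 0:      s += phi[j-1, i]; w += 1
                if j < ny-1:   s += phi[j+1, i]; w += 1
                phi[j, i] = (s - rho[j, i]) / w
    return phi

true_phase = np.linspace(0.0, 6 * np.pi, 16)[None, :] * np.ones((8, 1))
phi = unwrap_ls(wrap(true_phase))
# phi recovers true_phase up to an additive constant.
```

The multigrid algorithm of the paper replaces the slow fine-grid sweeps with coarse-grid corrections, which is what makes it competitive with the direct Fourier-based solver.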
Radiation-MHD Simulations of Pillars and Globules in HII Regions
NASA Astrophysics Data System (ADS)
Mackey, J.
2012-07-01
Implicit and explicit raytracing-photoionisation algorithms have been implemented in the author's radiation-magnetohydrodynamics code. The algorithms are described briefly and their efficiency and parallel scaling are investigated. The implicit algorithm is more efficient for calculations where ionisation fronts have very supersonic velocities, and the explicit algorithm is favoured in the opposite limit because of its better parallel scaling. The implicit method is used to investigate the effects of initially uniform magnetic fields on the formation and evolution of dense pillars and cometary globules at the boundaries of HII regions. It is shown that for weak and medium field strengths an initially perpendicular field is swept into alignment with the pillar during its dynamical evolution, matching magnetic field observations of the ‘Pillars of Creation’ in M16. A strong perpendicular magnetic field remains in its initial configuration and also confines the photoevaporation flow into a bar-shaped, dense, ionised ribbon which partially shields the ionisation front.
NASA Astrophysics Data System (ADS)
Couillard, M.; Yurtsever, A.; Muller, D. A.
2010-05-01
Waveguide electromagnetic modes excited by swift electrons traversing Si slabs at normal and oblique incidence are analyzed using monochromated electron energy-loss spectroscopy and interpreted using a local dielectric theory that includes relativistic effects. At normal incidence, sharp spectral features in the visible/near-infrared optical domain are directly assigned to p-polarized modes. When the specimen is tilted, s-polarized modes, which are completely absent at normal incidence, become visible in the loss spectra. In the tilted configuration, the dispersion of p-polarized modes is also modified. For tilt angles higher than ~50°, Cherenkov radiation, the phenomenon responsible for the excitation of waveguide modes, is expected to partially escape the silicon slab and the influence of this effect on experimental measurements is discussed. Finally, we find evidence for an interference effect at parallel Si/SiO2 interfaces, as well as a delocalized excitation of guided Cherenkov modes.
Parallel regulation of feedforward inhibition and excitation during whisker map plasticity
House, David RC; Elstrott, Justin; Koh, Eileen; Chung, Jason; Feldman, Daniel E.
2011-01-01
Sensory experience drives robust plasticity of sensory maps in cerebral cortex, but the role of inhibitory circuits in this process is not fully understood. We show that classical deprivation-induced whisker map plasticity in layer 2/3 (L2/3) of rat somatosensory (S1) cortex involves robust weakening of L4-L2/3 feedforward inhibition. This weakening was caused by reduced L4 excitation onto L2/3 fast-spiking (FS) interneurons, which mediate sensitive feedforward inhibition, and was partially offset by strengthening of unitary FS to L2/3 pyramidal cell synapses. Weakening of feedforward inhibition paralleled the known weakening of feedforward excitation, so that mean excitatory-inhibitory balance and timing onto L2/3 pyramidal cells were preserved. Thus, reduced feedforward inhibition is a covert compensatory process that can maintain excitatory-inhibitory balance during classical deprivation-induced Hebbian map plasticity. PMID:22153377
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids (global and local) to provide adaptive resolution and fast solution of PDEs. Like all such methods, it offers parallelism by using possibly many disconnected patches per level, but is hindered by the need to handle these levels sequentially. The finest levels must therefore wait for processing to be essentially completed on all the coarser ones. A recently developed asynchronous version of FAC, called AFAC, completely eliminates this bottleneck to parallelism. This paper describes timing results for AFAC, coupled with a simple load balancing scheme, applied to the solution of elliptic PDEs on an Intel iPSC hypercube. These tests include performance of certain processes necessary in adaptive methods, including moving grids and changing refinement. A companion paper reports on numerical and analytical results for estimating convergence factors of AFAC applied to very large scale examples.
NASA Astrophysics Data System (ADS)
Kobayashi, Kiyoshi; Suzuki, Tohru S.
2018-03-01
A new algorithm for the automatic estimation of an equivalent circuit and the subsequent parameter optimization is developed by combining the data-mining concept and complex least-squares method. In this algorithm, the program generates an initial equivalent-circuit model based on the sampling data and then attempts to optimize the parameters. The basic hypothesis is that the measured impedance spectrum can be reproduced by the sum of the partial-impedance spectra presented by the resistor, inductor, resistor connected in parallel to a capacitor, and resistor connected in parallel to an inductor. The adequacy of the model is determined by using a simple artificial-intelligence function, which is applied to the output function of the Levenberg-Marquardt module. From the iteration of model modifications, the program finds an adequate equivalent-circuit model without any user input to the equivalent-circuit model.
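The complex least-squares step can be illustrated by fitting the impedance of one assumed circuit, a resistor R0 in series with a parallel R1-C1 element, to a sampled spectrum using a minimal hand-rolled Levenberg-Marquardt loop. The synthetic spectrum and starting guess are placeholders for measured data (C1 is carried in microfarads to keep the parameters similarly scaled); the paper's automatic model generation and model-adequacy check are not reproduced.

```python
import numpy as np

w = 2 * np.pi * np.logspace(0, 5, 60)          # angular frequency samples

def z_model(p):
    r0, r1, c1_uF = p
    c1 = c1_uF * 1e-6
    return r0 + r1 / (1 + 1j * w * r1 * c1)    # R0 + (R1 || C1)

def residual(p):
    d = z_model(p) - z_data
    return np.concatenate([d.real, d.imag])    # stack real/imag parts

z_data = z_model([10.0, 100.0, 1.0])           # noiseless synthetic "data"

p, lam = np.array([5.0, 50.0, 0.5]), 1e-3
for _ in range(100):                           # Levenberg-Marquardt iterations
    r = residual(p)
    # Numerical Jacobian, stepping each parameter relative to its scale.
    J = np.column_stack([
        (residual(p + np.eye(3)[k] * (1e-6 * abs(p[k]) + 1e-12)) - r)
        / (1e-6 * abs(p[k]) + 1e-12)
        for k in range(3)
    ])
    A = J.T @ J
    step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -J.T @ r)
    if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
        p, lam = p + step, lam * 0.3           # accept step, reduce damping
    else:
        lam *= 10.0                            # reject step, damp more strongly
```

The paper's algorithm wraps such an optimization in an outer loop that adds or removes R, L, R||C and R||L elements until the fit is judged adequate.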
Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU
Xia, Yong; Zhang, Henggui
2015-01-01
Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a big challenge to the traditional computation resources based on CPU environment, which already cannot meet the requirement of the whole computation demands or are not easily available due to expensive costs. GPU as a parallel computing environment therefore provides an alternative to solve the large-scale computational problems of whole heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, a multicellular tissue model was split into two components: one is the single cell model (ordinary differential equation) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole heart simulations. PMID:26581957
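The ODE/PDE decoupling described above can be sketched on the CPU (NumPy stands in for the GPU kernels): each time step advances (1) the single-cell ODEs pointwise, here a FitzHugh-Nagumo-style stand-in rather than the sheep atrial cell model, and (2) the diffusion term of the monodomain PDE on the tissue grid. Grid size, constants, and time step are illustrative.

```python
import numpy as np

n, dt, D = 64, 0.05, 0.1
v = np.zeros((n, n))                           # membrane potential (scaled)
w = np.zeros((n, n))                           # recovery variable
v[:8, :8] = 1.0                                # initial stimulus in one corner

def step(v, w):
    # (1) cell-model ODEs, evaluated independently at every node
    dv = v * (v - 0.1) * (1.0 - v) - w
    dw = 0.01 * (0.5 * v - w)
    v, w = v + dt * dv, w + dt * dw
    # (2) diffusion term (5-point Laplacian, no-flux boundaries via padding)
    p = np.pad(v, 1, mode="edge")
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * v
    return v + dt * D * lap, w

for _ in range(400):
    v, w = step(v, w)
# A depolarization wave has spread beyond the stimulated corner.
```

Because step (1) is embarrassingly parallel across nodes and step (2) is a stencil operation, both map naturally onto GPU threads, which is what enables the speedup reported in the paper.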
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feo, J.T.
1993-10-01
This report contains papers on: Programmability and performance issues; The case of an iterative partial differential equation solver; Implementing the kernel of the Australian Region Weather Prediction Model in Sisal; Even and quarter-even prime length symmetric FFTs and their Sisal implementations; Top-down thread generation for Sisal; Overlapping communications and computations on NUMA architectures; Compiling technique based on dataflow analysis for functional programming language Valid; Copy elimination for true multidimensional arrays in Sisal 2.0; Increasing parallelism for an optimization that reduces copying in IF2 graphs; Caching in on Sisal; Cache performance of Sisal vs. FORTRAN; FFT algorithms on a shared-memory multiprocessor; A parallel implementation of nonnumeric search problems in Sisal; Computer vision algorithms in Sisal; Compilation of Sisal for a high-performance data driven vector processor; Sisal on distributed memory machines; A virtual shared addressing system for distributed memory Sisal; Developing a high-performance FFT algorithm in Sisal for a vector supercomputer; Implementation issues for IF2 on a static data-flow architecture; and Systematic control of parallelism in array-based data-flow computation. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.
General phase regularized reconstruction using phase cycling.
Ong, Frank; Cheng, Joseph Y; Lustig, Michael
2018-07-01
To develop a general phase regularized image reconstruction method, with applications to partial Fourier imaging, water-fat imaging and flow imaging. The problem of enforcing phase constraints in reconstruction was studied under a regularized inverse problem framework. A general phase regularized reconstruction algorithm was proposed to enable various joint reconstructions of partial Fourier imaging, water-fat imaging and flow imaging, along with parallel imaging (PI) and compressed sensing (CS). Since phase regularized reconstruction is inherently non-convex and sensitive to phase wraps in the initial solution, a reconstruction technique, named phase cycling, was proposed to render the overall algorithm invariant to phase wraps. The proposed method was applied to retrospectively under-sampled in vivo datasets and compared with state of the art reconstruction methods. Phase cycling reconstructions showed reduction of artifacts compared to reconstructions without phase cycling and achieved similar performance as state of the art results in partial Fourier, water-fat and divergence-free regularized flow reconstruction. Joint reconstruction of partial Fourier + water-fat imaging + PI + CS, and partial Fourier + divergence-free regularized flow imaging + PI + CS were demonstrated. The proposed phase cycling reconstruction provides an alternative way to perform phase regularized reconstruction, without the need to perform phase unwrapping. It is robust to the choice of initial solutions and encourages the joint reconstruction of phase imaging applications. Magn Reson Med 80:112-125, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Growth Responses of Neurospora crassa to Increased Partial Pressures of the Noble Gases and Nitrogen
Buchheit, R. G.; Schreiner, H. R.; Doebbler, G. F.
1966-01-01
Buchheit, R. G. (Union Carbide Corp., Tonawanda, N.Y.), H. R. Schreiner, and G. F. Doebbler. Growth responses of Neurospora crassa to increased partial pressures of the noble gases and nitrogen. J. Bacteriol. 91:622–627. 1966.—Growth rate of the fungus Neurospora crassa depends in part on the nature of metabolically “inert gas” present in its environment. At high partial pressures, the noble gas elements (helium, neon, argon, krypton, and xenon) inhibit growth in the order: Xe > Kr > Ar ≫ Ne ≫ He. Nitrogen (N2) closely resembles He in inhibitory effectiveness. Partial pressures required for 50% inhibition of growth were: Xe (0.8 atm), Kr (1.6 atm), Ar (3.8 atm), Ne (35 atm), and He (∼300 atm). With respect to inhibition of growth, the noble gases and N2 differ qualitatively and quantitatively from the order of effectiveness found with other biological effects, i.e., narcosis, inhibition of insect development, depression of O2-dependent radiation sensitivity, and effects on tissue-slice glycolysis and respiration. Partial pressures giving 50% inhibition of N. crassa growth parallel various physical properties (i.e., solubilities, solubility ratios, etc.) of the noble gases. Linear correlation of 50% inhibition pressures to the polarizability and of the logarithm of pressure to the first and second ionization potentials suggests the involvement of weak intermolecular interactions or charge-transfer in the biological activity of the noble gases. PMID:5883104
Mbah, Henry; Negedu-Momoh, Olubunmi Ruth; Adedokun, Oluwasanmi; Ikani, Patrick Anibbe; Balogun, Oluseyi; Sanwo, Olusola; Ochei, Kingsley; Ekanem, Maurice; Torpey, Kwasi
2014-01-01
The surge of donor funds to fight the HIV/AIDS epidemic inadvertently resulted in the setup of laboratories as parallel structures to rapidly respond to the identified need. However, these parallel structures are a threat to the existing fragile laboratory systems. Laboratory service integration is critical to remedy this situation. This paper describes an approach to quantitatively measure and track integration of HIV-related laboratory services into the mainstream laboratory services and highlights some key intervention steps taken to enhance service integration. This was a quantitative before-and-after study conducted in 122 Family Health International (FHI360)-supported health facilities across Nigeria. A minimum service package was identified, including management structure; trainings; equipment utilization and maintenance; and information, commodity and quality management for laboratory integration. A checklist was used to assess facilities at baseline and 3 months follow-up. Level of integration was assessed on an ordinal scale (0 = no integration, 1 = partial integration, 2 = full integration) for each service package. A composite score grading, expressed as a percentage of the total obtainable score of 14, was defined and used to classify facilities (≥ 80% FULL, 25% to 79% PARTIAL and <25% NO integration). Weaknesses were noted and addressed. We analyzed 9 (7.4%) primary, 104 (85.2%) secondary and 9 (7.4%) tertiary level facilities. There were statistically significant differences in integration levels between baseline and the 3-month follow-up period (p<0.01). Baseline median total integration score was 4 (IQR 3 to 5) compared to 7 (IQR 4 to 9) at 3 months follow-up (p = 0.000). Partially and fully integrated laboratory systems numbered 64 (52.5%) and 0 (0.0%) at baseline, compared to 100 (82.0%) and 3 (2.4%) respectively at 3 months follow-up (p = 0.000). This project showcases our novel approach to measure the status of each laboratory on the integration continuum.
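The composite scoring is simple enough to sketch directly: seven service-package areas rated 0 (none), 1 (partial) or 2 (full) give a maximum score of 14, and the percentage of that maximum classifies the facility. The area names below are paraphrased from the minimum service package and are illustrative.

```python
def classify(ratings):
    """ratings: dict of service-package area -> 0, 1 or 2."""
    pct = 100.0 * sum(ratings.values()) / (2 * len(ratings))
    if pct >= 80:
        return pct, "FULL"
    return pct, "PARTIAL" if pct >= 25 else "NO"

areas = ["management", "training", "equipment", "information",
         "commodity", "quality", "maintenance"]
baseline = dict.fromkeys(areas, 1)            # e.g. partial in every area
pct, level = classify(baseline)               # 7/14 -> 50.0, "PARTIAL"
```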
NASA Astrophysics Data System (ADS)
Zibner, F.; Fornaroli, C.; Holtkamp, J.; Shachaf, Lior; Kaplan, Natan; Gillner, A.
2017-08-01
High-precision laser micro machining gains more importance in industrial applications every month. Optical systems like the helical optics offer highest quality together with a controllable and adjustable drilling geometry, such as taper angle, aspect ratio and heat-affected zone. The helical optics is based on a rotating Dove prism which is mounted in a hollow-shaft engine together with other optical elements like wedge prisms and plane plates. Although the achieved quality can be interpreted as extremely high, the low process efficiency is a main reason that this manufacturing technology has only limited demand within the industrial market. The objective of the research studies presented in this paper is to dramatically increase process efficiency as well as process flexibility. During the last years, the average power of commercial ultra-short pulsed laser sources has increased significantly. The efficient utilization of the high average laser power in the field of material processing requires an effective distribution of the laser power onto the work piece. One approach to increase the efficiency is the application of beam splitting devices to enable parallel processing. Multi-beam processing is used to parallelize the fabrication of periodic structures, as most applications only require a partial amount of the emitted ultra-short pulsed laser power. In order to achieve highest flexibility while using multi-beam processing, the single beams are diverted and re-guided in a way that enables processing with each partial beam on locally separated probes or semi-finished parts.
NASA Astrophysics Data System (ADS)
Jones, Brendon R.; Brouwers, Luke B.; Dippenaar, Matthys A.
2018-05-01
Fractures are both rough and irregular but are commonly expressed by a simple model concept of two smooth parallel plates and the associated cubic law governing discharge through saturated fractures. However, in natural conditions and in the intermediate vadose zone, these assumptions are likely violated. This paper presents a qualitative experimental study investigating the cubic law under variable saturation in initially dry, free-draining discrete fractures. The study comprised flow visualisation experiments conducted on transparent replicas of smooth parallel plates with inlet conditions of constant pressure and differing flow rates, over both vertical and horizontal inclination. Flow conditions were altered to investigate the influence of intermittent and continuous influx scenarios. Findings from this research showed, for instance, that saturated laminar flow is not likely achieved, especially in nonhorizontal fractures. In vertical fractures, preferential flow occupies a minority of the cross-sectional area regardless of the water supply. Movement of water through the fractured vadose zone therefore becomes a matter of the continuity principle, whereby water should theoretically be transported downward at significantly higher flow rates given the very low degree of water saturation. Current techniques that aim to quantify discrete fracture flow, notably at partial saturation, are therefore questionable. The results of this study suggest that it is improbable to achieve saturation in vertical fractures under free-draining wetting conditions. Saturation does become possible under extreme water inflows or when the fracture is not free-draining; the converse is not true, however, as a wet vertical fracture can still drain.
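For reference, the cubic law that the study tests can be written down numerically. The form below, with discharge proportional to the aperture cubed, is the standard saturated parallel-plate relation; the water properties and function name are chosen for illustration:

```python
def cubic_law_discharge(aperture, width, gradient,
                        rho=998.0, g=9.81, mu=1.0e-3):
    """Volumetric discharge Q [m^3/s] through a saturated smooth
    parallel-plate fracture under the cubic law:

        Q = (rho * g * w * b**3 / (12 * mu)) * dh/dl

    with aperture b [m], plate width w [m], hydraulic gradient dh/dl [-],
    and water density rho, gravity g and dynamic viscosity mu in SI units.
    """
    return rho * g * width * aperture**3 * gradient / (12.0 * mu)

# A 0.1 mm aperture, 0.1 m wide, under unit gradient carries only
# ~8e-8 m^3/s; doubling the aperture multiplies discharge by eight.
q = cubic_law_discharge(1e-4, 0.1, 1.0)
```

The cubic dependence on aperture is exactly why the smooth-plate idealisation is so sensitive to the partial-saturation effects the experiments reveal.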
Nakamura, Kaori; Iwakabe, Shigeru
2018-03-01
The present study constructed a preliminary process model of corrective emotional experience (CEE) in an integrative affect-focused therapy. Task analysis was used to analyse 6 in-session events taken from 6 Japanese clients who worked with an integrative affect-focused therapist. The 6 events included 3 successful CEEs and 3 partially successful CEEs for comparison. A rational-empirical model of CEE was generated, which consisted of two parallel client change processes, intrapersonal change and interpersonal change, and the therapist interventions corresponding to each process. The therapist's experiential interventions and affirmation facilitated both intrapersonal and interpersonal change processes, whereas his relational interventions were associated with the interpersonal change process. The partially successful CEEs were differentiated by the absence of the component of core painful emotions or negative beliefs in the intrapersonal change process, which seemed crucial for the interpersonal change process to develop. CEE is best represented by a preliminary model that depicts two parallel yet interacting change processes. The intrapersonal change process is similar to the sequence of change described by the emotional processing model (Pascual-Leone & Greenberg, ), whereas the interpersonal change process is a unique contribution of this study. The interpersonal change process was facilitated when the therapist's active stance and use of immediacy responses to make the relational process explicit allowed a shared exploration. Therapist affirmation bridged intrapersonal change to interpersonal change by promoting an adaptive sense of self in clients and forging a deeper emotional connection between the two. Copyright © 2017 John Wiley & Sons, Ltd.
Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Giorgos, E-mail: garab@math.uoc.gr; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Plechac, Petr, E-mail: plechac@math.udel.edu
2012-10-01
We present a mathematical framework for constructing and analyzing parallel algorithms for lattice kinetic Monte Carlo (KMC) simulations. The resulting algorithms have the capacity to simulate a wide range of spatio-temporal scales in spatially distributed, non-equilibrium physicochemical processes with complex chemistry and transport micro-mechanisms. Rather than focusing on constructing the stochastic trajectories exactly, our approach relies on approximating the evolution of observables, such as density, coverage, correlations and so on. More specifically, we develop a spatial domain decomposition of the Markov operator (generator) that describes the evolution of all observables according to the kinetic Monte Carlo algorithm. This domain decomposition corresponds to a decomposition of the Markov generator into a hierarchy of operators and can be tailored to specific hierarchical parallel architectures such as multi-core processors or clusters of Graphical Processing Units (GPUs). Based on this operator decomposition, we formulate parallel fractional-step kinetic Monte Carlo algorithms by employing the Trotter theorem and its randomized variants; these schemes (a) are partially asynchronous on each fractional-step time window, and (b) are characterized by their communication schedule between processors. The proposed mathematical framework allows us to rigorously justify the numerical and statistical consistency of the proposed algorithms, showing the convergence of our approximating schemes to the original serial KMC. The approach also provides a systematic evaluation of different processor communication schedules. We carry out a detailed benchmarking of the parallel KMC schemes using available exact solutions, for example, in Ising-type systems, and we demonstrate the capabilities of the method to simulate complex spatially distributed reactions at very large scales on GPUs. Finally, we discuss work load balancing between processors and propose a re-balancing scheme based on probabilistic mass transport methods.
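The core idea, approximating the full evolution operator by alternating its sub-operators via the Trotter theorem, can be illustrated on a toy generator. The 3-state chain and its split into two operators below are invented for the example (the paper works with lattice KMC generators decomposed over spatial subdomains):

```python
import numpy as np
from scipy.linalg import expm

# Generator of a toy 3-state continuous-time Markov chain, split into
# two "subdomain" operators L = L1 + L2; each row of each part sums to
# zero, so both pieces are themselves valid generators.
L1 = np.array([[-1.0, 1.0, 0.0],
               [ 0.0, 0.0, 0.0],
               [ 0.0, 2.0, -2.0]])
L2 = np.array([[ 0.0, 0.0, 0.0],
               [ 1.0, -1.0, 0.0],
               [ 0.0, 0.0, 0.0]])
L = L1 + L2

t = 1.0
exact = expm(t * L)                      # exact evolution operator e^{tL}

def lie_trotter(n):
    """n fractional steps of the Lie-Trotter scheme (e^{tL1/n} e^{tL2/n})^n."""
    step = expm(t * L1 / n) @ expm(t * L2 / n)
    return np.linalg.matrix_power(step, n)

# The splitting error decays as O(1/n): refining the fractional step
# drives the approximation toward the exact evolution operator.
err_4 = np.abs(lie_trotter(4) - exact).max()
err_32 = np.abs(lie_trotter(32) - exact).max()
```

The parallel KMC schemes in the paper apply this same fractional-step logic to the spatially decomposed generator, with each sub-operator simulated concurrently on its own processor during a time window.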
Loodts, V; Trevelyan, P M J; Rongy, L; De Wit, A
2016-10-01
Various spatial density profiles can develop in partially miscible stratifications when a phase A dissolves with a finite solubility into a host phase containing a dissolved reactant B. We investigate theoretically the impact of an A+B→C reaction on such density profiles in the host phase and classify them in a parameter space spanned by the ratios of relative contributions to density and diffusion coefficients of the chemical species. While the density profile is either monotonically increasing or decreasing in the nonreactive case, reactions combined with differential diffusivity can create eight different types of density profiles featuring up to two extrema in density, at the reaction front or below it. We use this framework to predict various possible hydrodynamic instability scenarios inducing buoyancy-driven convection around such reaction fronts when they propagate parallel to the gravity field.
A partially reflecting random walk on spheres algorithm for electrical impedance tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maire, Sylvain, E-mail: maire@univ-tln.fr; Simon, Martin, E-mail: simon@math.uni-mainz.de
2015-12-15
In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel, so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
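The plain (non-reflecting) random walk on spheres that the estimator builds on is compact enough to sketch. Below it solves a Laplace problem with Dirichlet data on the unit disk, a much simpler setting than the paper's mixed-boundary EIT problem; function names and the stopping tolerance are illustrative:

```python
import math
import random

def walk_on_spheres(x, y, g, eps=1e-3, rng=random):
    """One sample of u(x, y) solving Laplace's equation in the unit disk
    with Dirichlet data g on the boundary: repeatedly jump to a uniform
    point on the largest circle centred at the current position that
    stays inside the domain, until within eps of the boundary."""
    while True:
        r = 1.0 - math.hypot(x, y)          # distance to the boundary
        if r < eps:
            s = math.hypot(x, y)            # project onto the boundary
            return g(x / s, y / s)
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)

def estimate(x, y, g, n=20000, seed=0):
    """Monte Carlo average of n independent walks; each walk is
    independent, which is what makes the method embarrassingly parallel."""
    rng = random.Random(seed)
    return sum(walk_on_spheres(x, y, g, rng=rng) for _ in range(n)) / n

# Harmonic check: for boundary data g(x, y) = x the exact solution is
# u(x, y) = x, so the estimate at (0.3, 0.2) should be close to 0.3.
u = estimate(0.3, 0.2, lambda bx, by: bx)
```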
On the transmission of partial information: inferences from movement-related brain potentials
NASA Technical Reports Server (NTRS)
Osman, A.; Bashore, T. R.; Coles, M. G.; Donchin, E.; Meyer, D. E.
1992-01-01
Results are reported from a new paradigm that uses movement-related brain potentials to detect response preparation based on partial information. The paradigm uses a hybrid choice-reaction go/nogo procedure in which decisions about response hand and whether to respond are based on separate stimulus attributes. A lateral asymmetry in the movement-related brain potential was found on nogo trials without overt movement. The direction of this asymmetry depended primarily on the signaled response hand rather than on properties of the stimulus. When the asymmetry first appeared was influenced by the time required to select the signaled hand, and when it began to differ on go and nogo trials was influenced by the time to decide whether to respond. These findings indicate that both stimulus attributes were processed in parallel and that the asymmetry reflected preparation of the response hand that began before the go/nogo decision was completed.
Holographic illuminator for synchrotron-based projection lithography systems
Naulleau, Patrick P.
2005-08-09
The effective coherence of a synchrotron beam line can be tailored to projection lithography requirements by employing a moving holographic diffuser and a stationary low-cost spherical mirror. The invention is particularly suited for use in an illuminator device for an optical image processing system requiring partially coherent illumination. The illuminator includes: (1) a synchrotron source of coherent or partially coherent radiation which has an intrinsic coherence that is higher than the desired coherence, (2) a holographic diffuser having a surface that receives incident radiation from said source, (3) means for translating the surface of the holographic diffuser in two dimensions along a plane that is parallel to the surface of the holographic diffuser wherein the rate of the motion is fast relative to integration time of said image processing system; and (4) a condenser optic that re-images the surface of the holographic diffuser to the entrance plane of said image processing system.
The collisional drift mode in a partially ionized plasma. [in the F region
NASA Technical Reports Server (NTRS)
Hudson, M. K.; Kennel, C. F.
1974-01-01
The structure of the drift instability was examined in several density regimes. Let λ_e be the total electron mean free path, k_z the wave-vector component along the magnetic field, and Λ the ratio of perpendicular ion diffusion to parallel electron streaming rates. At low densities (k_z λ_e ≫ 1) the drift mode is isothermal and should be treated kinetically. In the finite heat conduction regime (√(m/M) ≪ k_z λ_e ≪ 1) the drift instability threshold is reduced at low densities and increased at high densities as compared to the isothermal threshold. Finally, in the energy transfer limit (k_z λ_e ≪ √(m/M)) the drift instability behaves adiabatically in a fully ionized plasma and isothermally in a partially ionized plasma, depending on the ratio of ion-neutral to Coulomb collision frequencies.
Modeling of outgassing and matrix decomposition in carbon-phenolic composites
NASA Technical Reports Server (NTRS)
Mcmanus, Hugh L.
1994-01-01
Work done in the period Jan. - June 1994 is summarized. Two threads of research have been followed. First, a thermodynamics approach was used to model the chemical and mechanical responses of composites exposed to high temperatures. The thermodynamics approach lends itself easily to the use of variational principles, and this thermodynamic-variational approach has been applied to the transpiration cooling problem. The second thread is the development of a better algorithm to solve the governing equations resulting from the modeling. An explicit finite-difference method is explored for solving the governing nonlinear partial differential equations. The method allows detailed material models to be included and permits solution on massively parallel supercomputers. To demonstrate the feasibility of the explicit scheme in solving nonlinear partial differential equations, a transpiration cooling problem was solved. Some interesting transient behaviors were captured, such as stress waves and small spatial oscillations of the transient pressure distribution.
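The explicit scheme can be illustrated on a generic nonlinear diffusion equation of the kind such thermal models produce; the specific coefficient law, grid and boundary values below are invented for the sketch, not taken from the report:

```python
import numpy as np

def step(u, dx, dt, D):
    """One explicit finite-difference step for the nonlinear diffusion
    equation u_t = d/dx( D(u) du/dx ), with fixed (Dirichlet) ends."""
    Dh = D(0.5 * (u[1:] + u[:-1]))          # D evaluated at cell interfaces
    flux = Dh * (u[1:] - u[:-1]) / dx       # interface fluxes
    un = u.copy()
    un[1:-1] += dt / dx * (flux[1:] - flux[:-1])
    return un

# Toy "hot end" problem on [0, 1] with a solution-dependent diffusivity.
n, dx = 51, 1.0 / 50
u = np.zeros(n)
u[0] = 1.0                                  # hot left boundary, cold right
D = lambda v: 0.1 * (1.0 + v)               # D grows with u (Dmax = 0.2)
dt = 0.4 * dx**2 / 0.2                      # respects dt <= dx**2 / (2 Dmax)
for _ in range(400):
    u = step(u, dx, dt, D)
```

Being explicit, the update for each node uses only its neighbours from the previous time level, which is exactly the locality that maps well onto massively parallel machines.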
Metrics for Labeled Markov Systems
NASA Technical Reports Server (NTRS)
Desharnais, Josee; Jagadeesan, Radha; Gupta, Vineet; Panangaden, Prakash
1999-01-01
Partial labeled Markov chains are simultaneously generalizations of process algebra and of traditional Markov chains. They provide a foundation for interacting discrete probabilistic systems, the interaction being synchronization on labels as in process algebra. Existing notions of process equivalence are too sensitive to the exact probabilities of various transitions. This paper addresses contextual reasoning principles for more robust notions of "approximate" equivalence between concurrent interacting probabilistic systems. The main results are as follows. We develop a family of metrics between partial labeled Markov chains to formalize the notion of distance between processes. We show that processes at distance zero are bisimilar. We describe a decision procedure to compute the distance between two processes. We show that reasoning about approximate equivalence can be done compositionally by showing that process combinators do not increase distance. Finally, we introduce an asymptotic metric to capture asymptotic properties of Markov chains, and show that parallel composition does not increase asymptotic distance.
NASA Astrophysics Data System (ADS)
Lyan, Oleg; Jankunas, Valdas; Guseinoviene, Eleonora; Pašilis, Aleksas; Senulis, Audrius; Knolis, Audrius; Kurt, Erol
2018-02-01
In this study, a permanent magnet synchronous generator (PMSG) topology with compensated-reactance windings in a parallel rod configuration is proposed to reduce the armature reactance X_L and to achieve higher PMSG efficiency. The PMSG was designed using an iron-cored bifilar coil topology to overcome problems of market-dominant rotary-type generators, chiefly a comparatively high armature reactance X_L, which is usually bigger than the armature resistance R_a. The proposed topology therefore partially compensates, or reduces to a negligible level, the PMSG reactance. The study was performed using finite element method (FEM) analysis and experimental investigation. FEM analysis was used to investigate magnetic field flux distribution and density in the PMSG. Experimental analyses of no-load losses and electromotive force versus frequency (i.e., speed) were performed. Terminal voltage, power output and efficiency as functions of load current at different frequencies were also evaluated. The reactance of the PMSG has a low value and a linear relation with operating frequency. The low reactance gives a small variation of efficiency (from 90% to 95%) over a wide range of load (from 3 A to 10 A) and operating frequency (from 44 Hz to 114 Hz). A comparison of PMSG characteristics with parallel and series winding connections showed insignificant power variation. The research results showed that compensated-reactance windings in a parallel rod configuration in the PMSG design provide lower reactance and therefore higher efficiency under wider load and frequency variation.
The neural basis of parallel saccade programming: an fMRI study.
Hu, Yanbo; Walker, Robin
2011-11-01
The neural basis of parallel saccade programming was examined in an event-related fMRI study using a variation of the double-step saccade paradigm. Two double-step conditions were used: one enabled the second saccade to be partially programmed in parallel with the first saccade, while in the second condition both saccades had to be prepared serially. The intersaccadic interval observed in the parallel programming (PP) condition was significantly reduced compared with the latency in the serial programming (SP) condition and with the latency of single saccades in control conditions. The fMRI analysis revealed greater activity (BOLD response) in the frontal and parietal eye fields for the PP condition compared with the SP double-step condition and with the single-saccade control conditions. By contrast, activity in the supplementary eye fields was greater for the double-step conditions than the single-step condition but did not distinguish between the PP and SP requirements. The role of the frontal eye fields in PP may be related to the advanced temporal preparation and increased salience of the second saccade goal, which may mediate activity in other downstream structures, such as the superior colliculus. The parietal lobes may be involved in the preparation for spatial remapping, which is required in double-step conditions. The supplementary eye fields appear to have a more general role in planning saccade sequences that may be related to error monitoring and the control over the execution of the correct sequence of responses.
Biodynamic feedback training to assure learning partial load bearing on forearm crutches.
Krause, Daniel; Wünnemann, Martin; Erlmann, Andre; Hölzchen, Timo; Mull, Melanie; Olivier, Norbert; Jöllenbeck, Thomas
2007-07-01
To examine how biodynamic feedback training affects the learning of prescribed partial load bearing (200N). Three pre-post experiments. Biomechanics laboratory in a German university. A volunteer sample of 98 uninjured subjects who had not used crutches recently. There were 24 subjects in experiment 1 (mean age, 23.2y); 64 in experiment 2 (mean age, 43.6y); and 10 in experiment 3 (mean age, 40.3y), parallelized by arm force. Video instruction and feedback training: In experiment 1, 2 varied instruction videos and reduced feedback frequency; in experiment 2, varied frequencies of changing tasks (contextual interference); and in experiment 3, feedback training (walking) and transfer (stair tasks). Vertical ground reaction force. Absolute error of practiced tasks was significantly reduced for all samples (P<.050). Varied contextual interference conditions did not significantly affect retention (P=.798) or transfer (P=.897). Positive transfer between tasks was significant in experiment 2 (P<.001) and was contrary to findings in experiment 3 (P=.071). Biodynamic feedback training is applicable for learning prescribed partial load bearing. The frequency of changing tasks is irrelevant. Despite some support for transfer effects, additional practice in climbing and descending stairs might be beneficial.
Scalable Replay with Partial-Order Dependencies for Message-Logging Fault Tolerance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lifflander, Jonathan; Meneses, Esteban; Menon, Harshita
2014-09-22
Deterministic replay of a parallel application is commonly used for discovering bugs or to recover from a hard fault with message-logging fault tolerance. For message passing programs, a major source of overhead during forward execution is recording the order in which messages are sent and received. During replay, this ordering must be used to deterministically reproduce the execution. Previous work in replay algorithms often makes minimal assumptions about the programming model and application in order to maintain generality. However, in many cases, only a partial order must be recorded due to determinism intrinsic in the code, ordering constraints imposed by the execution model, and events that are commutative (their relative execution order during replay does not need to be reproduced exactly). In this paper, we present a novel algebraic framework for reasoning about the minimum dependencies required to represent the partial order for different concurrent orderings and interleavings. By exploiting this theory, we improve on an existing scalable message-logging fault tolerance scheme. The improved scheme scales to 131,072 cores on an IBM BlueGene/P with up to 2x lower overhead than one that records a total order.
A mixed finite difference/Galerkin method for three-dimensional Rayleigh-Benard convection
NASA Technical Reports Server (NTRS)
Buell, Jeffrey C.
1988-01-01
A fast and accurate numerical method for nonlinear conservation equation systems whose solutions are periodic in two of the three spatial dimensions is implemented for the case of Rayleigh-Benard convection between two rigid parallel plates, in the parameter region where steady, three-dimensional convection is known to be stable. High-order streamfunctions enable the reduction of the system of five partial differential equations to a system of only three. Numerical experiments are presented which verify both the expected convergence rates and the absolute accuracy of the method.
Compact laser amplifier system
Carr, R.B.
1974-02-26
A compact laser amplifier system is described in which a plurality of face-pumped annular disks, aligned along a common axis, independently radially amplify a stimulating light pulse. Partially reflective or lasing means, coaxially positioned at the center of each annular disk, radially deflect stimulating light directed down the common axis uniformly into each disk for amplification, such that the light is amplified by the disks in a parallel manner. Circumferential reflecting means coaxially disposed around each disk direct the amplified light emission either toward a common point or in a common direction. (Official Gazette)
Negative Compressibility and Inverse Problem for Spinning Gas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasily Geyko and Nathaniel J. Fisch
2013-01-11
A spinning ideal gas in a cylinder with a smooth surface is shown to have unusual properties. First, under compression parallel to the axis of rotation, the spinning gas exhibits negative compressibility because energy can be stored in the rotation. Second, the spinning breaks the symmetry under which partial pressures of a mixture of gases simply add proportional to the constituent number densities. Thus, remarkably, in a mixture of spinning gases, an inverse problem can be formulated such that the gas constituents can be determined through external measurements only.
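The symmetry breaking can be illustrated with the isothermal equilibrium density of a rigidly spinning gas, n(r) ∝ exp(mω²r²/2kT). This is a standard rigid-rotation result used here only as an illustrative sketch; the dimensionless parameter chi and the helper name are ours, not the paper's notation:

```python
import math

def wall_weight(chi):
    """Relative wall number density n(R)/n_uniform of a spinning ideal
    gas in a cylinder, where chi = m * omega**2 * R**2 / (2 * k * T).
    Follows from n(r) ~ exp(chi * (r/R)**2) normalised over the disk,
    giving n(R)/n_uniform = chi * e^chi / (e^chi - 1)."""
    if chi == 0.0:
        return 1.0                       # no spin: uniform density
    return chi * math.exp(chi) / (math.exp(chi) - 1.0)

# Equal particle numbers of a light (chi = 0.5) and a heavy (chi = 2.0)
# species: without spin both weights are 1, so wall partial pressures
# add in proportion to particle numbers. With spin, the heavy species
# is over-represented at the wall, breaking that proportionality --
# which is what makes the inverse problem of the abstract possible.
w_light, w_heavy = wall_weight(0.5), wall_weight(2.0)
```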
Reliability analysis of redundant systems. [a method to compute transition probabilities
NASA Technical Reports Server (NTRS)
Yeh, H. Y.
1974-01-01
A method is proposed to compute the transition probability (the probability of partial or total failure) of a parallel redundant system. The effects of the geometry of the system, the direction of the load, and the degree of redundancy on the probability of complete survival of a parachute-like system are also studied. The results show that the probability of complete survival of a three-member parachute-like system is very sensitive to variation in the horizontal angle of the load. However, this sensitivity becomes insignificant as the degree of redundancy increases.
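A minimal Monte Carlo version of such a survival computation is sketched below. The equal-load-sharing rule, the uniform strength distribution and all numbers are illustrative assumptions, not the paper's model (which accounts for load direction and member geometry):

```python
import random

def survival_probability(n_members=3, load=2.0, mean_strength=1.0,
                         spread=0.5, trials=20000, seed=1):
    """Monte Carlo estimate of the probability of *complete* survival of
    a parallel redundant system with equal load sharing: each member
    carries load/n and survives if its random strength (uniform around
    mean_strength) exceeds its share. Illustrative stand-in for the
    paper's transition-probability method."""
    rng = random.Random(seed)
    share = load / n_members
    ok = 0
    for _ in range(trials):
        strengths = [rng.uniform(mean_strength - spread,
                                 mean_strength + spread)
                     for _ in range(n_members)]
        ok += all(s > share for s in strengths)
    return ok / trials

# Adding a redundant member lowers each member's share of the load,
# so complete survival becomes more likely.
p3 = survival_probability(n_members=3)
p4 = survival_probability(n_members=4)
```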
l-Glutamine as a Substrate for l-Asparaginase from Serratia marcescens
Novak, Edward K.; Phillips, Arthur W.
1974-01-01
l-Asparaginase from Serratia marcescens was found to hydrolyze l-glutamine at 5% of the rate of l-asparagine hydrolysis. The ratio of the two activities did not change through several stages of purification, anionic and cationic polyacrylamide disk gel electrophoresis, and partial thermal inactivation. The two activities had parallel blood clearance rates in mice. l-Glutamine was found to be a competitive inhibitor of l-asparagine hydrolysis. A separate l-glutaminase enzyme free of l-asparaginase activity was separated by diethylaminoethyl-cellulose chromatography. PMID:4590479
Nondestructive evaluation of plasma-sprayed thermal barrier coatings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, D.J.; Taylor, J.A.T.
Acoustic emission has been used as a nondestructive evaluation technique to examine the thermal shock response of thermal barrier coatings. In this study, samples of partially stabilized zirconia powder were sprayed and acoustic emission (AE) data were taken in a series of thermal shock tests in an effort to correlate AE with a given failure mechanism. Microstructural evidence was examined using parallel beam x-ray diffraction and optical microscopy. The AE data are discussed in terms of cumulative amplitude distributions and the use of this technique to characterize fracture events.
Stability of an oscillating boundary layer
NASA Technical Reports Server (NTRS)
Levchenko, V. Y.; Solovyev, A. S.
1985-01-01
Levchenko and Solov'ev (1972, 1974) have developed a stability theory for space periodic flows, assuming that the Floquet theory is applicable to partial differential equations. In the present paper, this approach is extended to unsteady periodic flows. A complete unsteady formulation of the stability problem is obtained, and the stability characteristics over an oscillating period are determined from the solution of the problem. Calculations carried out for an oscillating incompressible boundary layer on a plate showed that the boundary layer flow may be regarded as a locally parallel flow.
Multimodal technique to eliminate humidity interference for specific detection of ethanol.
Jalal, Ahmed Hasnain; Umasankar, Yogeswaran; Gonzalez, Pablo J; Alfonso, Alejandro; Bhansali, Shekhar
2017-01-15
A multimodal electrochemical technique incorporating both open circuit potential (OCP) and amperometric measurements has been conceptualized and implemented to improve the detection of a specific analyte in systems where more than one analyte is present. This approach has been demonstrated through the detection of ethanol while eliminating the contribution of water in a micro fuel cell sensor system. The sensor was interfaced with an LMP91000 potentiostat, controlled through an MSP430F5529LP microcontroller, to implement an auto-calibration algorithm tailored to improve the detection of alcohol. The sensor was designed and fabricated as a three-electrode system with Nafion as a proton exchange membrane (PEM). The electrochemical signal of the interfering phase (water) was eliminated by implementing the multimodal electrochemical detection technique. The results were validated by comparing sensor and potentiostat performance with a commercial sensor and potentiostat, respectively. The results suggest that such a sensing system can detect ethanol at concentrations as low as 5 ppm. The structure and properties, such as low detection limit, selectivity and miniaturized size, enable potential application of this device in wearable transdermal alcohol measurements. Copyright © 2016 Elsevier B.V. All rights reserved.
High resolution time interval counter
Condreva, Kenneth J.
1994-01-01
A high resolution counter circuit measures the time interval between the occurrence of an initial and a subsequent electrical pulse to two nanoseconds resolution using an eight megahertz clock. The circuit includes a main counter for receiving electrical pulses and generating a binary word--a measure of the number of eight megahertz clock pulses occurring between the signals. A pair of first and second pulse stretchers receive the signal and generate a pair of output signals whose widths are approximately sixty-four times the time between the receipt of the signals by the respective pulse stretchers and the receipt by the respective pulse stretchers of a second subsequent clock pulse. Output signals are thereafter supplied to a pair of start and stop counters operable to generate a pair of binary output words representative of the measure of the width of the pulses to a resolution of two nanoseconds. Errors associated with the pulse stretchers are corrected by providing calibration data to both stretcher circuits, and recording start and stop counter values. Stretched initial and subsequent signals are combined with autocalibration data and supplied to an arithmetic logic unit to determine the time interval in nanoseconds between the pair of electrical pulses being measured.
High resolution time interval counter
Condreva, K.J.
1994-07-26
A high resolution counter circuit measures the time interval between the occurrence of an initial and a subsequent electrical pulse to two nanoseconds resolution using an eight megahertz clock. The circuit includes a main counter for receiving electrical pulses and generating a binary word--a measure of the number of eight megahertz clock pulses occurring between the signals. A pair of first and second pulse stretchers receive the signal and generate a pair of output signals whose widths are approximately sixty-four times the time between the receipt of the signals by the respective pulse stretchers and the receipt by the respective pulse stretchers of a second subsequent clock pulse. Output signals are thereafter supplied to a pair of start and stop counters operable to generate a pair of binary output words representative of the measure of the width of the pulses to a resolution of two nanoseconds. Errors associated with the pulse stretchers are corrected by providing calibration data to both stretcher circuits, and recording start and stop counter values. Stretched initial and subsequent signals are combined with autocalibration data and supplied to an arithmetic logic unit to determine the time interval in nanoseconds between the pair of electrical pulses being measured. 3 figs.
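The interpolation arithmetic behind the 2 ns figure can be sketched as follows. The combination formula and sign convention are our reading of the scheme (Nutt-style interpolation with a coarse counter plus stretched fractional measurements), not a verbatim transcription of the patent:

```python
T_CLK_NS = 1e9 / 8e6     # 125 ns clock period at 8 MHz
STRETCH = 64             # pulse-stretcher expansion factor

def interval_ns(n_main, n_start, n_stop):
    """Sketch of the counter arithmetic: the main counter counts whole
    125 ns clock periods between the two events, while the start and
    stop counters measure the 64x-stretched fractional periods at each
    end in clock ticks. Dividing a stretched tick count by 64 recovers
    the fraction of a period, giving 125/64 ~ 1.95 ns (~2 ns) resolution.
    """
    return (n_main * T_CLK_NS
            + (n_start - n_stop) * T_CLK_NS / STRETCH)

# Example: 10 whole clock periods plus a half-period start fraction
# (32 stretched ticks) yields 1250 + 62.5 = 1312.5 ns.
```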
Autocalibration of a projector-camera system.
Okatani, Takayuki; Deguchi, Koichiro
2005-12-01
This paper presents a method for calibrating a projector-camera system that consists of multiple projectors (or multiple poses of a single projector), a camera, and a planar screen. We consider the problem of estimating the homography between the screen and the image plane of the camera or the screen-camera homography, in the case where there is no prior knowledge regarding the screen surface that enables the direct computation of the homography. It is assumed that the pose of each projector is unknown while its internal geometry is known. Subsequently, it is shown that the screen-camera homography can be determined from only the images projected by the projectors and then obtained by the camera, up to a transformation with four degrees of freedom. This transformation corresponds to arbitrariness in choosing a two-dimensional coordinate system on the screen surface and when this coordinate system is chosen in some manner, the screen-camera homography as well as the unknown poses of the projectors can be uniquely determined. A noniterative algorithm is presented, which computes the homography from three or more images. Several experimental results on synthetic as well as real images are shown to demonstrate the effectiveness of the method.
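The homography machinery involved can be illustrated with a generic direct linear transform (DLT) estimate from point correspondences. This is textbook plane-projective estimation, not the paper's specific noniterative algorithm (which additionally exploits the known projector internals), and the function names are ours:

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (in homogeneous
    coordinates) from >= 4 point correspondences via the direct linear
    transform: stack two linear constraints per correspondence and take
    the null vector of the resulting system from the SVD."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]        # fix the arbitrary scale (and sign)

def apply_h(H, pt):
    """Map a 2D point through H, with the perspective division."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

With exact correspondences the null space is one-dimensional and the true homography is recovered up to scale, which the final normalisation removes.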
The cognitive architecture for chaining of two mental operations.
Sackur, Jérôme; Dehaene, Stanislas
2009-05-01
A simple view, which dates back to Turing, proposes that complex cognitive operations are composed of serially arranged elementary operations, each passing intermediate results to the next. However, whether and how such serial processing is achieved with a brain composed of massively parallel processors, remains an open question. Here, we study the cognitive architecture for chained operations with an elementary arithmetic algorithm: we required participants to add (or subtract) two to a digit, and then compare the result with five. In four experiments, we probed the internal implementation of this task with chronometric analysis, the cued-response method, the priming method, and a subliminal forced-choice procedure. We found evidence for an approximately sequential processing, with an important qualification: the second operation in the algorithm appears to start before completion of the first operation. Furthermore, initially the second operation takes as input the stimulus number rather than the output of the first operation. Thus, operations that should be processed serially are in fact executed partially in parallel. Furthermore, although each elementary operation can proceed subliminally, their chaining does not occur in the absence of conscious perception. Overall, the results suggest that chaining is slow, effortful, imperfect (resulting partly in parallel rather than serial execution) and dependent on conscious control.
Nabuurs, Sanne M; Westphal, Adrie H; aan den Toorn, Marije; Lindhoud, Simon; van Mierlo, Carlo P M
2009-06-17
Partially folded protein species transiently exist during folding of most proteins. Often these species are molten globules, which may be on- or off-pathway to native protein. Molten globules have a substantial amount of secondary structure but lack virtually all the tertiary side-chain packing characteristic of natively folded proteins. These ensembles of interconverting conformers are prone to aggregation and potentially play a role in numerous devastating pathologies, and thus attract considerable attention. The molten globule that is observed during folding of apoflavodoxin from Azotobacter vinelandii is off-pathway, as it has to unfold before native protein can be formed. Here we report that this species can be trapped under nativelike conditions by substituting amino acid residue F44 by Y44, allowing spectroscopic characterization of its conformation. Whereas native apoflavodoxin contains a parallel beta-sheet surrounded by alpha-helices (i.e., the flavodoxin-like or alpha-beta parallel topology), it is shown that the molten globule has a totally different topology: it is helical and contains no beta-sheet. The presence of this remarkably nonnative species shows that single polypeptide sequences can code for distinct folds that swap upon changing conditions. Topological switching between unrelated protein structures is likely a general phenomenon in the protein structure universe.
A Comparison of PETSC Library and HPF Implementations of an Archetypal PDE Computation
NASA Technical Reports Server (NTRS)
Hayder, M. Ehtesham; Keyes, David E.; Mehrotra, Piyush
1997-01-01
Two paradigms for distributed-memory parallel computation that free the application programmer from the details of message passing are compared for an archetypal structured scientific computation: a nonlinear, structured-grid partial differential equation boundary value problem, using the same algorithm on the same hardware. Both paradigms, parallel libraries represented by Argonne's PETSc and parallel languages represented by the Portland Group's HPF, are found to be easy to use for this problem class, and both are reasonably effective in exploiting concurrency after a short learning curve. The level of involvement required of the application programmer under either paradigm includes specification of the data partitioning (corresponding to a geometrically simple decomposition of the domain of the PDE). Programming in SPMD style for the PETSc library requires writing the routines that discretize the PDE and its Jacobian, managing subdomain-to-processor mappings (affine global-to-local index mappings), and interfacing to library solver routines. Programming for HPF requires a complete sequential implementation of the same algorithm, introducing concurrency through subdomain blocking (an effort similar to the index mapping), and modest experimentation with rewriting loops to elucidate to the compiler the latent concurrency. Correctness and scalability are cross-validated on up to 32 nodes of an IBM SP2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Xiang; Yang, Chao; State Key Laboratory of Computer Science, Chinese Academy of Sciences, Beijing 100190
2015-03-15
We present a numerical algorithm for simulating the spinodal decomposition described by the three-dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.
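For a concrete feel for the CHC structure, a minimal 1D explicit Euler sketch is shown below. The paper's method is fully implicit, cell-centered, three-dimensional, and Newton–Krylov–Schwarz based; this toy, with assumed parameter values, only illustrates the fourth-order form c_t = M lap(c^3 - c - kappa lap(c)) plus an additive noise term:

```python
import numpy as np

def laplacian(c, dx):
    """Periodic 1D finite-difference Laplacian."""
    return (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2

def chc_step(c, dt, dx, kappa=1.0, M=1.0, noise_amp=0.0, rng=None):
    """One explicit Euler step of the 1D Cahn-Hilliard-Cook equation
    c_t = M * lap(c**3 - c - kappa * lap(c)) + noise.
    Explicit sketch only: stable only for small dt (roughly dt < dx**4 / 8)."""
    mu = c**3 - c - kappa * laplacian(c, dx)   # chemical potential
    c_new = c + dt * M * laplacian(mu, dx)
    if noise_amp and rng is not None:          # the "Cook" noise term
        c_new += noise_amp * np.sqrt(dt) * rng.standard_normal(c.shape)
    return c_new

rng = np.random.default_rng(0)
n, dx, dt = 64, 1.0, 0.01
c = 0.1 * rng.standard_normal(n)   # small perturbation about c = 0
m0 = c.mean()                      # total mass, conserved by the scheme
for _ in range(1000):
    c = chc_step(c, dt, dx)
```

Because the right-hand side is a discrete Laplacian of a flux potential on a periodic grid, the scheme conserves the spatial mean of c to rounding error, mirroring the conservation property of the full equation.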
Mahmood, Zohaib; McDaniel, Patrick; Guérin, Bastien; Keil, Boris; Vester, Markus; Adalsteinsson, Elfar; Wald, Lawrence L; Daniel, Luca
2016-07-01
In a coupled parallel transmit (pTx) array, the power delivered to a channel is partially distributed to other channels because of coupling. This power is dissipated in circulators, resulting in a significant reduction in power efficiency. In this study, a technique for designing robust decoupling matrices interfaced between the RF amplifiers and the coils is proposed. The decoupling matrices ensure that most forward power is delivered to the load without loss of encoding capabilities of the pTx array. The decoupling condition requires that the impedance matrix seen by the power amplifiers is a diagonal matrix whose entries match the characteristic impedance of the power amplifiers. In this work, the impedance matrix of the coupled coils is diagonalized by successive multiplication by its eigenvectors. A general design procedure and software are developed to automatically generate the hardware that implements the diagonalization using passive components. The general design method is demonstrated by decoupling two example parallel transmit arrays. Our decoupling matrices achieve better than -20 dB decoupling in both cases. A robust framework for designing decoupling matrices for pTx arrays is presented and validated. The proposed decoupling strategy theoretically scales to any arbitrary number of channels. Magn Reson Med 76:329-339, 2016. © 2015 Wiley Periodicals, Inc.
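The core idea, diagonalizing a coupled impedance matrix with its eigenvectors, can be sketched numerically. This is a simplified illustration with an invented 2x2 impedance matrix, not the paper's passive-network synthesis:

```python
import numpy as np

# Invented 2-channel coupled impedance matrix (ohms). A reciprocal coil
# array has a symmetric Z; the off-diagonal terms model mutual coupling.
Z = np.array([[50.0, 12.0],
              [12.0, 50.0]])

# For a symmetric Z, the orthogonal eigenvector matrix V diagonalizes it:
# V.T @ Z @ V is diagonal, i.e. the transformed ports no longer couple.
w, V = np.linalg.eigh(Z)
Z_decoupled = V.T @ Z @ V

# Quantify residual coupling as the worst off-diagonal/diagonal ratio in dB.
off_diag = Z_decoupled - np.diag(np.diag(Z_decoupled))
ratio = np.abs(off_diag).max() / np.abs(np.diag(Z_decoupled)).min()
coupling_db = 20 * np.log10(max(ratio, 1e-16))  # floor avoids log10(0)
```

In a hardware realization the diagonal entries would then be matched to the amplifiers' characteristic impedance; in this numerical sketch the eigendecomposition alone already drives the off-diagonal terms to rounding-error level, far below the -20 dB reported for the physical arrays.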
FBIS report. Science and technology: Europe/International, March 29, 1996
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-03-29
Partial Contents: Advanced Materials (EU Project to Improve Production in Metal Matrix Compounds Noted, Germany: Extremely Hard Carbon Coating Development, Italy: Director of CNR Metallic Materials Institute Interviewed); Aerospace (ESA Considers Delays, Reductions as Result of Budget Cuts, Italy: Space Agency's Director on Restructuring, Future Plans); Automotive, Transportation (EU: Clean Diesel Engine Technology Research Reviewed); Biotechnology (Germany's Problems, Successes in Biotechnology Discussed); Computers (EU Europort Parallel Computing Project Concluded, Italy: PQE 2000 Project on Massively Parallel Systems Viewed); Defense R&D (France: Future Tasks of 'Brevel' Military Intelligence Drone Noted); Energy, Environment (German Scientist Tests Elimination of Phosphates); Advanced Manufacturing (France: Advanced Rapid Prototyping System Presented); Lasers, Sensors, Optics (France: Strategy of Cilas Laser Company Detailed); Microelectronics (France: Simulation Company to Develop Microelectronic Manufacturing Application); Nuclear R&D (France: Megajoule Laser Plan, Cooperation with Livermore Lab Noted); S&T Policy (EU Efforts to Aid Small Companies' Research Viewed); Telecommunications (France Telecom's Way to Internet).
NASA Astrophysics Data System (ADS)
Tropp, James; Lupo, Janine M.; Chen, Albert; Calderon, Paul; McCune, Don; Grafendorfer, Thomas; Ozturk-Isik, Esin; Larson, Peder E. Z.; Hu, Simon; Yen, Yi-Fen; Robb, Fraser; Bok, Robert; Schulte, Rolf; Xu, Duan; Hurd, Ralph; Vigneron, Daniel; Nelson, Sarah
2011-01-01
We report metabolic images of 13C, following injection of a bolus of hyperpolarized [1-13C] pyruvate in a live rat. The data were acquired on a clinical scanner, using custom coils for volume transmission and array reception. Proton blocking of all carbon resonators enabled proton anatomic imaging with the system body coil, allowing registration of anatomic and metabolic images, for which good correlation was achieved; some anatomic features (kidney and heart) were clearly visible in a carbon image even without reference to the corresponding proton image. Parallel imaging with sensitivity encoding was used to increase the spatial resolution in the SI direction of the rat. The signal-to-noise ratio was in some instances unexpectedly high in the parallel images; variability of the polarization among different trials, plus partial volume effects, are noted as possible causes.
["Dual Guidance"? - parallel combination of ultrasound-guidance and nerve stimulation - Contra].
Maecken, Tim
2015-07-01
Sonography is a highly user-dependent technology. It presupposes a considerable degree of sonoanatomic and sonographic knowledge and requires good practical skills of the examiner. Sonography allows identification of the puncture target, observation of the needle advance, and assessment of the spread pattern of the local anesthetic in real time. Peripheral electrical nerve stimulation (PNS) cannot offer these advantages to the same degree, but may allow nerve localization under difficult sonographic conditions. The combination of the two localization techniques is complex in its practical implementation; in some cases, the use of one technique is made even more difficult by combining it with the second. PNS in parallel to sonography serves primarily as a warning technology in the case of an invisible cannula tip. It should not be construed as a compensation technique for a lack of sonographic skills or knowledge. However, PNS may be helpful in the sense of a bridging technology as long as the user is aware of its limitations. © Georg Thieme Verlag Stuttgart · New York.
Pretest predictions for degraded shutdown heat-removal tests in THORS-SHRS Assembly 1. [LMFBR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, S.D.; Carbajo, J.J.
The recent modification of the Thermal-Hydraulic Out-of-Reactor Safety (THORS) facility at ORNL will allow testing of parallel simulated fuel assemblies under natural-convection and low-flow forced-convection conditions similar to those that might occur during a partial failure of the Shutdown Heat Removal System (SHRS) of an LMFBR. An extensive test program has been prepared and testing will be started in September 1983. THORS-SHRS Assembly 1 consists of two 19-pin bundles in parallel with a third leg serving as a bypass line and containing a sodium-to-sodium intermediate heat exchanger. Testing at low powers will help indicate the maximum amount of heat that can be removed from the reactor core during conditions of degraded shutdown heat removal. The thermal-hydraulic behavior of the test bundles will be characterized for single-phase and two-phase conditions up to dryout. The influence of interassembly flow redistribution, including transients from forced- to natural-convection conditions, will be investigated during testing.
Cukur, Cem Safak; de Guzman, Maria Rosario T; Carlo, Gustavo
2004-12-01
The authors examined the links between two dimensions that have been useful in understanding cross-cultural differences and similarities, namely, individualism-collectivism (I-C) and value orientations. The authors examined the relations and parallels between the two variables by directly relating them and examining the patterns of relations that both have with a third variable, religiosity. Participants were 475 college students from the Philippines, the United States, and Turkey who responded to measures of horizontal and vertical I-C, value orientations, and religiosity. The authors found partial support for the parallels between I-C and value types, particularly for collectivism and conservative values. Moreover, religiosity was associated positively with conservative values and collectivism, across all three cultures. The authors found individualism to also relate to openness-to-change values, though the patterns were not as consistent as those that they found between collectivism and conservation. Differences and similarities emerged in links of I-C-values to religiosity across the three samples.
Embedded cluster metal-polymeric micro interface and process for producing the same
Menezes, Marlon E.; Birnbaum, Howard K.; Robertson, Ian M.
2002-01-29
A micro interface between a polymeric layer and a metal layer includes isolated clusters of metal partially embedded in the polymeric layer. The exposed portion of each cluster is smaller than its embedded portion, so that a cross section, taken parallel to the interface, of the exposed portion of an individual cluster is smaller than a cross section, taken parallel to the interface, of the embedded portion of the individual cluster. At least half, but not all, of the height of a preferred spherical cluster is embedded. The metal layer is completed by a continuous layer of metal bonded to the exposed portions of the discontinuous clusters. The micro interface is formed by heating a polymeric layer to a temperature, near its glass transition temperature, sufficient to allow penetration of the layer by metal clusters, after isolated clusters have been deposited on the layer at lower temperatures. The layer is recooled after embedding, and a continuous metal layer is deposited upon the polymeric layer to bond with the discontinuous metal clusters.
Final Report: Subcontract B623868 Algebraic Multigrid solvers for coupled PDE systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brannick, J.
The Pennsylvania State University (“Subcontractor”) continued to work on the design of algebraic multigrid solvers for coupled systems of partial differential equations (PDEs) arising in numerical modeling of various applications, with a main focus on solving the Dirac equation arising in Quantum Chromodynamics (QCD). The goal of the proposed work was to develop combined geometric and algebraic multilevel solvers that are robust and lend themselves to efficient implementation on massively parallel heterogeneous computers for these QCD systems. The research in these areas built on previous works, focusing on the following three topics: (1) the development of parallel full-multigrid (PFMG) and non-Galerkin coarsening techniques in this framework for solving the Wilson Dirac system; (2) the use of these same Wilson MG solvers for preconditioning the Overlap and Domain Wall formulations of the Dirac equation; and (3) the design and analysis of algebraic coarsening algorithms for coupled PDE systems including the Stokes equation, the Maxwell equations, and linear elasticity.
Davies, M A
2015-10-01
Salicylic acid (SA) is a widely used active in anti-acne face wash products. Only about 1-2% of the total dose is actually deposited on skin during washing, and more efficient deposition systems are sought. The objective of this work was to develop an improved method, including data analysis, to measure deposition of SA from wash-off formulae. Full fluorescence excitation-emission matrices (EEMs) were acquired for non-invasive measurement of deposition of SA from wash-off products. Multivariate data analysis methods - parallel factor analysis and N-way partial least-squares regression - were used to develop and compare deposition models on human volunteers and porcine skin. Although both models are useful, there are differences between them. First, the range of linear response to dosages of SA was 60 μg cm⁻² in vivo compared to 25 μg cm⁻² on porcine skin. Second, the actual shape of the SA band was different between substrates. The methods employed in this work highlight the utility of EEMs, in conjunction with multivariate analysis tools such as parallel factor analysis and multiway partial least-squares calibration, in determining sources of spectral variability in skin and in quantifying exogenous species deposited on skin. The human model exhibited the widest range of linearity, but the porcine model is still useful up to deposition levels of 25 μg cm⁻², or when used with nonlinear calibration models. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
2015-01-01
This article reviews the topic of how to identify and develop a removable partial denture (RPD) path of placement, and provides a literature review of the concept of the RPD path of placement, also known as the path of insertion. An optimal RPD path of placement, guided by mutually parallel guide planes, ensures that the RPD flanges fit intimately over edentulous ridge structures and that the framework fits intimately with guide plane surfaces, which prevents food-collecting empty spaces between the intaglio surface of the framework and intraoral surfaces, and ensures that RPD clasps engage adequate numbers of tooth undercuts for RPD retention. The article covers topics such as the causes of obstructions to RPD intra-oral seating, the causes of food-collecting empty spaces that may exist around an RPD, and how to identify whether a guide plane is parallel with the projected RPD path of placement. The article presents a method of using a surgical operating microscope, or high-magnification (6-8x or greater) binocular surgical loupe telescopes, combined with co-axial illumination, to identify a preliminary path of placement for an arch. This preliminary path of placement concept may help to guide a dentist or a dental laboratory technician when surveying a master cast of the arch to develop an RPD path of placement, or in verifying that intra-oral contouring has aligned teeth surfaces optimally with the RPD path of placement. In dentistry, a well-fitting RPD reduces long-term periodontal or structural damage to abutment teeth. PMID:25722842
Tegeler, Catherine L; Gerdes, Lee; Shaltout, Hossam A; Cook, Jared F; Simpson, Sean L; Lee, Sung W; Tegeler, Charles H
2017-12-22
Military-related post-traumatic stress (PTS) is associated with numerous symptom clusters and diminished autonomic cardiovascular regulation. High-resolution, relational, resonance-based, electroencephalic mirroring (HIRREM®) is a noninvasive, closed-loop, allostatic, acoustic stimulation neurotechnology that produces real-time translation of dominant brain frequencies into audible tones of variable pitch and timing to support the auto-calibration of neural oscillations. We report clinical, autonomic, and functional effects after the use of HIRREM® for symptoms of military-related PTS. Eighteen service members or recent veterans (15 active-duty, 3 veterans, most from special operations, 1 female), with a mean age of 40.9 (SD = 6.9) years and symptoms of PTS lasting from 1 to 25 years, undertook 19.5 (SD = 1.1) sessions over 12 days. Inventories for symptoms of PTS (Posttraumatic Stress Disorder Checklist - Military version, PCL-M), insomnia (Insomnia Severity Index, ISI), depression (Center for Epidemiologic Studies Depression Scale, CES-D), and anxiety (Generalized Anxiety Disorder 7-item scale, GAD-7) were collected before (Visit 1, V1), immediately after (Visit 2, V2), and at 1 month (Visit 3, V3), 3 (Visit 4, V4), and 6 (Visit 5, V5) months after intervention completion. Other measures only taken at V1 and V2 included blood pressure and heart rate recordings to analyze heart rate variability (HRV) and baroreflex sensitivity (BRS), functional performance (reaction and grip strength) testing, blood and saliva for biomarkers of stress and inflammation, and blood for epigenetic testing. Paired t-tests, Wilcoxon signed-rank tests, and a repeated-measures ANOVA were performed. Clinically relevant, significant reductions in all symptom scores were observed at V2, with durability through V5. 
There were significant improvements in multiple measures of HRV and BRS [Standard deviation of the normal beat to normal beat interval (SDNN), root mean square of the successive differences (rMSSD), high frequency (HF), low frequency (LF), and total power, HF alpha, sequence all, and systolic, diastolic and mean arterial pressure] as well as reaction testing. Trends were seen for improved grip strength and a reduction in C-Reactive Protein (CRP), Angiotensin II to Angiotensin 1-7 ratio and Interleukin-10, with no change in DNA n-methylation. There were no dropouts or adverse events reported. Service members or veterans showed reductions in symptomatology of PTS, insomnia, depressive mood, and anxiety that were durable through 6 months after the use of a closed-loop allostatic neurotechnology for the auto-calibration of neural oscillations. This study is the first to report increased HRV or BRS after the use of an intervention for service members or veterans with PTS. Ongoing investigations are strongly warranted. NCT03230890 , retrospectively registered July 25, 2017.
Berseth, Carol Lynn; Mitmesser, Susan Hazels; Ziegler, Ekhard E; Marunycz, John D; Vanderhoof, Jon
2009-01-01
Background: Parents who perceive common infant behaviors as formula intolerance-related often switch formulas without consulting a health professional. Up to one-half of formula-fed infants experience a formula change during the first six months of life. Methods: The objective of this study was to assess discontinuance due to study physician-assessed formula intolerance in healthy, term infants. Infants (335) were randomized to receive either a standard intact cow milk protein formula (INTACT) or a partially hydrolyzed cow milk protein formula (PH) in a 60 day non-inferiority trial. Discontinuance due to study physician-assessed formula intolerance was the primary outcome. Secondary outcomes included number of infants who discontinued for any reason, including parent-assessed. Results: Formula intolerance between groups (INTACT, 12.3% vs. PH, 13.7%) was similar for infants who completed the study or discontinued due to study physician-assessed formula intolerance. Overall study discontinuance based on parent- vs. study physician-assessed intolerance for all infants (14.4 vs. 11.1%) was significantly different (P = 0.001). Conclusion: This study demonstrated no difference in infant tolerance of intact vs. partially hydrolyzed cow milk protein formulas for healthy, term infants over a 60-day feeding trial, suggesting nonstandard partially hydrolyzed formulas are not necessary as a first choice for healthy infants. Parents frequently perceived infant behavior as formula intolerance, paralleling previous reports of unnecessary formula changes. Trial Registration: clinicaltrials.gov: NCT00666120 PMID:19545360
ELsyad, Moustafa Abdou; Omran, Abdelbaset Omar; Fouad, Mohammed Mohammed
2017-01-01
The aim of this study was to evaluate and compare strain around abutment teeth with different attachments used for implant-assisted distal extension partial overdentures (IADEPODs). A mandibular Kennedy class I acrylic model (remaining teeth from first premolar to first premolar) was constructed. A conventional partial denture was constructed over the model (control, group 1). Two laboratory implants were then placed bilaterally in the first molar areas parallel to each other and perpendicular to the residual ridge. Three additional experimental partial overdentures (PODs) were constructed and connected to the implants using ball (group 2), magnetic (group 3), and Locator (group 4) attachments. Three linear strain gauges were bonded buccal, lingual, and distal to the first premolar abutment tooth at the right (loading) and the left (nonloading) sides. For each group, a universal testing device was used to apply a unilateral vertical static load (50 N) on the first molar area, and the strain was recorded using a multichannel digital strainometer. Significant differences between groups and between sites of strain gauges were detected. Strains recorded for all groups were compressive (negative) in nature. Group 1 demonstrated the highest strain, followed by group 3 and group 4; group 2 recorded the lowest strain. For group 2, the highest strain was recorded at the lingual nonloading side. For group 1, group 3, and group 4, the highest strain was recorded at the buccal loading side. Within the limitations of the present study, ball attachments used to retain IADEPODs to the implants were associated with lower strains around abutment teeth than Locator and magnetic attachments. The highest strain was recorded with conventional partial dentures. © 2015 by the American College of Prosthodontists.
NASA Technical Reports Server (NTRS)
Abtahi, Ali A. (Inventor)
1995-01-01
A radiation pyrometer for measuring the true temperature of a body is provided by detecting and measuring thermal radiation from the body, based on the principle that the effects of angular emission I₁ and reflection I₂ on the polarization states p and s of radiation are complementary. Upon detecting the combined partial polarization state components Iₚ = I₁ₚ + I₂ₚ and Iₛ = I₁ₛ + I₂ₛ, and adjusting the intensity of the variable radiation source of the reflected radiation I₂ until the combined partial radiation components Iₚ and Iₛ are equal, the effects of emissivity as well as diffusivity of the surface of the body are eliminated, obviating the need for any post-processing of brightness temperature data.
Growth of Defect-Free 3C-SiC on 4H- and 6H-SiC Mesas Using Step-Free Surface Heteroepitaxy
NASA Technical Reports Server (NTRS)
Neudeck, Philip G.; Powell, J. Anthony; Trunek, Andrew J.; Huang, Xianrong R.; Dudley, Michael
2001-01-01
A new growth process, herein named step-free surface heteroepitaxy, has achieved 3C-SiC films completely free of double positioning boundaries and stacking faults on 4H-SiC and 6H-SiC substrate mesas. The process is based upon the initial 2-dimensional nucleation and lateral expansion of a single island of 3C-SiC on a 4H- or 6H-SiC mesa surface that is completely free of bilayer surface steps. Our experimental results indicate that substrate-epilayer in-plane lattice mismatch (Δa/a = 0.0854% for 3C/4H) is at least partially relieved parallel to the interface in the initial bilayers of the heterofilm, producing an at least partially relaxed 3C-SiC film without dislocations that undesirably thread through the thickness of the epilayer. This result should enable realization of improved 3C-SiC devices.
Moradi, Christopher P.; Douberly, Gary E.
2015-06-22
The Stark effect is considered for polyatomic open shell complexes that exhibit partially quenched electronic angular momentum. Matrix elements of the Stark Hamiltonian represented in a parity-conserving Hund's case (a) basis are derived for the most general case, in which the permanent dipole moment has projections on all three inertial axes of the system. Transition intensities are derived, again for the most general case, in which the laser polarization has projections onto axes parallel and perpendicular to the Stark electric field, and the transition dipole moment vector is projected onto all three inertial axes in the molecular frame. As a result, simulations derived from this model are compared to experimental rovibrational Stark spectra of OH-C₂H₂, OH-C₂H₄, and OH-H₂O complexes formed in helium nanodroplets.
Borschel, Marlene W; Choe, Yong S; Kajzer, Janice A
2014-12-01
Partially hydrolyzed formulas (pHF) represent a significant percentage of the infant formula market. A new whey-based, palm olein oil (PO)-free pHF was developed and a masked, randomized, parallel growth study was conducted in infants fed this formula or a commercially available whey-based pHF with PO. Infants between 0 and 8 days were to be enrolled and studied to 119 days of age. Growth and tolerance of infants were evaluated. Mean weight gain from 14 to 119 days of age was similar between groups. There were no significant differences between groups in weight, length, head circumference (HC), or length or HC gains. Infants fed the new PO-free pHF had significantly softer stools than those fed the PO-containing formula except at 119 days of age. This study demonstrates that whereas growth of infants fed different formulas during the first 4 months of life may be similar, infants may tolerate individual formulas differently. © The Author(s) 2014.
A single molecule perspective on the functional diversity of in vitro evolved β-glucuronidase.
Liebherr, Raphaela B; Renner, Max; Gorris, Hans H
2014-04-23
The mechanisms that drive the evolution of new enzyme activity have been investigated by comparing the kinetics of wild-type and in vitro evolved β-glucuronidase (GUS) at the single molecule level. Several hundred single GUS molecules were separated in large arrays of 62,500 ultrasmall reaction chambers etched into the surface of a fused silica slide to observe their individual substrate turnover rates in parallel by fluorescence microscopy. Individual GUS molecules feature long-lived but divergent activity states, and their mean activity is consistent with classic Michaelis-Menten kinetics. The large number of single molecule substrate turnover rates is representative of the activity distribution within an entire enzyme population. Partially evolved GUS displays a much broader activity distribution among individual enzyme molecules than wild-type GUS. The broader activity distribution indicates a functional division of work between individual molecules in a population of partially evolved enzymes that-as so-called generalists-are characterized by their promiscuous activity with many different substrates.
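The link between divergent single-molecule activity states and classic Michaelis-Menten kinetics can be sketched with a simple simulation. The distributions and parameter values below are illustrative assumptions, not data from the study:

```python
import numpy as np

def mm_rate(kcat, S, Km):
    """Michaelis-Menten turnover rate v = kcat * S / (Km + S)."""
    return kcat * S / (Km + S)

rng = np.random.default_rng(1)
S, Km = 50.0, 20.0   # assumed substrate concentration and Km (same units)

# "Wild-type": narrow per-molecule kcat distribution across the population.
# "Evolved": same mean activity but a much broader spread, mimicking the
# wider activity distribution reported for partially evolved GUS.
kcat_wt = rng.normal(10.0, 1.0, 10_000).clip(min=0.0)
kcat_ev = rng.normal(10.0, 4.0, 10_000).clip(min=0.0)

v_wt = mm_rate(kcat_wt, S, Km)   # per-molecule turnover rates
v_ev = mm_rate(kcat_ev, S, Km)
```

The two population means are nearly identical, so bulk measurements would look the same, while the per-molecule rates of the "evolved" ensemble spread far more widely; resolving that spread is exactly what the chamber-array single-molecule experiment enables.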
A.I.-based real-time support for high performance aircraft operations
NASA Technical Reports Server (NTRS)
Vidal, J. J.
1985-01-01
Artificial intelligence (AI) based software and hardware concepts are applied to the handling of system malfunctions during flight tests. A representation of malfunction procedure logic using Boolean normal forms is presented. The representation facilitates the automation of malfunction procedures and provides easy testing of the embedded rules. It also forms a potential basis for a parallel implementation in logic hardware. The extraction of logic control rules from dynamic simulation, and their adaptive revision after partial failure, are examined using a simplified 2-dimensional aircraft model with a controller that adaptively extracts control rules for directional thrust to satisfy a navigational goal without exceeding pre-established position and velocity limits. Failure recovery (rule adjusting) is examined after partial actuator failure. While this experiment was performed with primitive aircraft and mission models, it illustrates an important paradigm and provided complexity extrapolations for the proposed extraction of expertise from simulation. The use of relaxation and inexact reasoning in expert systems was also investigated.
Period of vibration of axially vibrating truly nonlinear rod
NASA Astrophysics Data System (ADS)
Cveticanin, L.
2016-07-01
In this paper the axial vibration of a muscle whose fibers are parallel to the direction of muscle compression is investigated. The model is a clamped-free rod with a strongly nonlinear elastic property. Axial vibration is described by a nonlinear partial differential equation. A solution of the equation is constructed for special initial conditions by using the method of separation of variables. The partial differential equation is separated into two uncoupled, strongly nonlinear second-order differential equations. Both equations, for the displacement function and for the time function, are exactly determined. Exact solutions are given in the form of the inverse incomplete and inverse complete Beta functions. Using boundary and initial conditions, the frequency of vibration is obtained. The determined frequency is an exact analytic result for the axially vibrating, truly nonlinear, clamped-free rod. The procedure suggested in this paper is applied to calculate the frequency of the longissimus dorsi muscle of a cow. The influence of elasticity order and elasticity coefficient on the frequency is tested.
Application of a Modular Particle-Continuum Method to Partially Rarefied, Hypersonic Flow
NASA Astrophysics Data System (ADS)
Deschenes, Timothy R.; Boyd, Iain D.
2011-05-01
The Modular Particle-Continuum (MPC) method is used to simulate partially rarefied, hypersonic flow over a sting-mounted planetary probe configuration. This hybrid method uses computational fluid dynamics (CFD) to solve the Navier-Stokes equations in continuum regions, while using direct simulation Monte Carlo (DSMC) in portions of the flow that are rarefied. The MPC method uses state-based coupling to pass information between the two flow solvers and decouples the time steps and mesh densities required by each solver. It is parallelized for distributed-memory systems using dynamic domain decomposition, and internal energy modes can be consistently modeled to be out of equilibrium with the translational mode in both solvers. The MPC results are compared to full DSMC and CFD predictions and to available experimental measurements. By using DSMC only in regions where the flow is in nonequilibrium, the MPC method is able to reproduce full DSMC results down to the level of velocity and rotational energy probability density functions while requiring a fraction of the computational time.
Tunable thermal rectification in graphene/hexagonal boron nitride hybrid structures
NASA Astrophysics Data System (ADS)
Chen, Xue-Kun; Hu, Ji-Wen; Wu, Xi-Jun; Jia, Peng; Peng, Zhi-Hua; Chen, Ke-Qiu
2018-02-01
Using non-equilibrium molecular dynamics simulations, we investigate thermal rectification (TR) in graphene/hexagonal boron nitride (h-BN) hybrid structures. Two different structural models, partially substituting graphene into h-BN (CBN) and partially substituting h-BN into graphene (BNC), are considered. It is found that CBN has a significant TR effect while that of BNC is very weak. The observed TR phenomenon can be attributed to the resonance effect between out-of-plane phonons of graphene and h-BN domains in the low-frequency region under negative temperature bias. In addition, the influences of ambient temperature, system size, defect number and substrate interaction are also studied to obtain the optimum conditions for TR. More importantly, the TR ratio could be effectively tuned through chemical and structural diversity. A moderate C/BN ratio and parallel arrangement are found to enhance the TR ratio. Detailed phonon spectra analyses are conducted to understand the thermal transport behavior. This work extends hybrid engineering to 2D materials for achieving TR.
Partial least squares based identification of Duchenne muscular dystrophy specific genes.
An, Hui-bo; Zheng, Hua-cheng; Zhang, Li; Ma, Lin; Liu, Zheng-yan
2013-11-01
Large-scale parallel gene expression analysis has made it considerably easier to investigate the underlying mechanisms of Duchenne muscular dystrophy (DMD). Previous studies typically implemented variance/regression analysis, which is fundamentally flawed when unaccounted-for sources of variability exist in the arrays. Here we aim to identify genes that contribute to the pathology of DMD using partial least squares (PLS) based analysis. We carried out PLS-based analysis with two datasets downloaded from the Gene Expression Omnibus (GEO) database to identify genes contributing to the pathology of DMD. In addition to the genes related to inflammation, muscle regeneration and extracellular matrix (ECM) remodeling, we found some genes with high fold change that have not been identified by previous studies, such as SRPX, GPNMB, SAT1, and LYZ. In addition, downregulation of the fatty acid metabolism pathway was found, which may be related to the progressive muscle wasting process. Our results provide a better understanding of the downstream mechanisms of DMD.
Piccirilli, Gisela N; Escandar, Graciela M
2006-09-01
This paper demonstrates for the first time the power of a chemometric second-order algorithm for predicting, in a simple way and using spectrofluorimetric data, the concentration of analytes in the presence of both the inner-filter effect and unsuspected species. The simultaneous determination of the systemic fungicides carbendazim and thiabendazole was achieved and employed for a discussion of the scope of the applied second-order chemometric tools: parallel factor analysis (PARAFAC) and partial least-squares with residual bilinearization (PLS/RBL). The chemometric study was performed using fluorescence excitation-emission matrices obtained after extraction of the analytes onto a C18-membrane surface. The ability of PLS/RBL to recognize and overcome the significant changes produced by thiabendazole in both the excitation and emission spectra of carbendazim is demonstrated. The high performance of the selected PLS/RBL method was established by the determination of both pesticides in artificial and real samples.
Mechanism of IAPP amyloid fibril formation involves an intermediate with a transient β-sheet
Buchanan, Lauren E.; Dunkelberger, Emily B.; Tran, Huong Q.; Cheng, Pin-Nan; Chiu, Chi-Cheng; Cao, Ping; Raleigh, Daniel P.; de Pablo, Juan J.; Nowick, James S.; Zanni, Martin T.
2013-01-01
Amyloid formation is implicated in more than 20 human diseases, yet the mechanism by which fibrils form is not well understood. We use 2D infrared spectroscopy and isotope labeling to monitor the kinetics of fibril formation by human islet amyloid polypeptide (hIAPP or amylin) that is associated with type 2 diabetes. We find that an oligomeric intermediate forms during the lag phase with parallel β-sheet structure in a region that is ultimately a partially disordered loop in the fibril. We confirm the presence of this intermediate, using a set of homologous macrocyclic peptides designed to recognize β-sheets. Mutations and molecular dynamics simulations indicate that the intermediate is on pathway. Disrupting the oligomeric β-sheet to form the partially disordered loop of the fibrils creates a free energy barrier that is the origin of the lag phase during aggregation. These results help rationalize a wide range of previous fragment and mutation studies including mutations in other species that prevent the formation of amyloid plaques. PMID:24218609
Parallelization Issues and Particle-In-Cell Codes.
NASA Astrophysics Data System (ADS)
Elster, Anne Cathrine
1994-01-01
"Everything should be made as simple as possible, but not simpler." Albert Einstein. The field of parallel scientific computing has concentrated on parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory and cache utilization. Our test-bed, a KSR1, is a distributed memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses with accompanying KSR benchmarks have been included for both this scheme and for the traditional replicated grids approach. The latter approach maintains load-balance with respect to particles. However, our results demonstrate that it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies becomes significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load balancing schemes for non-uniform particle distributions. Our dual pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid points within the same cache line by reordering the grid indexing. This alignment produces a 25% savings in cache hits for a 4-by-4 cache.
A consideration of the input data's effect on the simulation may lead to further improvements. For example, in the case of mean particle drift, it is often advantageous to partition the grid primarily along the direction of the drift. The particle-in-cell codes for this study were tested using physical parameters, which lead to predictable phenomena including plasma oscillations and two-stream instabilities. An overview of the most central references related to parallel particle codes is also given.
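The idea of keeping particles grouped by grid cell can be illustrated with a simple counting-sort rebinning pass. This is a generic stand-in for the dual-pointer partial-sorting scheme described above, not a reproduction of the KSR1 implementation.

```python
# Illustrative counting-sort rebinning of particles by grid cell in 1D.
# A stand-in for the dual-pointer partial-sorting idea, not the thesis code.
import numpy as np

def rebin(positions, n_cells, length=1.0):
    """Return particle indices ordered by cell, plus per-cell offsets."""
    cells = np.minimum((positions / length * n_cells).astype(int), n_cells - 1)
    order = np.argsort(cells, kind="stable")       # particles grouped by cell
    counts = np.bincount(cells, minlength=n_cells)
    offsets = np.concatenate(([0], np.cumsum(counts)))
    return order, offsets

pos = np.array([0.95, 0.05, 0.55, 0.10])
order, offsets = rebin(pos, n_cells=2)
# cell 0 holds particles 1, 3; cell 1 holds particles 0, 2
print(order, offsets)
```

With particles stored contiguously per cell, the grid data each processor touches stays local, which is exactly the property that makes grid partitioning (rather than grid replication) scale to large grids.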
NASA Astrophysics Data System (ADS)
Gassmöller, Rene; Bangerth, Wolfgang
2016-04-01
Particle-in-cell methods have a long history and many applications in geodynamic modelling of mantle convection, lithospheric deformation and crustal dynamics. They are primarily used to track material information, such as the strain a material has undergone, the pressure-temperature history a certain material region has experienced, or the amount of volatiles or partial melt present in a region. However, their efficient parallel implementation - in particular combined with adaptive finite-element meshes - is complicated due to the complex communication patterns and frequent reassignment of particles to cells. Consequently, many current scientific software packages accomplish this efficient implementation by specifically designing particle methods for a single purpose, like the advection of scalar material properties that do not evolve over time (e.g., for chemical heterogeneities). Design choices for particle integration, data storage, and parallel communication are then optimized for this single purpose, making the code relatively rigid to changing requirements. Here, we present the implementation of a flexible, scalable and efficient particle-in-cell method for massively parallel finite-element codes with adaptively changing meshes. Using a modular plugin structure, we allow maximum flexibility in the generation of particles, the carried tracer properties, the advection and output algorithms, and the projection of properties to the finite-element mesh. We present scaling tests ranging up to tens of thousands of cores and tens of billions of particles. Additionally, we discuss efficient load-balancing strategies for particles in adaptive meshes with their strengths and weaknesses, local particle transfer between parallel subdomains utilizing existing communication patterns from the finite-element mesh, and the use of established parallel output algorithms like the HDF5 library.
Finally, we show some relevant particle application cases, compare our implementation to a modern advection-field approach, and demonstrate under which conditions each method is more efficient. We implemented the presented methods in ASPECT (aspect.dealii.org), a freely available open-source community code for geodynamic simulations. The structure of the particle code is highly modular and segregated from the PDE solver, and can thus be easily transferred to other programs or adapted to various application cases.
NASA Technical Reports Server (NTRS)
Usher, P. D.
1971-01-01
The almucantar radio telescope development and characteristics are presented. The radio telescope consists of a paraboloidal reflector free to rotate in azimuth but limited in altitude between two fixed angles from the zenith. The fixed angles are chosen so that sources lying between two small circles parallel to the horizon (almucantars) are accessible at any one instant. Basic geometrical considerations in the almucantar design are presented. The capabilities of the almucantar telescope for source counting and for monitoring, which are essential to a resolution of the cosmological problem, are described.
NASA Technical Reports Server (NTRS)
Bohning, O. D.; Becker, F. J.
1980-01-01
The design, fabrication, and test of a partially populated prototype recorder using 100-kilobit serial chips are described. The electrical interface, operating modes, and mechanical design of several module configurations are discussed. Fabrication and test of the module demonstrated the practicality of multiplexing, resulting in lower power, weight, and volume. This effort resulted in the completion of a module consisting of a fully engineered printed circuit storage board populated with 5 of 8 possible cells and a wire-wrapped electronics board. The module interface is 16 bits parallel at a maximum data rate of 1.33 megabits per second on either of two interface buses.
Kellner, Aaron; Freeman, Elizabeth B.; Carlson, Arthur S.
1958-01-01
Specific neutralizing antibodies directed against streptococcal DPNase were induced experimentally in rabbits and guinea pigs by the injection of partially purified preparations of the enzyme. Similar antibodies capable of inhibiting the biological activity of the enzyme were found to occur naturally in the serum of a very high percentage of human beings, and the titer of these antibodies often rose sharply following streptococcal infections. The antibody response to streptococcal DPNase in general paralleled that to streptolysin O, though in some instances antibodies to one increased when those to the other did not. PMID:13575667
Surgical correction of pectus arcuatum
Ershova, Ksenia; Adamyan, Ruben
2016-01-01
Background Pectus arcuatum is a rare congenital chest wall deformity, and methods of surgical correction are debatable. Methods Surgical correction of pectus arcuatum always includes one or more horizontal sternal osteotomies, resection of deformed rib cartilages, and finally anterior chest wall stabilization. The study was approved by the institutional ethics committee, and informed consent was obtained from every patient. Results In this video we show our modification of pectus arcuatum correction with only a partial sternal osteotomy and subsequent stabilization by vertical parallel titanium plates. Conclusions The reported method is a feasible option for surgical correction of pectus arcuatum. PMID:29078483
June and August median streamflows estimated for ungaged streams in southern Maine
Lombard, Pamela J.
2010-01-01
Methods for estimating June and August median streamflows were developed for ungaged, unregulated streams in southern Maine. The methods apply to streams with drainage areas ranging in size from 0.4 to 74 square miles, with percentage of basin underlain by a sand and gravel aquifer ranging from 0 to 84 percent, and with distance from the centroid of the basin to a Gulf of Maine line paralleling the coast ranging from 14 to 94 miles. Equations were developed with data from 4 long-term continuous-record streamgage stations and 27 partial-record streamgage stations. Estimates of median streamflows at the continuous-record and partial-record stations are presented. A mathematical technique for estimating standard low-flow statistics, such as June and August median streamflows, at partial-record streamgage stations was applied by relating base-flow measurements at these stations to concurrent daily streamflows at nearby long-term (at least 10 years of record) continuous-record streamgage stations (index stations). Weighted least-squares (WLS) regression analysis was used to relate estimates of June and August median streamflows at streamgage stations to basin characteristics at these same stations to develop equations that can be used to estimate June and August median streamflows on ungaged streams. WLS accounts for different periods of record at the gaging stations. Three basin characteristics (drainage area, percentage of basin underlain by a sand and gravel aquifer, and distance from the centroid of the basin to a Gulf of Maine line paralleling the coast) are used in the final regression equation to estimate June and August median streamflows for ungaged streams. The three-variable equation to estimate June median streamflow has an average standard error of prediction from -35 to 54 percent. The three-variable equation to estimate August median streamflow has an average standard error of prediction from -45 to 83 percent.
Simpler one-variable equations that use only drainage area to estimate June and August median streamflows were developed for use when less accuracy is acceptable. These equations have average standard errors of prediction from -46 to 87 percent and from -57 to 133 percent, respectively.
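The weighted least-squares step can be sketched as follows. The station data and weights below are invented for illustration (log-space regression with record length as the weight is a common convention for regional streamflow equations); the report's actual stations and coefficients are not reproduced.

```python
# Hedged sketch of WLS regional regression: regress log(median streamflow)
# on log(basin characteristics), weighting stations by years of record.
# All numbers below are made up for illustration.
import numpy as np

# columns: drainage area (mi^2), % sand-and-gravel aquifer (offset by 1
# to keep the logarithm finite), distance to coast line (mi)
chars = np.array([[5.0, 11.0, 20.0],
                  [40.0, 31.0, 60.0],
                  [12.0, 2.0, 35.0],
                  [70.0, 51.0, 90.0]])
q_med = np.array([1.2, 14.0, 2.0, 30.0])    # August median flow, ft^3/s
years = np.array([30.0, 12.0, 25.0, 10.0])  # record length -> WLS weights

X = np.column_stack([np.ones(len(q_med)), np.log10(chars)])
w = np.sqrt(years)                          # scale rows by sqrt(weight)
coef, *_ = np.linalg.lstsq(X * w[:, None], np.log10(q_med) * w, rcond=None)
print(coef)    # intercept and one exponent per basin characteristic
```

Scaling each row by the square root of its weight turns the weighted problem into an ordinary least-squares one, which is how longer-record stations end up with more influence on the fitted equation.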
Mattinson, C.G.; Colgan, J.P.; Metcalf, J.R.; Miller, E.L.; Wooden, J.L.
2007-01-01
Amphibolite-facies Proterozoic metasedimentary rocks below the low-angle Ceno-zoic Boundary Canyon Detachment record deep crustal processes related to Meso-zoic crustal thickening and subsequent extension. A 91.5 ?? 1.4 Ma Th-Pb SHRIMP-RG (sensitive high-resolution ion microprobe-reverse geometry) monazite age from garnet-kyanite-staurolite schist constrains the age of prograde metamorphism in the lower plate. Between the Boundary Canyon Detachment and the structurally deeper, subparallel Monarch Spring fault, prograde metamorphic fabrics are overprinted by a pervasive greenschist-facies retrogression, high-strain subhorizontal mylonitic foliation, and a prominent WNW-ESE stretching lineation parallel to corrugations on the Boundary Canyon Detachment. Granitic pegmatite dikes are deformed, rotated into parallelism, and boudinaged within the mylonitic foliation. High-U zircons from one muscovite granite dike yield an 85.8 ?? 1.4 Ma age. Below the Monarch Spring fault, retrogression is minor, and amphibolite-facies mineral elongation lineations plunge gently north to northeast. Multiple generations of variably deformed dikes, sills, and leucosomal segregations indicate a more complex history of partial melting and intrusion compared to that above the Monarch Spring fault, but thermobarometry on garnet amphibolites above and below the Monarch Spring fault record similar peak conditions of 620-680 ??C and 7-9 kbar, indicating minor (<3-5 km) structural omission across the Monarch Spring fault. Discordant SHRIMP-RG U-Pb zircon ages and 75-88 Ma Th-Pb monazite ages from leucosomal segregations in paragneisses suggest that partial melting of Proterozoic sedimentary protoliths was a source for the structurally higher 86 Ma pegmatites. Two weakly deformed two-mica leucogranite dikes that cut the high-grademetamorphic fabrics below the Monarch Spring fault yield 62.3 ?? 2.6 and 61.7 ?? 4.7 Ma U-Pb zircon ages, and contain 1.5-1.7 Ga cores. 
The similarity of metamorphic, leuco-some, and pegmatite ages to the period of Sevier belt thrusting and the period of most voluminous Sierran arc magmatism suggests that both burial by thrusting and regional magmatic heating contributed to metamorphism and subsequent partial melting. ??2007 Geological Society of America. All rights reserved.
Computational Challenges of 3D Radiative Transfer in Atmospheric Models
NASA Astrophysics Data System (ADS)
Jakub, Fabian; Bernhard, Mayer
2017-04-01
The computation of radiative heating and cooling rates is one of the most expensive components in today's atmospheric models. The high computational cost stems not only from the laborious integration over a wide range of the electromagnetic spectrum but also from the fact that solving the integro-differential radiative transfer equation for monochromatic light is already rather involved. This led to the advent of numerous approximations and parameterizations to reduce the cost of the solver. One of the most prominent is the so-called independent pixel approximation (IPA), in which horizontal energy transfer is neglected altogether and radiation may only propagate in the vertical direction (1D). Recent studies indicate that the IPA introduces significant errors in high-resolution simulations and affects the evolution and development of convective systems. However, using fully 3D solvers such as Monte Carlo methods is not feasible even on state-of-the-art supercomputers. The parallelization of atmospheric models is often realized by a horizontal domain decomposition, and hence horizontal transfer of energy necessitates communication. For example, a cloud at a low solar zenith angle casts a long shadow that may need to be communicated across a multitude of processors. Light in the solar spectral range especially may travel long distances through the atmosphere. Concerning highly parallel simulations, it is therefore vital that 3D radiative transfer solvers put a special emphasis on parallel scalability. We will present an introduction to the intricacies of computing 3D radiative heating and cooling rates, as well as report on the parallel performance of the TenStream solver. The TenStream is a 3D radiative transfer solver using the PETSc framework to iteratively solve a set of partial differential equations. We investigate two matrix preconditioners: (a) geometric algebraic multigrid preconditioning (MG+GAMG) and (b) block-Jacobi incomplete LU (ILU) factorization.
The TenStream solver is tested for up to 4096 cores and shows a parallel scaling efficiency of 80-90% on various supercomputers.
Biomechanical Effects of Stiffness in Parallel With the Knee Joint During Walking.
Shamaei, Kamran; Cenciarini, Massimo; Adams, Albert A; Gregorczyk, Karen N; Schiffman, Jeffrey M; Dollar, Aaron M
2015-10-01
The human knee behaves similarly to a linear torsional spring during the stance phase of walking with a stiffness referred to as the knee quasi-stiffness. The spring-like behavior of the knee joint led us to hypothesize that we might partially replace the knee joint contribution during stance by utilizing an external spring acting in parallel with the knee joint. We investigated the validity of this hypothesis using a pair of experimental robotic knee exoskeletons that provided an external stiffness in parallel with the knee joints in the stance phase. We conducted a series of experiments involving walking with the exoskeletons with four levels of stiffness, including 0%, 33%, 66%, and 100% of the estimated human knee quasi-stiffness, and a pair of joint-less replicas. The results indicated that the ankle and hip joints tend to retain relatively invariant moment and angle patterns under the effects of the exoskeleton mass, articulation, and stiffness. The results also showed that the knee joint responds in a way such that the moment and quasi-stiffness of the knee complex (knee joint and exoskeleton) remains mostly invariant. A careful analysis of the knee moment profile indicated that the knee moment could fully adapt to the assistive moment; whereas, the knee quasi-stiffness fully adapts to values of the assistive stiffness only up to ∼80%. Above this value, we found biarticular consequences emerge at the hip joint.
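The quasi-stiffness concept above reduces to a linear fit of the stance-phase moment-angle relationship. The sketch below uses invented numbers, not the study's data, to show how the quasi-stiffness is estimated and how a parallel spring providing a fraction of it would reduce the residual biological moment.

```python
# Sketch of the knee quasi-stiffness idea with illustrative numbers:
# fit a linear torsional spring to a moment-angle loop, then subtract
# the assistive moment of a parallel spring at several stiffness levels.
import numpy as np

theta = np.deg2rad(np.array([5.0, 10.0, 15.0, 20.0, 15.0, 10.0]))  # knee angle, rad
moment = np.array([8.0, 17.0, 27.0, 35.0, 26.0, 18.0])             # knee moment, N*m

# quasi-stiffness: least-squares slope of moment vs. angle
A = np.column_stack([theta, np.ones_like(theta)])
k_quasi, _ = np.linalg.lstsq(A, moment, rcond=None)[0]

for frac in (0.0, 0.33, 0.66, 1.0):       # exoskeleton stiffness levels
    residual = moment - frac * k_quasi * theta
    print(f"{frac:.2f} -> peak biological moment {residual.max():.1f} N*m")
```

The progression of the residual peak with increasing spring fraction mirrors the hypothesis tested in the study: if the knee complex keeps an invariant total moment, the biological contribution should shrink as the external stiffness grows.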
NASA Astrophysics Data System (ADS)
Vera, N. C.; GMMC
2013-05-01
In this paper we present results for macrohybrid mixed Darcian flow in porous media in a general three-dimensional domain. The global problem is solved as a set of local subproblems posed using a domain decomposition method. The unknown fields of the local problems, velocity and pressure, are approximated using mixed finite elements. For this application, a general three-dimensional domain is considered and discretized using tetrahedra. The discrete domain is decomposed into subdomains, and the original problem is reformulated as a set of subproblems communicating through their interfaces. To solve this set of subproblems, we use mixed finite elements and parallel computing. Parallelizing a problem with this methodology can, in principle, fully exploit the computing equipment and also provide results in less time, two very important considerations in modeling. References: G. Alduncin and N. Vera-Guzmán, Parallel proximal-point algorithms for mixed finite element models of flow in the subsurface, Commun. Numer. Meth. Engng 2004; 20:83-104 (DOI: 10.1002/cnm.647). Z. Chen, G. Huan and Y. Ma, Computational Methods for Multiphase Flows in Porous Media, SIAM, Society for Industrial and Applied Mathematics, Philadelphia, 2006. A. Quarteroni and A. Valli, Numerical Approximation of Partial Differential Equations, Springer-Verlag, Berlin, 1994. F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods, Springer, New York, 1991.
NASA Astrophysics Data System (ADS)
Einkemmer, Lukas
2016-05-01
The recently developed semi-Lagrangian discontinuous Galerkin approach is used to discretize hyperbolic partial differential equations (usually first order equations). Since these methods are conservative, local in space, and able to limit numerical diffusion, they are considered a promising alternative to more traditional semi-Lagrangian schemes (which are usually based on polynomial or spline interpolation). In this paper, we consider a parallel implementation of a semi-Lagrangian discontinuous Galerkin method for distributed memory systems (so-called clusters). Both strong and weak scaling studies are performed on the Vienna Scientific Cluster 2 (VSC-2). In the case of weak scaling we observe a parallel efficiency above 0.8 for both two and four dimensional problems and up to 8192 cores. Strong scaling results show good scalability to at least 512 cores (we consider problems that can be run on a single processor in reasonable time). In addition, we study the scaling of a two dimensional Vlasov-Poisson solver that is implemented using the framework provided. All of the simulations are conducted in the context of worst case communication overhead; i.e., in a setting where the CFL (Courant-Friedrichs-Lewy) number increases linearly with the problem size. The framework introduced in this paper facilitates a dimension independent implementation of scientific codes (based on C++ templates) using both an MPI and a hybrid approach to parallelization. We describe the essential ingredients of our implementation.
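A minimal one-dimensional semi-Lagrangian step illustrates why such schemes tolerate the CFL numbers mentioned above: each cell simply interpolates the solution at its backward-traced departure point, however far away that is. This is a generic sketch with linear interpolation, not the paper's discontinuous Galerkin implementation.

```python
# Minimal 1D semi-Lagrangian advection step (periodic domain, linear
# interpolation). Generic sketch, not the paper's DG code.
import numpy as np

def sl_step(u, a, dt, dx):
    n = len(u)
    x = np.arange(n) * dx
    feet = (x - a * dt) % (n * dx)          # backward-traced departure points
    j = (feet // dx).astype(int)            # left neighbour index
    frac = feet / dx - j
    return (1.0 - frac) * u[j] + frac * u[(j + 1) % n]

n, dx = 64, 1.0 / 64
u = np.sin(2 * np.pi * np.arange(n) * dx)
u_new = sl_step(u, a=1.0, dt=4 * dx, dx=dx)  # CFL = 4, still stable
print(np.abs(u_new).max())
```

The CFL number here only determines how far the departure points lie from their cells; in a domain-decomposed parallel run that distance sets the communication stencil width, which is exactly the worst-case overhead regime the scaling study above considers.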
PARALLEL PERTURBATION MODEL FOR CYCLE TO CYCLE VARIABILITY PPM4CCV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ameen, Muhsin Mohammed; Som, Sibendu
This code consists of a Fortran 90 implementation of the parallel perturbation model to compute cyclic variability in spark ignition (SI) engines. Cycle-to-cycle variability (CCV) is known to be detrimental to SI engine operation, resulting in partial burn and knock and in an overall reduction in engine reliability. Numerical prediction of CCV in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flow field, and (ii) CCV is experienced over long timescales and hence the simulations need to be performed for hundreds of consecutive cycles. In the new technique, the strategy is to perform multiple parallel simulations, each of which encompasses 2-3 cycles, by effectively perturbing simulation parameters such as the initial and boundary conditions. The PPM4CCV code is a pre-processing code and can be coupled with any engine CFD code. PPM4CCV was coupled with the Converge CFD code, and a 10-fold speedup over conventional multi-cycle LES was demonstrated in predicting the CCV for a motored engine. Recently, the model is also being applied to fired engines, including port fuel injected (PFI) and direct injection spark ignition engines, and the preliminary results are very encouraging.
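The ensemble strategy above can be caricatured in a few lines: instead of one long serial multi-cycle run, launch many short runs concurrently, each with slightly perturbed initial conditions, and measure variability across the ensemble. The toy "simulation" below is invented for illustration and stands in for an engine CFD job.

```python
# Toy illustration of the perturbation strategy (not the PPM4CCV code):
# run many short perturbed members in parallel and compute a CCV metric.
import statistics
from concurrent.futures import ThreadPoolExecutor

def toy_cycle(seed):
    """Stand-in for a 2-3 cycle engine simulation: returns a 'peak pressure'."""
    import random
    rng = random.Random(seed)
    u0 = 10.0 + rng.gauss(0.0, 0.5)       # perturbed initial turbulence level
    return 50.0 + 2.0 * u0 + rng.gauss(0.0, 1.0)

with ThreadPoolExecutor(max_workers=8) as pool:
    peaks = list(pool.map(toy_cycle, range(64)))   # 64 independent members

cov = statistics.stdev(peaks) / statistics.mean(peaks)  # CCV metric
print(f"COV of peak pressure: {100 * cov:.1f}%")
```

Because the members are independent, wall-clock time is set by one short run rather than by hundreds of consecutive cycles, which is the source of the speedup reported above.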
Origin of conductivity anomalies in the asthenosphere
NASA Astrophysics Data System (ADS)
Yoshino, T.; Zhang, B.
2013-12-01
Electrical conductivity anomalies with anisotropy parallel to the plate motion have been observed beneath the oceanic lithosphere by electromagnetic studies (e.g., Evans et al., 2005; Baba et al., 2010; Naif et al., 2013). The electrical conductivity of the oceanic asthenosphere at ~100 km depth is very high, about 10^-2 to 10^-1 S/m. This zone is also known in seismology as the low velocity zone. Since Karato (1990) first suggested that electrical conductivity is sensitive to the water content of nominally anhydrous minerals (NAMs), softening of the asthenosphere has been regarded as a good indicator for constraining the distribution of water. There are two difficulties in explaining the observed conductivity features of the asthenosphere. Recent publications on the electrical conductivity of hydrous olivine suggest that olivine with the maximum soluble H2O content at the top of the asthenosphere has a conductivity well below 0.1 S/m (e.g., Yoshino et al., 2006; 2009a; Poe et al., 2010; Du Frane and Tyburczy, 2012; Yang, 2012), the typical value of the conductivity anomaly observed in the oceanic mantle. Partial melting has been considered an attractive agent for substantially raising the conductivity in this region (Shankland and Waff, 1977), because basaltic melt has greater electrical conductivity (> 10^0.5 S/m) and high wetting properties. However, dry mantle peridotite cannot reach its solidus temperature at 100 km depth. Volatile components can dramatically reduce the melting temperature, even if their amount is very small. Recent studies on conductivity measurements of volatile-bearing melts suggest that the conductivity of melt dramatically increases with increasing volatile content (H2O: Ni et al., 2010a, b; CO2: Gaillard et al., 2008; Yoshino et al., 2010; 2012a). Because incipient melt includes a higher amount of volatile components, conductivity enhancement by partial melt is very effective at temperatures just above the volatile-bearing peridotite solidus.
In this study, the electrical conductivity of peridotite with a trace amount of volatile phases was measured at 3 GPa in a single-crystal olivine capsule to prevent the escape of water from the sample. The conductivity values were significantly higher than those of dry peridotite, suggesting that the observed conductivity anomalies in the asthenosphere are caused by the presence of a trace amount of volatile components in fluid or melt. On the other hand, the conductivity of partially molten peridotite measured under shear showed that the conductivity parallel to the shear direction becomes one order of magnitude higher than that in the normal direction. These observations suggest that partial melting can explain the softening and the observed geophysical anomalies of the asthenosphere.
An observational and thermodynamic investigation of carbonate partial melting
NASA Astrophysics Data System (ADS)
Floess, David; Baumgartner, Lukas P.; Vonlanthen, Pierre
2015-01-01
Melting experiments available in the literature show that carbonates and pelites melt at similar conditions in the crust. While partial melting of pelitic rocks is common and well-documented, reports of partial melting in carbonates are rare and ambiguous, mainly because of intensive recrystallization and the resulting lack of criteria for unequivocal identification of melting. Here we present microstructural, textural, and geochemical evidence for partial melting of calcareous dolomite marbles in the contact aureole of the Tertiary Adamello Batholith. Petrographic observations and X-ray micro-computed tomography (X-ray μCT) show that calcite crystallized either in cm- to dm-scale melt pockets, or as an interstitial phase forming an interconnected network between dolomite grains. Calcite-dolomite thermometry yields a temperature of at least 670 °C, which is well above the minimum melting temperature of ∼600 °C reported for the CaO-MgO-CO2-H2O system. Rare-earth element (REE) calcite/dolomite partition coefficients (KD cc/do) range between 9 and 35 for adjacent calcite-dolomite pairs. These KD values are 3-10 times higher than equilibrium values between dolomite and calcite reported in the literature. They suggest partitioning of incompatible elements into a melt phase. The δ18O and δ13C isotopic values of calcite and dolomite support this interpretation. Crystallographic orientations measured by electron backscatter diffraction (EBSD) show a clustering of c-axes for dolomite and interstitial calcite normal to the foliation plane, a typical feature of compressional deformation, whereas calcite crystallized in pockets shows a strong clustering of c-axes parallel to the pocket walls, suggesting that it crystallized after deformation had stopped. Taken together, these observations suggest the formation of partial melts in these carbonates.
A Schreinemakers analysis of the experimental data for a CO2-H2O fluid-saturated system indeed predicts formation of calcite-rich melt between 650 and 880 °C, in agreement with our observations of partial melting. The presence of partial melts in crustal carbonates has important physical and chemical implications, including a drastic drop in rock viscosity and significant changes in the dynamics and distribution of fluids within both the contact aureole and the intrusive body.
Qi, Delin; Chao, Yan; Guo, Songchang; Zhao, Lanying; Li, Taiping; Wei, Fulei; Zhao, Xinquan
2012-01-01
Schizothoracine fishes distributed in the water system of the Qinghai-Tibetan plateau (QTP) and adjacent areas are characterized by being highly adaptive to the cold and hypoxic environment of the plateau, as well as by a high degree of diversity in trophic morphology due to resource polymorphisms. Although convergent and parallel evolution are prevalent in organisms of the QTP, it remains unknown whether similar evolutionary patterns have occurred in the schizothoracine fishes. Here, we constructed for the first time a tentative molecular phylogeny of the schizothoracine fishes based on the complete sequences of the cytochrome b gene. We employed this molecular phylogenetic framework to examine the evolution of trophic morphologies. We used Pagel's maximum likelihood method to estimate the evolutionary associations of trophic morphologies and food resource use. Our results showed that the molecular and published morphological phylogenies of the Schizothoracinae are partially incongruent with respect to some intergeneric relationships. The phylogenetic results revealed that four character states of five trophic morphologies and of food resource use evolved at least twice during the diversification of the subfamily. State transitions are the result of evolutionary patterns including convergence, parallelism, or both. Furthermore, our analyses indicate that some trophic morphology characters in the Schizothoracinae have undergone correlated evolution and are associated with different food resource uses. Collectively, our results reveal new examples of convergent and parallel evolution in organisms of the QTP. The adaptation to different trophic niches through the modification of trophic morphologies and feeding behaviour, as found in the schizothoracine fishes, may account for the formation and maintenance of the high degree of diversity and radiations in fish communities endemic to the QTP. PMID:22470515
Ojeda-May, Pedro; Nam, Kwangho
2017-08-08
The strategy and implementation of scalable and efficient semiempirical (SE) QM/MM methods in CHARMM are described. The serial version of the code was first profiled to identify routines that required parallelization. Afterward, the code was parallelized and accelerated with three approaches. The first approach was the parallelization of the entire QM/MM routines, including the Fock matrix diagonalization routines, using the CHARMM message passing interface (MPI) machinery. In the second approach, two different self-consistent field (SCF) energy convergence accelerators were implemented using density and Fock matrices as targets for their extrapolations in the SCF procedure. In the third approach, the entire QM/MM and MM energy routines were accelerated by implementing the hybrid MPI/open multiprocessing (OpenMP) model, in which both task- and loop-level parallelization strategies were adopted to balance loads between different OpenMP threads. The present implementation was tested on two solvated enzyme systems (including <100 QM atoms) and an SN2 symmetric reaction in water. The MPI version outperformed existing SE QM methods in CHARMM, which include the SCC-DFTB and SQUANTUM methods, by at least 4-fold. The use of SCF convergence accelerators further accelerated the code by ∼12-35% depending on the size of the QM region and the number of CPU cores used. Although the MPI version displayed good scalability, the performance was diminished for large numbers of MPI processes due to the overhead associated with MPI communications between nodes. This issue was partially overcome by the hybrid MPI/OpenMP approach, which displayed better scalability for a larger number of CPU cores (up to 64 CPUs in the tested systems).
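The abstract does not name the two SCF convergence accelerators; Pulay's DIIS extrapolation of recent density/Fock iterates is the standard technique of this kind. Below is a minimal sketch on a toy linear fixed-point map; the map `g`, its dimensions, and the history length are illustrative assumptions, not the CHARMM implementation.

```python
import numpy as np

# Toy linear "SCF-like" fixed-point map x -> A x + b (spectral radius
# 0.45), standing in for the density/Fock update; not a real QM/MM model.
A = np.diag([0.45, 0.35, 0.25, 0.15])
b = np.array([1.0, -2.0, 0.5, 3.0])

def g(x):
    return A @ x + b

def fixed_point(g, x0, tol=1e-10, max_iter=500):
    """Plain (unaccelerated) SCF-style iteration."""
    x = x0
    for it in range(1, max_iter + 1):
        fx = g(x)
        if np.linalg.norm(fx - x) < tol:
            return fx, it
        x = fx
    return x, max_iter

def diis(g, x0, max_hist=5, tol=1e-10, max_iter=500):
    """DIIS/Pulay: extrapolate over the last few iterates by minimizing
    the norm of the combined error vector, subject to sum(c) = 1."""
    xs, es, x = [], [], x0
    for it in range(1, max_iter + 1):
        fx = g(x)
        e = fx - x                         # DIIS error vector
        if np.linalg.norm(e) < tol:
            return fx, it
        xs.append(fx); es.append(e)
        xs, es = xs[-max_hist:], es[-max_hist:]
        n = len(es)
        B = np.zeros((n + 1, n + 1))       # bordered DIIS system
        B[:n, :n] = [[ei @ ej for ej in es] for ei in es]
        B[n, :n] = B[:n, n] = -1.0
        rhs = np.zeros(n + 1); rhs[n] = -1.0
        try:
            c = np.linalg.solve(B, rhs)[:n]
            x = sum(ci * xi for ci, xi in zip(c, xs))
        except np.linalg.LinAlgError:
            x = fx                         # fall back to a plain step
    return x, max_iter

x_plain, it_plain = fixed_point(g, np.zeros(4))
x_fast, it_fast = diis(g, np.zeros(4))
```

On this contraction the extrapolated iteration converges in a handful of steps versus dozens for the plain one, which mirrors the kind of gain such accelerators provide.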
Performance Evaluation and Modeling Techniques for Parallel Processors. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Dimpsey, Robert Tod
1992-01-01
In practice, the performance evaluation of supercomputers is still substantially driven by single-point estimates of metrics (e.g., MFLOPS) obtained by running characteristic benchmarks or workloads. With the rapid increase in the use of time-shared multiprogramming in these systems, such measurements are clearly inadequate. This is because multiprogramming and system overhead, as well as other degradations in performance due to the time-varying characteristics of workloads, are not taken into account. In multiprogrammed environments, multiple jobs and users can dramatically increase the amount of system overhead and degrade the performance of the machine. Performance techniques, such as benchmarking, which characterize performance on a dedicated machine, ignore this major component of true computer performance. Due to the complexity of analysis, little work has been done in analyzing, modeling, and predicting the performance of applications in multiprogrammed environments. This is especially true for parallel processors, where the costs and benefits of multi-user workloads are exacerbated. While some may claim that the issue of multiprogramming is not a viable one in the supercomputer market, experience shows otherwise. Even in recent massively parallel machines, multiprogramming is a key component. It has even been claimed that a partial cause of the demise of the CM2 was the fact that it did not efficiently support time-sharing. In the same paper, Gordon Bell postulates that multicomputers will evolve to multiprocessors in order to support efficient multiprogramming. Therefore, it is clear that parallel processors of the future will be required to offer the user a time-shared environment with reasonable response times for the applications. In this type of environment, the most important performance metric is the completion, or response, time of a given application. However, there have been few evaluation efforts addressing this issue.
On a model of three-dimensional bursting and its parallel implementation
NASA Astrophysics Data System (ADS)
Tabik, S.; Romero, L. F.; Garzón, E. M.; Ramos, J. I.
2008-04-01
A mathematical model for the simulation of three-dimensional bursting phenomena and its parallel implementation are presented. The model consists of four nonlinearly coupled partial differential equations that include fast and slow variables, and exhibits bursting in the absence of diffusion. The differential equations have been discretized by means of a linearly-implicit finite difference method, second-order accurate in both space and time, on equally spaced grids. The resulting system of linear algebraic equations at each time level has been solved by means of the Preconditioned Conjugate Gradient (PCG) method. Three different parallel implementations of the proposed mathematical model have been developed; two of these implementations, i.e., the MPI and the PETSc codes, are based on a message passing paradigm, while the third one, i.e., the OpenMP code, is based on a shared address space paradigm. These three implementations are evaluated on two current high performance parallel architectures, i.e., a dual-processor cluster and a Shared Distributed Memory (SDM) system. A novel representation of the results that emphasizes the most relevant factors affecting the performance of the parallel implementations is proposed. The comparative analysis of the computational results shows that the MPI and the OpenMP implementations are about twice as efficient as the PETSc code on the SDM system. It is also shown that, for the conditions reported here, the nonlinear dynamics of the three-dimensional bursting phenomena exhibits three stages characterized by asynchronous, synchronous and then asynchronous oscillations, before a quiescent state is reached. It is also shown that the fast system reaches steady state in much less time than the slow variables.
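As an illustration of the solver stage, here is a Jacobi-preconditioned conjugate gradient applied to one linearly-implicit diffusion step. The 1D operator, grid size, and diagonal preconditioner are simplifying assumptions for the sketch; the paper's problem is three-dimensional and its preconditioner is not specified here.

```python
import numpy as np

def pcg(A, rhs, M_inv_diag, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient (Jacobi/diagonal preconditioner)
    for a symmetric positive-definite matrix A."""
    x = np.zeros_like(rhs)
    r = rhs - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# One linearly-implicit step of u_t = u_xx on n interior points:
# (I - dt/h^2 * D2) u_new = u_old, with D2 the standard 3-point Laplacian.
n, dt = 50, 1e-3
h = 1.0 / (n + 1)
D2 = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
A = np.eye(n) - (dt / h**2) * D2
u_old = np.sin(np.pi * h * np.arange(1, n + 1))
u_new = pcg(A, u_old, M_inv_diag=1.0 / np.diag(A))
```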
Advancing MODFLOW Applying the Derived Vector Space Method
NASA Astrophysics Data System (ADS)
Herrera, G. S.; Herrera, I.; Lemus-García, M.; Hernandez-Garcia, G. D.
2015-12-01
The most effective domain decomposition methods (DDM) are non-overlapping DDMs. Recently a new approach, the DVS framework, based on an innovative discretization method that uses a non-overlapping system of nodes (the derived-nodes), was introduced and developed by I. Herrera et al. [1, 2]. Using the DVS approach, a group of four algorithms, referred to as the 'DVS-algorithms', which fulfill the DDM-paradigm (i.e., the solution of global problems is obtained by the resolution of local problems exclusively), has been derived. Such procedures are applicable to any boundary-value problem, or system of such equations, for which a standard discretization method is available, and software with a high degree of parallelization can then be constructed. In a parallel talk at this AGU Fall Meeting, Ismael Herrera will introduce the general DVS methodology. The application of the DVS-algorithms has been demonstrated in the solution of several boundary-value problems of interest in Geophysics. Numerical examples for a single equation, for the cases of symmetric, non-symmetric and indefinite problems, were demonstrated before [1,2]. For these problems DVS-algorithms exhibited significantly improved numerical performance with respect to standard versions of DDM algorithms. In view of these results, our research group is applying the DVS method to a widely used simulator for the first time; here we present the advances in the application of this method to the parallelization of MODFLOW. Efficiency results for a group of tests will be presented. References [1] I. Herrera, L.M. de la Cruz and A. Rosas-Medina. Non overlapping discretization methods for partial differential equations, Numer Meth Part D E, (2013). [2] Herrera, I., & Contreras Iván "An Innovative Tool for Effectively Applying Highly Parallelized Software To Problems of Elasticity". Geofísica Internacional, 2015 (In press)
A simple hyperbolic model for communication in parallel processing environments
NASA Technical Reports Server (NTRS)
Stoica, Ion; Sultan, Florin; Keyes, David
1994-01-01
We introduce a model for communication costs in parallel processing environments, called the 'hyperbolic model,' which generalizes two-parameter dedicated-link models in an analytically simple way. Dedicated interprocessor links parameterized by a latency and a transfer rate that are independent of load are assumed by many existing communication models; such models are unrealistic for workstation networks. The communication system is modeled as a directed communication graph in which terminal nodes represent the application processes that initiate the sending and receiving of the information and in which internal nodes, called communication blocks (CBs), reflect the layered structure of the underlying communication architecture. The direction of graph edges specifies the flow of the information carried through messages. Each CB is characterized by a two-parameter hyperbolic function of the message size that represents the service time needed for processing the message. The parameters are evaluated in the limits of very large and very small messages. Rules are given for reducing a communication graph consisting of many CBs to an equivalent two-parameter form, while maintaining an approximation for the service time that is exact in both the large- and small-message limits. The model is validated on a dedicated Ethernet network of workstations by experiments with communication subprograms arising in scientific applications, for which a tight fit of the model predictions with actual measurements of the communication and synchronization time between end processes is demonstrated. The model is then used to evaluate the performance of two simple parallel scientific applications from partial differential equations: domain decomposition and time-parallel multigrid. In an appropriate limit, we also show the compatibility of the hyperbolic model with the recently proposed LogP model.
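The paper's exact hyperbola is not reproduced here; the sketch below uses one convenient two-parameter hyperbolic interpolant with the stated asymptotics (latency at zero size, constant transfer rate for large messages) and a series-reduction rule that is exact in both limits. All parameter values are invented for illustration.

```python
import math

def tau(m, alpha, beta):
    """Service time (s) for an m-byte message through one communication
    block: alpha = zero-size latency (s), beta = asymptotic transfer
    rate (bytes/s). A hyperbola with tau(0) = alpha and tau(m) -> m/beta."""
    return math.sqrt(alpha ** 2 + (m / beta) ** 2)

def reduce_series(blocks):
    """Collapse CBs traversed in sequence into one (alpha, beta) pair:
    latencies add, inverse rates add; exact in both message-size limits."""
    alpha = sum(a for a, _ in blocks)
    beta = 1.0 / sum(1.0 / b for _, b in blocks)
    return alpha, beta

# Invented parameters: a software layer plus a 10 Mbit/s-class link.
path = [(50e-6, 10e6), (120e-6, 1.25e6)]
alpha_eq, beta_eq = reduce_series(path)
```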
Revisiting Molecular Dynamics on a CPU/GPU system: Water Kernel and SHAKE Parallelization.
Ruymgaart, A Peter; Elber, Ron
2012-11-13
We report Graphics Processing Unit (GPU) and OpenMP parallel implementations of water-specific force calculations and of bond constraints for use in Molecular Dynamics simulations. We focus on a typical laboratory computing environment in which a CPU with a few cores is attached to a GPU. We discuss the design of the code in detail and we illustrate performance comparable to highly optimized codes such as GROMACS. Besides speed, our code shows excellent energy conservation. Utilization of water-specific lists allows the efficient calculation of non-bonded interactions that include water molecules and results in a speed-up factor of more than 40 on the GPU compared to code optimized on a single CPU core for systems larger than 20,000 atoms. This is up four-fold from the factor of 10 reported in our initial GPU implementation, which did not include water-specific code. Another optimization is the implementation of constrained dynamics entirely on the GPU. The routine, which enforces constraints on all bonds, runs in parallel on multiple OpenMP cores or entirely on the GPU. It is based on a Conjugate Gradient solution for the Lagrange multipliers (CG SHAKE). The GPU implementation is partially in double precision and requires no communication with the CPU during the execution of the SHAKE algorithm. The (parallel) implementation of SHAKE allows an increase of the time step to 2.0 fs while maintaining excellent energy conservation. Interestingly, CG SHAKE is faster than the usual bond relaxation algorithm even on a single core if high accuracy is expected. The significant speedup of the optimized components transfers the computational bottleneck of the MD calculation to the reciprocal part of Particle Mesh Ewald (PME).
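A didactic sketch of the matrix form of SHAKE with a conjugate gradient solve of the Lagrange-multiplier system, for a single rigid water-like molecule; the geometry, masses, and tolerances are illustrative assumptions, and this is not the authors' GPU code.

```python
import numpy as np

def cg(A, b, tol=1e-12, max_iter=100):
    """Plain conjugate gradient for the small SPD constraint matrix."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

def cg_shake(r_in, masses, bonds, lengths, tol=1e-10, max_iter=50):
    """Newton iteration on bond-length constraints: each step solves
    (J M^-1 J^T) lam = sigma by CG and moves atoms along M^-1 J^T lam."""
    r = r_in.copy()
    n = len(masses)
    minv = np.repeat(1.0 / masses, 3)          # per-coordinate inverse mass
    for _ in range(max_iter):
        sigma = np.array([(r[a] - r[b]) @ (r[a] - r[b]) - d * d
                          for (a, b), d in zip(bonds, lengths)])
        if np.max(np.abs(sigma)) < tol:
            break
        J = np.zeros((len(bonds), 3 * n))      # d(sigma_k)/d(coordinates)
        for k, (a, b) in enumerate(bonds):
            g = 2.0 * (r[a] - r[b])
            J[k, 3 * a:3 * a + 3] = g
            J[k, 3 * b:3 * b + 3] = -g
        lam = cg((J * minv) @ J.T, sigma)
        r -= (minv * (J.T @ lam)).reshape(n, 3)
    return r

# Rigid water-like triatomic: O-H, O-H and H-H distance constraints.
masses = np.array([15.999, 1.008, 1.008])
bonds = [(0, 1), (0, 2), (1, 2)]
lengths = [0.9572, 0.9572, 1.5139]
r0 = np.array([[0.0, 0.0, 0.0],               # slightly distorted start,
               [0.98, 0.02, 0.0],             # as after an unconstrained step
               [-0.25, 0.95, 0.03]])
r_fixed = cg_shake(r0, masses, bonds, lengths)
```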
Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue
2017-01-01
With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. PMID:28672813
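To illustrate how a fixed-point FFT trades word length against accuracy, the toy model below re-quantizes the twiddles and every intermediate of a radix-2 FFT to a given number of fractional bits and compares against double precision. It is a simplified stand-in for the paper's error propagation model: no scaling or overflow handling, and all parameters are invented.

```python
import numpy as np

def quantize(v, frac_bits):
    """Round to a fixed-point grid with `frac_bits` fractional bits."""
    s = 2.0 ** frac_bits
    return np.round(v * s) / s

def q_complex(v, frac_bits):
    return quantize(v.real, frac_bits) + 1j * quantize(v.imag, frac_bits)

def fft_fixed(x, frac_bits):
    """Radix-2 DIT FFT with twiddles and every intermediate re-quantized,
    a toy stand-in for a fixed-point datapath (unlike a real ASIC, there
    is no block scaling or overflow handling)."""
    n = len(x)
    if n == 1:
        return x.astype(complex)
    even = fft_fixed(x[0::2], frac_bits)
    odd = fft_fixed(x[1::2], frac_bits)
    tw = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    t = q_complex(odd * q_complex(tw, frac_bits), frac_bits)
    return q_complex(np.concatenate([even + t, even - t]), frac_bits)

rng = np.random.default_rng(3)
x = rng.uniform(-1.0, 1.0, 1024)
X_ref = np.fft.fft(x)

def rel_err(a, b):
    return np.linalg.norm(a - b) / np.linalg.norm(b)

err_8bit = rel_err(fft_fixed(x, 7), X_ref)    # ~8-bit datapath
err_16bit = rel_err(fft_fixed(x, 15), X_ref)  # ~16-bit datapath
```

Sweeping the word length in such a model, and picking the smallest length whose error stays below the imaging-quality budget, is the general idea behind word-length determination.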
Dinç, Erdal; Ertekin, Zehra Ceren
2016-01-01
An application of parallel factor analysis (PARAFAC) and three-way partial least squares (3W-PLS1) regression models to ultra-performance liquid chromatography-photodiode array detection (UPLC-PDA) data with co-eluted peaks in the same wavelength and time regions was described for the multicomponent quantitation of hydrochlorothiazide (HCT) and olmesartan medoxomil (OLM) in tablets. Three-way dataset of HCT and OLM in their binary mixtures containing telmisartan (IS) as an internal standard was recorded with a UPLC-PDA instrument. Firstly, the PARAFAC algorithm was applied for the decomposition of three-way UPLC-PDA data into the chromatographic, spectral and concentration profiles to quantify the concerned compounds. Secondly, 3W-PLS1 approach was subjected to the decomposition of a tensor consisting of three-way UPLC-PDA data into a set of triads to build 3W-PLS1 regression for the analysis of the same compounds in samples. For the proposed three-way analysis methods in the regression and prediction steps, the applicability and validity of PARAFAC and 3W-PLS1 models were checked by analyzing the synthetic mixture samples, inter-day and intra-day samples, and standard addition samples containing HCT and OLM. Two different three-way analysis methods, PARAFAC and 3W-PLS1, were successfully applied to the quantitative estimation of the solid dosage form containing HCT and OLM. Regression and prediction results provided from three-way analysis were compared with those obtained by traditional UPLC method. Copyright © 2015 Elsevier B.V. All rights reserved.
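For readers unfamiliar with PARAFAC, the decomposition is commonly computed with alternating least squares (ALS). The sketch below recovers the factors of a small synthetic rank-2 tensor; the dimensions, rank, and iteration count are arbitrary, and production chemometrics software adds normalization, convergence checks, and non-negativity options.

```python
import numpy as np

rng = np.random.default_rng(5)
R = 2
A0, B0, C0 = (rng.standard_normal((d, R)) for d in (6, 5, 4))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)     # exact rank-2 tensor

def khatri_rao(U, V):
    """Column-wise Kronecker product; rows ordered with V's index fastest."""
    return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

def parafac_als(X, R, n_iter=2000, seed=0):
    """Alternating least squares for the CP/PARAFAC model."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    X1 = X.reshape(I, J * K)                      # rows i, cols (j,k)
    X2 = X.transpose(1, 0, 2).reshape(J, I * K)   # rows j, cols (i,k)
    X3 = X.transpose(2, 0, 1).reshape(K, I * J)   # rows k, cols (i,j)
    for _ in range(n_iter):
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

A, B, C = parafac_als(X, R)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
fit_err = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
```

In the chromatographic setting, the three factor matrices correspond to elution, spectral, and concentration profiles, which is what makes the co-eluted peaks separable.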
NASA Astrophysics Data System (ADS)
Konduri, Aditya
Many natural and engineering systems are governed by nonlinear partial differential equations (PDEs) which result in multiscale phenomena, e.g. turbulent flows. Numerical simulations of these problems are computationally very expensive and demand extreme levels of parallelism. At realistic conditions, simulations are being carried out on massively parallel computers with hundreds of thousands of processing elements (PEs). It has been observed that communication between PEs as well as their synchronization at these extreme scales take up a significant portion of the total simulation time and result in poor scalability of codes. This issue is likely to pose a bottleneck in the scalability of codes on future Exascale systems. In this work, we propose an asynchronous computing algorithm based on widely used finite difference methods to solve PDEs in which synchronization between PEs due to communication is relaxed at a mathematical level. We show that while stability is conserved when schemes are used asynchronously, accuracy is greatly degraded. Since message arrivals at PEs are random processes, so is the behavior of the error. We propose a new statistical framework in which we show that average errors always drop to first order regardless of the original scheme. We propose new asynchrony-tolerant schemes that maintain accuracy when synchronization is relaxed. The quality of the solution is shown to depend not only on the physical phenomena and numerical schemes, but also on the characteristics of the computing machine. A novel algorithm using remote memory access communications has been developed to demonstrate excellent scalability of the method for large-scale computing. Finally, we present a path to extend this method to solving complex multi-scale problems on Exascale machines.
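The stability-versus-accuracy trade-off can be demonstrated on a 1D heat equation split across two "processors", where the asynchronous variant reads the neighbor's interface value delayed by one step (a deterministic stand-in for random message delays; the grid, time step, and delay are invented for illustration).

```python
import numpy as np

def heat_step(u, left, right, r):
    """One FTCS update of a block's interior given its two halo values."""
    full = np.concatenate(([left], u, [right]))
    return u + r * (full[:-2] - 2.0 * u + full[2:])

def run(delay, n=40, r=0.25, steps=400):
    """Heat equation on (0,1) with zero Dirichlet BCs, split into two
    blocks; delay > 0 makes each block read its neighbor's interface
    value from `delay` steps ago, mimicking a late message."""
    h = 1.0 / (n + 1)
    x = h * np.arange(1, n + 1)
    u0 = np.sin(np.pi * x)
    L, R = u0[:n // 2].copy(), u0[n // 2:].copy()
    Lhist, Rhist = [L[-1]], [R[0]]
    for _ in range(steps):
        r_halo = Rhist[-1 - delay] if len(Rhist) > delay else Rhist[0]
        l_halo = Lhist[-1 - delay] if len(Lhist) > delay else Lhist[0]
        L = heat_step(L, 0.0, r_halo, r)
        R = heat_step(R, l_halo, 0.0, r)
        Lhist.append(L[-1]); Rhist.append(R[0])
    t = steps * r * h * h
    exact = np.exp(-np.pi**2 * t) * np.sin(np.pi * x)
    return np.max(np.abs(np.concatenate([L, R]) - exact))

err_sync = run(delay=0)
err_async = run(delay=1)
```

Both runs remain stable, but the delayed exchange leaves a larger error concentrated near the interface, which is the accuracy degradation the abstract describes.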
Recognition of partially occluded threat objects using the annealed Hopfield network
NASA Technical Reports Server (NTRS)
Kim, Jung H.; Yoon, Sung H.; Park, Eui H.; Ntuen, Celestine A.
1992-01-01
Recognition of partially occluded objects has been an important issue in airport security because occlusion causes significant problems in identifying and locating objects during baggage inspection. The neural network approach is suitable for these problems in the sense that the inherent parallelism of neural networks pursues many hypotheses in parallel, resulting in high computation rates. Moreover, they provide a greater degree of robustness or fault tolerance than conventional computers. The annealed Hopfield network, which is derived from mean field annealing (MFA), has been developed to find global solutions of a nonlinear system. In that study, it was proven that the system temperature of MFA is equivalent to the gain of the sigmoid function of a Hopfield network. In our early work, we developed the hybrid Hopfield network (HHN) for fast and reliable matching. However, HHN does not guarantee global solutions and yields false matching under heavily occluded conditions because HHN is by its nature dependent on initial states. In this paper, we present the annealed Hopfield network (AHN) for occluded object matching problems. In AHN, mean field theory is applied to the hybrid Hopfield network in order to improve the computational complexity of the annealed Hopfield network and provide reliable matching under heavily occluded conditions. AHN is slower than HHN. However, AHN provides near-global solutions without initial restrictions and produces less false matching than HHN. In conclusion, a new algorithm based upon a neural network approach was developed to demonstrate the feasibility of the automated inspection of threat objects from x-ray images. The robustness of the algorithm is demonstrated by identifying occluded target objects with a large tolerance in their features.
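A compact illustration of the mean-field annealing idea (temperature as the inverse of the sigmoid gain) on an associative-recall task. This is a generic Hopfield demonstration, not the paper's matching network; the pattern sizes, bias strength, and annealing schedule are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 64
patterns = rng.choice([-1.0, 1.0], size=(3, N))

# Hebbian weight matrix, zero self-coupling.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def mfa_recall(W, h, T_hi=2.0, T_lo=0.05, n_temps=25, sweeps=30):
    """Mean-field annealing: iterate v = tanh((W v + h) / T) while the
    temperature T (inverse sigmoid gain) is lowered, so the state
    settles gradually into a deep minimum rather than a nearby shallow
    one."""
    v = np.zeros(len(h))
    for T in np.geomspace(T_hi, T_lo, n_temps):
        for _ in range(sweeps):
            v = np.tanh((W @ v + h) / T)
    return v

# Cue: stored pattern 0 with 8 of 64 bits flipped, applied as an
# external bias field rather than as the initial state.
cue = patterns[0].copy()
cue[rng.choice(N, size=8, replace=False)] *= -1.0
recalled = np.sign(mfa_recall(W, 0.4 * cue))
```

The annealed relaxation corrects most or all of the corrupted bits, whereas a high fixed gain (i.e., no annealing) is the regime in which initial-state dependence causes false matches.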
NASA Astrophysics Data System (ADS)
den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.
2015-10-01
Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier to the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we present the two main components of automatic calibration. The first is the intra-camera geometry estimation, which leads to an estimate of the tilt angle, focal length and camera height, and is important for the conversion from pixels to meters and vice versa. The second is the inter-camera topology inference, which leads to an estimate of the distance between cameras, and is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
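The pixel-to-meter conversion from tilt angle, focal length, and camera height can be sketched with a flat-ground pinhole model (a textbook relation, not necessarily the paper's exact parameterization):

```python
import math

def ground_distance(y_pix, f_pix, tilt, cam_height):
    """Horizontal ground distance (m) to the point imaged y_pix pixels
    below the principal point, for a pinhole camera at cam_height
    meters, tilted `tilt` radians below the horizontal, with focal
    length f_pix in pixels; assumes a flat ground plane."""
    return cam_height / math.tan(tilt + math.atan2(y_pix, f_pix))

def image_offset(d, f_pix, tilt, cam_height):
    """Inverse mapping: vertical pixel offset of a ground point at
    horizontal distance d."""
    return f_pix * math.tan(math.atan2(cam_height, d) - tilt)

# Round trip: project a ground point, then recover its distance.
d_est = ground_distance(image_offset(12.0, 1000.0, 0.3, 4.0), 1000.0, 0.3, 4.0)
```

Once the three intrinsic/extrinsic quantities are estimated from pedestrian detections, relations of this kind convert per-camera pixel measurements into metric positions.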
Betts, S. D.; King, J.
1998-01-01
Off-pathway intermolecular interactions between partially folded polypeptide chains often compete with correct intramolecular interactions, resulting in self-association of folding intermediates into the inclusion body state. Intermediates for both productive folding and off-pathway aggregation of the parallel beta-coil tailspike trimer of phage P22 have been identified in vivo and in vitro using native gel electrophoresis in the cold. Aggregation of folding intermediates was suppressed when refolding was initiated and allowed to proceed for a short period at 0 degrees C prior to warming to 20 degrees C. Yields of refolded tailspike trimers exceeding 80% were obtained using this temperature-shift procedure, first described by Xie and Wetlaufer (1996, Protein Sci 5:517-523). We interpret this as due to stabilization of the thermolabile monomeric intermediate at the junction between productive folding and off-pathway aggregation. Partially folded monomers, a newly identified dimer, and the protrimer folding intermediates were populated in the cold. These species were electrophoretically distinguished from the multimeric intermediates populated on the aggregation pathway. The productive protrimer intermediate is disulfide bonded (Robinson AS, King J, 1997, Nat Struct Biol 4:450-455), while the multimeric aggregation intermediates are not disulfide bonded. The partially folded dimer appears to be a precursor to the disulfide-bonded protrimer. The results support a model in which the junctional partially folded monomeric intermediate acquires resistance to aggregation in the cold by folding further to a conformation that is activated for correct recognition and subunit assembly. PMID:9684883
Henkelmann, Ralf; Schneider, Sebastian; Müller, Daniel; Gahr, Ralf; Josten, Christoph; Böhme, Jörg
2017-03-14
Partial or complete immobilization leads to different adjustment processes such as a higher risk of muscle atrophy or a decrease in general performance. The present study is designed to prove the efficacy of the anti-gravity treadmill (alter G®) compared to a standard rehabilitation protocol in patients with tibial plateau (group 1) or ankle fractures (group 2) with six weeks of partial weight bearing of 20 kg. This prospective randomized study will include a total of 60 patients for each group according to predefined inclusion and exclusion criteria. 1:1 randomization will be performed centrally via fax supported by the Clinical Trial Centre Leipzig (ZKS Leipzig). Patients in the treatment arm will be treated with an anti-gravity treadmill (alter G®) instead of physiotherapy. The protocol is designed parallel to standard physiotherapy, with a frequency of two to three treadmill training sessions per week, each with a duration of 20 min, for six weeks. To date, no published randomized controlled trial with an anti-gravity treadmill is available. The findings of this study can help to modify the rehabilitation of patients with partial weight bearing due to their injury or postoperative protocol. It will show whether an anti-gravity treadmill is useful in the rehabilitation of those patients. Further ongoing studies will identify different indications for an anti-gravity treadmill. Thus, in connection with those studies, a more valid statement regarding safety and efficacy will be possible. NCT02790229 registered on May 29, 2016.
Vanneste, Sven; De Ridder, Dirk
2012-01-01
Tinnitus is the perception of a sound in the absence of an external sound source. It is characterized by sensory components such as the perceived loudness, the lateralization, the tinnitus type (pure tone, noise-like) and associated emotional components, such as distress and mood changes. Source localization of quantitative electroencephalography (qEEG) data demonstrate the involvement of auditory brain areas as well as several non-auditory brain areas such as the anterior cingulate cortex (dorsal and subgenual), auditory cortex (primary and secondary), dorsal lateral prefrontal cortex, insula, supplementary motor area, orbitofrontal cortex (including the inferior frontal gyrus), parahippocampus, posterior cingulate cortex and the precuneus, in different aspects of tinnitus. Explaining these non-auditory brain areas as constituents of separable subnetworks, each reflecting a specific aspect of the tinnitus percept increases the explanatory power of the non-auditory brain areas involvement in tinnitus. Thus, the unified percept of tinnitus can be considered an emergent property of multiple parallel dynamically changing and partially overlapping subnetworks, each with a specific spontaneous oscillatory pattern and functional connectivity signature. PMID:22586375
NASA Technical Reports Server (NTRS)
Murray, G. W.; Bohning, O. D.; Kinoshita, R. Y.; Becker, F. J.
1979-01-01
The results are summarized of a program to demonstrate the feasibility of Bubble Domain Memory Technology as a mass memory medium for spacecraft applications. The design, fabrication and test of a partially populated 10^8-bit Data Recorder using 100 Kbit serial bubble memory chips is described. Design tradeoffs, design approach and performance are discussed. This effort resulted in a 10^8-bit recorder with a volume of 858.6 cu in and a weight of 47.2 pounds. The recorder is plug-reconfigurable, having the capability of operating as one, two or four independent serial channel recorders or as a single sixteen-bit-byte parallel input recorder. Data rates up to 1.2 Mb/s in the serial mode and 2.4 Mb/s in the parallel mode may be supported. Fabrication and test of the recorder demonstrated the basic feasibility of Bubble Domain Memory technology for such applications. Test results indicate the need for improvement in memory element operating temperature range and detector performance.
Fine Structure in Helium-like Fluorine by Fast-Beam Laser Spectroscopy
NASA Astrophysics Data System (ADS)
Myers, E. G.; Thompson, J. K.; Silver, J. D.
1998-05-01
With the aim of providing an additional precise test of higher-order corrections to high precision calculations of fine structure in helium and helium-like ions (T. Zhang, Z.-C. Yan and G.W.F. Drake, Phys. Rev. Lett. 77, 1715 (1996)), a measurement of the 2^3P_2,F - 2^3P_1,F' fine structure in ^19F^7+ is in progress. The method involves Doppler-tuned laser spectroscopy using a CO2 laser on a foil-stripped fluorine ion beam. We aim to achieve a higher precision, compared to an earlier measurement (E.G. Myers, P. Kuske, H.J. Andrae, I.A. Armour, H.A. Klein, J.D. Silver, and E. Traebert, Phys. Rev. Lett. 47, 87 (1981)), by using laser beams parallel and anti-parallel to the ion beam, to obtain partial cancellation of the Doppler shift (J.K. Thompson, D.J.H. Howie and E.G. Myers, Phys. Rev. A 57, 180 (1998)). A calculation of the hyperfine structure, allowing for relativistic, QED and nuclear size effects, will be required to obtain the "hyperfine-free" fine structure interval from the measurements.
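The parallel/anti-parallel trick can be summarized for the idealized collinear geometry (a textbook sketch; in the actual experiment the cancellation is only partial because the CO2 laser lines are fixed and the ion velocity is tuned). Writing nu_p and nu_a for the lab-frame laser frequencies resonant in the co- and counter-propagating configurations:

```latex
% Ion-frame resonance conditions, beta = v/c, gamma = (1 - beta^2)^{-1/2}:
\nu_0 = \gamma\,\nu_p\,(1-\beta), \qquad \nu_0 = \gamma\,\nu_a\,(1+\beta)
% Multiplying the two conditions eliminates beta entirely:
\nu_0^2 = \gamma^2 (1-\beta^2)\,\nu_p\,\nu_a = \nu_p\,\nu_a
\quad\Longrightarrow\quad
\nu_0 = \sqrt{\nu_p\,\nu_a}
```

so in this idealized case the geometric mean of the two resonant laser frequencies is Doppler-free to all orders in beta, independent of the beam velocity.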
NASA Astrophysics Data System (ADS)
Wang, Yihan; Lu, Tong; Wan, Wenbo; Liu, Lingling; Zhang, Songhe; Li, Jiao; Zhao, Huijuan; Gao, Feng
2018-02-01
To fully realize the potential of photoacoustic tomography (PAT) in preclinical and clinical applications, rapid measurements and robust reconstructions are needed. Sparse-view measurements have been adopted effectively to accelerate data acquisition. However, since reconstruction from sparse-view sampling data is challenging, both an effective measurement scheme and an appropriate reconstruction must be considered together. In this study, we present an iterative sparse-view PAT reconstruction scheme in which a virtual parallel-projection concept, matched to the proposed measurement condition, helps realize the compressive-sensing character of the reconstruction, while spatially adaptive filtering, which exploits the a priori information of mutually similar blocks in natural images, effectively recovers the partially unknown coefficients in the transformed domain. As a result, sparse-view PAT images can be reconstructed with higher quality than results obtained by the universal back-projection (UBP) algorithm in the same sparse-view cases. The proposed approach has been validated by simulation experiments and exhibits desirable performance in image fidelity even with a small number of measuring positions.
Cell-to-cell variation and specialization in sugar metabolism in clonal bacterial populations
Schreiber, Frank; Dal Co, Alma; Kiviet, Daniel J.; Littmann, Sten
2017-01-01
While we have good understanding of bacterial metabolism at the population level, we know little about the metabolic behavior of individual cells: do single cells in clonal populations sometimes specialize on different metabolic pathways? Such metabolic specialization could be driven by stochastic gene expression and could provide individual cells with growth benefits of specialization. We measured the degree of phenotypic specialization in two parallel metabolic pathways, the assimilation of glucose and arabinose. We grew Escherichia coli in chemostats, and used isotope-labeled sugars in combination with nanometer-scale secondary ion mass spectrometry and mathematical modeling to quantify sugar assimilation at the single-cell level. We found large variation in metabolic activities between single cells, both in absolute assimilation and in the degree to which individual cells specialize in the assimilation of different sugars. Analysis of transcriptional reporters indicated that this variation was at least partially based on cell-to-cell variation in gene expression. Metabolic differences between cells in clonal populations could potentially reduce metabolic incompatibilities between different pathways, and increase the rate at which parallel reactions can be performed. PMID:29253903
Scalable Nonlinear Solvers for Fully Implicit Coupled Nuclear Fuel Modeling. Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Xiao-Chuan; Keyes, David; Yang, Chao
2014-09-29
The focus of the project is on the development and customization of highly scalable, domain decomposition based preconditioning techniques for the numerical solution of nonlinear, coupled systems of partial differential equations (PDEs) arising from nuclear fuel simulations. These high-order PDEs represent multiple interacting physical fields (for example, heat conduction, oxygen transport, and solid deformation), each modeled by some form of Cahn-Hilliard and/or Allen-Cahn equation. Most existing approaches involve a careful splitting of the fields and the use of field-by-field iterations to obtain a solution of the coupled problem. Such approaches have advantages, such as ease of implementation since only single-field solvers are needed, but also exhibit disadvantages. For example, certain nonlinear interactions between the fields may not be fully captured, and for unsteady problems, stable time integration schemes are difficult to design. In addition, when implemented on large-scale parallel computers, the sequential nature of the field-by-field iterations substantially reduces parallel efficiency. To overcome these disadvantages, fully coupled approaches have been investigated in order to obtain full physics simulations.
Radiation torque on an absorptive spherical drop centered on an acoustic helicoidal Bessel beam
NASA Astrophysics Data System (ADS)
Zhang, Likun; Marston, Philip L.
2009-11-01
Circularly polarized electromagnetic waves carry axial angular momentum, and analysis shows that the axial radiation torque on an illuminated sphere is proportional to the power absorbed by the sphere [1]. Helicoidal acoustic beams also carry axial angular momentum, and absorption of such a beam should also produce an axial radiation torque [2]. In the present work the acoustic radiation torque on solid spheres and spherical drops centered on acoustic helicoidal Bessel beams is examined. The torque is predicted to be proportional to the ratio of the absorbed power to the acoustic frequency. Depending on the beam helicity, the torque is parallel or anti-parallel to the beam axis. The analysis uses a relation between the scattering and the partial wave coefficients for a sphere in a helicoidal Bessel beam. Calculations suggest that beams with a low topological charge are more efficient for generating torques on solid spheres. [1] P. L. Marston and J. H. Crichton, Phys. Rev. A 30, 2508-2516 (1984). [2] B. T. Hefner and P. L. Marston, J. Acoust. Soc. Am. 106, 3313-3316 (1999).
Dax1 and Nanog act in parallel to stabilize mouse embryonic stem cells and induced pluripotency
Zhang, Junlei; Liu, Gaoke; Ruan, Yan; Wang, Jiali; Zhao, Ke; Wan, Ying; Liu, Bing; Zheng, Hongting; Peng, Tao; Wu, Wei; He, Ping; Hu, Fu-Quan; Jian, Rui
2014-01-01
Nanog expression is heterogeneous and dynamic in embryonic stem cells (ESCs). However, the mechanism for stabilizing pluripotency during the transitions between Nanog-high and Nanog-low states is not well understood. Here we report that Dax1 acts in parallel with Nanog to regulate mouse ESC (mESC) identity. Dax1 stable-knockdown mESCs are predisposed towards differentiation but do not lose pluripotency, whereas Dax1 overexpression supports LIF-independent self-renewal. Although partially complementary, Dax1 and Nanog function independently and cannot replace one another. Both are required for full reprogramming to induced pluripotency. Importantly, Dax1 is indispensable for self-renewal of Nanog-low mESCs. Moreover, we report that Dax1 prevents extra-embryonic endoderm (ExEn) commitment by directly repressing Gata6 transcription. Dax1 may also mediate inhibition of trophectoderm differentiation independently of, or as a downstream effector of, Oct4. These findings establish a basal role of Dax1 in maintaining pluripotency during the state transition of mESCs and somatic cell reprogramming. PMID:25284313
Avoiding and tolerating latency in large-scale next-generation shared-memory multiprocessors
NASA Technical Reports Server (NTRS)
Probst, David K.
1993-01-01
A scalable solution to the memory-latency problem is necessary to prevent the large latencies of synchronization and memory operations inherent in large-scale shared-memory multiprocessors from limiting performance. We distinguish latency avoidance and latency tolerance. Latency is avoided when data is brought to nearby locales for future reference. Latency is tolerated when references are overlapped with other computation. Latency-avoiding locales include processor registers, data caches used temporally, and nearby memory modules. Tolerating communication latency requires parallelism, allowing the overlap of communication and computation. Latency-tolerating techniques include vector pipelining, data caches used spatially, prefetching in various forms, and multithreading in various forms. Relaxing the consistency model permits increased use of avoidance and tolerance techniques. Each model is a mapping from the program text to sets of partial orders on program operations; it is a convention about which temporal precedences among program operations are necessary. Information about temporal locality and parallelism constrains the use of avoidance and tolerance techniques. Suitable architectural primitives and compiler technology are required to exploit the increased freedom to reorder and overlap operations in relaxed models.
Trace element evaluation of a suite of rocks from Reunion Island, Indian Ocean
Zielinski, R.A.
1975-01-01
Reunion Island consists of an olivine-basalt shield capped by a series of flows and intrusives ranging from hawaiite through trachyte. Eleven rocks representing the total compositional sequence have been analyzed for U, Th and REE. Eight of the rocks (group 1) have positive-slope, parallel, chondrite-normalized REE fractionation patterns. Using a computer model, the major element compositions of group 1 whole rocks and observed phenocrysts were used to predict the crystallization histories of increasingly residual liquids, and allowed semi-quantitative verification of origin by fractional crystallization of the olivine-basalt parent magma. Results were combined with mineral-liquid distribution coefficient data to predict trace element abundances, and existing data on Cr, Ni, Sr and Ba were also successfully incorporated in the model. The remaining three rocks (group 2) have nonuniform positive-slope REE fractionation patterns not parallel to group 1 patterns. Rare earth fractionation in a syenite is explained by partial melting of a source rich in clinopyroxene and/or hornblende. The other two rocks of group 2 are explained as hybrids resulting from mixing of syenite and magmas of group 1.
Parallel hyperbolic PDE simulation on clusters: Cell versus GPU
NASA Astrophysics Data System (ADS)
Rostrup, Scott; De Sterck, Hans
2010-12-01
Increasingly, high-performance computing is looking towards data-parallel computational devices to enhance computational performance. Two technologies that have received significant attention are IBM's Cell Processor and NVIDIA's CUDA programming model for graphics processing unit (GPU) computing. In this paper we investigate the acceleration of parallel hyperbolic partial differential equation simulation on structured grids with explicit time integration on clusters with Cell and GPU backends. The message passing interface (MPI) is used for communication between nodes at the coarsest level of parallelism. Optimizations of the simulation code at the several finer levels of parallelism that the data-parallel devices provide are described in terms of data layout, data flow and data-parallel instructions. Optimized Cell and GPU performance are compared with reference code performance on a single x86 central processing unit (CPU) core in single and double precision. We further compare the CPU, Cell and GPU platforms on a chip-to-chip basis, and compare performance on single cluster nodes with two CPUs, two Cell processors or two GPUs in a shared memory configuration (without MPI). We finally compare performance on clusters with 32 CPUs, 32 Cell processors, and 32 GPUs using MPI. Our GPU cluster results use NVIDIA Tesla GPUs with GT200 architecture, but some preliminary results on recently introduced NVIDIA GPUs with the next-generation Fermi architecture are also included. This paper provides computational scientists and engineers who are considering porting their codes to accelerator environments with insight into how structured grid based explicit algorithms can be optimized for clusters with Cell and GPU accelerators. It also provides insight into the speed-up that may be gained on current and future accelerator architectures for this class of applications. 
Program summary
Program title: SWsolver
Catalogue identifier: AEGY_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGY_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPL v3
No. of lines in distributed program, including test data, etc.: 59 168
No. of bytes in distributed program, including test data, etc.: 453 409
Distribution format: tar.gz
Programming language: C, CUDA
Computer: Parallel computing clusters. Individual compute nodes may consist of x86 CPU, Cell processor, or x86 CPU with attached NVIDIA GPU accelerator.
Operating system: Linux
Has the code been vectorised or parallelized?: Yes. Tested on 1-128 x86 CPU cores, 1-32 Cell processors, and 1-32 NVIDIA GPUs.
RAM: Tested on problems requiring up to 4 GB per compute node.
Classification: 12
External routines: MPI, CUDA, IBM Cell SDK
Nature of problem: MPI-parallel simulation of the shallow water equations using a high-resolution 2D hyperbolic equation solver on regular Cartesian grids for x86 CPU, Cell processor, and NVIDIA GPU using CUDA.
Solution method: SWsolver provides three implementations of a high-resolution 2D shallow water equation solver on regular Cartesian grids, for CPU, Cell processor, and NVIDIA GPU. Each implementation uses MPI to divide work across a parallel computing cluster.
Additional comments: Sub-program numdiff is used for the test run.
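SWsolver itself is written in C and CUDA; as a language-neutral illustration of the kind of explicit structured-grid update such codes parallelize with MPI, here is a minimal single-node Python sketch of one shallow water time step. It uses a first-order Lax-Friedrichs flux on a periodic 1D grid, which is an assumption for illustration only, not SWsolver's actual high-resolution 2D scheme:

```python
import numpy as np

def shallow_water_step(h, hu, dx, dt, g=9.81):
    """One Lax-Friedrichs update of the 1D shallow water equations
    on a periodic grid. Illustrative sketch only, not SWsolver's
    actual high-resolution scheme."""
    u = np.stack([h, hu])                            # conserved variables
    f = np.stack([hu, hu**2 / h + 0.5 * g * h**2])   # physical fluxes
    # periodic left/right neighbors via np.roll
    up, um = np.roll(u, -1, axis=1), np.roll(u, 1, axis=1)
    fp, fm = np.roll(f, -1, axis=1), np.roll(f, 1, axis=1)
    u_new = 0.5 * (up + um) - 0.5 * (dt / dx) * (fp - fm)
    return u_new[0], u_new[1]

# dam-break initial condition: deeper water on the left half
n = 100
h = np.where(np.arange(n) < n // 2, 2.0, 1.0)
hu = np.zeros(n)
for _ in range(10):
    h, hu = shallow_water_step(h, hu, dx=0.1, dt=0.005)
```

In an MPI version, each rank would own a contiguous slab of the grid and the `np.roll` neighbor accesses would become halo exchanges with the adjacent ranks, which is the coarsest level of parallelism the paper describes.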
Model-based vision using geometric hashing
NASA Astrophysics Data System (ADS)
Akerman, Alexander, III; Patton, Ronald
1991-04-01
The Geometric Hashing technique developed by the NYU Courant Institute has been applied to various automatic target recognition applications. In particular, I-MATH has extended the hashing algorithm to perform automatic target recognition of synthetic aperture radar (SAR) imagery. For this application, hashing is performed on the geometric locations of dominant scatterers. In addition to being a robust model-based matching algorithm, invariant under translation, scale, and 3D rotations of the target, hashing is of particular utility because it can still perform effective matching when the target is partially obscured. Moreover, hashing is well suited to a SIMD parallel processing architecture and is thus potentially implementable in real time.
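The core of geometric hashing can be sketched compactly: model points are re-expressed in coordinate frames defined by ordered basis pairs and quantized into a hash table, and scene points then vote for consistent bases. The following Python sketch is a similarity-invariant 2D variant; the function names and grid size are illustrative assumptions, not I-MATH's implementation. It also shows why matching survives partial occlusion: any surviving subset of points still votes for the correct basis.

```python
import numpy as np
from collections import defaultdict
from itertools import permutations

def normalize(points, b0, b1):
    """Express points in the frame defined by the ordered basis pair
    (b0, b1): invariant under translation, rotation, and scale."""
    origin = points[b0]
    axis = points[b1] - origin
    scale = np.linalg.norm(axis)
    c, s = axis / scale
    rot = np.array([[c, s], [-s, c]])   # rotates the basis axis onto +x
    return (points - origin) @ rot.T / scale

def build_table(model, grid=0.25):
    """Preprocessing: hash every point of the model under every basis."""
    table = defaultdict(set)
    for b0, b1 in permutations(range(len(model)), 2):
        for p in normalize(model, b0, b1):
            key = tuple(np.round(p / grid).astype(int))
            table[key].add((b0, b1))
    return table

def vote(table, scene, b0, b1, grid=0.25):
    """Recognition: scene points vote for model bases they are consistent with."""
    votes = defaultdict(int)
    for p in normalize(scene, b0, b1):
        key = tuple(np.round(p / grid).astype(int))
        for basis in table[key]:
            votes[basis] += 1
    return votes

# a toy model and a rigidly transformed copy of it as the "scene"
model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
scene = model @ R.T + np.array([3.0, -1.0])

table = build_table(model)
votes = vote(table, scene, 0, 1)   # probe with scene points 0, 1 as the basis
```

The voting loop is independent across bins and probe bases, which is the property that makes the method amenable to SIMD parallel hardware, as the abstract notes.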
[Unusual aspects in the creative pictures of a retarded, ineducable boy].
Lehmann, W
1976-04-01
Drawings and sculptures by an 11-year-old retarded boy, who was not capable of schooling, are presented. His low level of verbal-logical thinking contrasts with his nearly age-appropriate capacity for visual differentiation. His pictorial creations are impressive in their accurate depiction of animals, reminiscent of the artistic figures of Paleolithic big-game hunters. Parallels with Paleolithic hunting magic, as characterized by Mirimanow, are obvious. The depiction of humans as active "head-footer" figures may be based on the undifferentiated body image of partially disabled children with impaired sensory perception.
Efficient implementation of a 3-dimensional ADI method on the iPSC/860
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van der Wijngaart, R.F.
1993-12-31
A comparison is made between several domain decomposition strategies for the solution of three-dimensional partial differential equations on a MIMD distributed memory parallel computer. The grids used are structured, and the numerical algorithm is ADI. Important implementation issues regarding load balancing, storage requirements, network latency, and overlap of computations and communications are discussed. Results of the solution of the three-dimensional heat equation on the Intel iPSC/860 are presented for the three most viable methods. It is found that the Bruno-Cappello decomposition delivers optimal computational speed through an almost complete elimination of processor idle time, while providing good memory efficiency.
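As a concrete reminder of the numerical core being decomposed across processors, here is a minimal serial Python sketch of one ADI step. It uses Peaceman-Rachford splitting for the 2D heat equation with a Thomas tridiagonal solver; the paper itself treats the 3D case on the iPSC/860, so this is an illustrative reduction, not the paper's implementation:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, r):
    """One Peaceman-Rachford ADI step for the 2D heat equation with
    homogeneous Dirichlet boundaries (u holds interior nodes only);
    r = alpha * dt / h^2."""
    n, m = u.shape
    half = np.empty_like(u)
    a = np.full(n, -r / 2); b = np.full(n, 1 + r); c = np.full(n, -r / 2)
    # first half-step: implicit in x, explicit in y
    for j in range(m):
        rhs = (1 - r) * u[:, j]
        if j > 0:     rhs += r / 2 * u[:, j - 1]
        if j < m - 1: rhs += r / 2 * u[:, j + 1]
        half[:, j] = thomas(a, b, c, rhs)
    out = np.empty_like(u)
    a2 = np.full(m, -r / 2); b2 = np.full(m, 1 + r); c2 = np.full(m, -r / 2)
    # second half-step: implicit in y, explicit in x
    for i in range(n):
        rhs = (1 - r) * half[i, :]
        if i > 0:     rhs += r / 2 * half[i - 1, :]
        if i < n - 1: rhs += r / 2 * half[i + 1, :]
        out[i, :] = thomas(a2, b2, c2, rhs)
    return out

u = adi_step(np.ones((16, 16)), r=0.5)
```

The parallelization difficulty the paper addresses is visible here: each half-step sweeps lines in one grid direction, so a decomposition that keeps x-lines local makes the y-implicit sweep nonlocal, motivating strategies such as the Bruno-Cappello decomposition.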
Electron Heating at Kinetic Scales in Magnetosheath Turbulence
NASA Technical Reports Server (NTRS)
Chasapis, Alexandros; Matthaeus, W. H.; Parashar, T. N.; Lecontel, O.; Retino, A.; Breuillard, H.; Khotyaintsev, Y.; Vaivads, A.; Lavraud, B.; Eriksson, E.;
2017-01-01
We present a statistical study of coherent structures at kinetic scales, using data from the Magnetospheric Multiscale (MMS) mission in the Earth's magnetosheath. We implemented the multi-spacecraft partial variance of increments (PVI) technique to detect these structures, which are associated with intermittency at kinetic scales. We examine the properties of the electron heating occurring within such structures. We find that, statistically, structures with a high PVI index are regions of significant electron heating. We also focus on one such structure, a current sheet, which shows some signatures consistent with magnetic reconnection. Strong parallel electron heating coincides with whistler emissions at the edges of the current sheet.
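The single-spacecraft form of the PVI index is simple to compute: the magnitude of the vector field increment at a given lag, normalized by its rms value. The sketch below is an illustrative Python version of that single-time-series form, not the multi-spacecraft MMS variant used in the study:

```python
import numpy as np

def pvi(b, lag):
    """Partial variance of increments of a vector time series b
    (shape: n_samples x 3): |delta b| normalized by its rms value.
    Illustrative single-spacecraft form."""
    db = b[lag:] - b[:-lag]              # vector increments at the given lag
    mag = np.linalg.norm(db, axis=1)
    return mag / np.sqrt(np.mean(mag**2))

# synthetic series: weak noise with one sharp, current-sheet-like jump
rng = np.random.default_rng(0)
b = rng.normal(scale=0.1, size=(1000, 3))
b[500:, 0] += 5.0                        # embedded discontinuity
p = pvi(b, lag=1)
```

Thresholding `p` (e.g. PVI > 3) flags the strong, intermittent increments; in the synthetic example the index peaks at the embedded jump, which is the behavior exploited to detect coherent structures.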
Synthesis and degradation of nitrate reductase during the cell cycle of Chlorella sorokiniana
NASA Technical Reports Server (NTRS)
Velasco, P. J.; Tischner, R.; Huffaker, R. C.; Whitaker, J. R.
1989-01-01
Studies on the diurnal variations of nitrate reductase (NR) activity during the life cycle of synchronized Chlorella sorokiniana cells grown with a 7:5 light-dark cycle showed that the NADH:NR activity, as well as the NR partial activities NADH:cytochrome c reductase and reduced methyl viologen:NR, closely paralleled the appearance and disappearance of NR protein as shown by sodium dodecyl sulfate gel electrophoresis and immunoblots. Results of pulse-labeling experiments with [35S]methionine further confirmed that diurnal variations of the enzyme activities can be entirely accounted for by the concomitant synthesis and degradation of the NR protein.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiser, C.; Herdies, L.; McIntosh, L.
Higher plant mitochondria possess a cyanide-resistant, hydroxamate-sensitive alternative pathway of electron transport that does not conserve energy. Aging of potato tuber slices for 24 hours leads to the development of alternative pathway capacity. We have shown that a monoclonal antibody raised against the alternative pathway terminal oxidase of Sauromatum guttatum cross-reacts with a protein of similar size in aged potato slice mitochondria. This protein was partially purified and characterized by two-dimensional gel electrophoresis, and its relative levels parallel the rise in cyanide-resistant respiration. We are using a putative clone of the S. guttatum alternative oxidase gene to isolate the equivalent gene from potato and to examine its expression.
NASA Astrophysics Data System (ADS)
Martinetti, P.; Wallet, J.-C.; Amelino-Camelia, G.
2015-08-01
The conference Conceptual and Technical Challenges for Quantum Gravity at Sapienza University of Rome, from 8 to 12 September 2014, provided a beautiful opportunity for an encounter between different approaches and different perspectives on the quantum-gravity problem. It contributed to a higher level of shared knowledge among the quantum-gravity communities pursuing each specific research program. There were plenary talks on many different approaches, including in particular string theory, loop quantum gravity, spacetime noncommutativity, causal dynamical triangulations, asymptotic safety, and causal sets. Contributions from the perspective of philosophy of science were also welcomed. In addition, several parallel sessions were organized. The present volume collects contributions from the Noncommutative Geometry and Quantum Gravity parallel session (partially funded by CNRS PEPS/PTI ``Metric aspect of noncommutative geometry: from Monge to Higgs''), with additional invited contributions from specialists in the field. Noncommutative geometry in its many incarnations appears at the crossroads of many research efforts in theoretical and mathematical physics: • from models of quantum space-time (with or without breaking of Lorentz symmetry) to loop gravity and string theory, • from early considerations on UV divergences in quantum field theory to recent models of gauge theories on noncommutative spacetime, • from Connes' description of the standard model of elementary particles to recent Pati-Salam-like extensions. This volume provides an overview of these various topics, interesting for the specialist as well as accessible to the newcomer.
Wilson, Robert L.; Frisz, Jessica F.; Hanafin, William P.; Carpenter, Kevin J.; Hutcheon, Ian D.; Weber, Peter K.; Kraft, Mary L.
2014-01-01
The local abundance of specific lipid species near a membrane protein is hypothesized to influence the protein’s activity. The ability to simultaneously image the distributions of specific protein and lipid species in the cell membrane would facilitate testing these hypotheses. Recent advances in imaging the distribution of cell membrane lipids with mass spectrometry have created the desire for membrane protein probes that can be simultaneously imaged with isotope labeled lipids. Such probes would enable conclusive tests of whether specific proteins co-localize with particular lipid species. Here, we describe the development of fluorine-functionalized colloidal gold immunolabels that facilitate the detection and imaging of specific proteins in parallel with lipids in the plasma membrane using high-resolution SIMS performed with a NanoSIMS. First, we developed a method to functionalize colloidal gold nanoparticles with a partially fluorinated mixed monolayer that permitted NanoSIMS detection and rendered the functionalized nanoparticles dispersible in aqueous buffer. Then, to allow for selective protein labeling, we attached the fluorinated colloidal gold nanoparticles to the nonbinding portion of antibodies. By combining these functionalized immunolabels with metabolic incorporation of stable isotopes, we demonstrate that influenza hemagglutinin and cellular lipids can be imaged in parallel using NanoSIMS. These labels enable a general approach to simultaneously imaging specific proteins and lipids with high sensitivity and lateral resolution, which may be used to evaluate predictions of protein co-localization with specific lipid species. PMID:22284327
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, S.C.; Hashida, T.; Takahashi, H.
1998-03-01
The fracture mode and crack propagation behavior of brittle fracture at 77 and 4 K in an 18Cr-18Mn-0.7N austenitic stainless steel were investigated using optical and scanning electron microscopy. The fracture path was examined by observing the side surface of a partially ruptured specimen. The relationship of the fracture facets to the microstructure was established by observing the fracture surface and the adjacent side surface simultaneously. Three kinds of fracture facets were identified at either temperature. The first is a smooth, curved intergranular fracture facet with characteristic parallel lines on it. The second is a fairly planar facet formed by parting along an annealing twin boundary, a real {111} plane. There are three sets of parallel lines on this facet, and lines in different sets intersect at 60 deg. The third is a lamellar transgranular fracture facet with sets of parallel steps on it. Fracture propagated by the formation of microcracks on grain boundaries and annealing twin boundaries, and coalescence of these cracks. The observations suggest that the ease of crack initiation and propagation along grain boundaries and annealing twin boundaries may be the main reason for the low-temperature brittleness of this steel. A mechanism for grain boundary cracking, including annealing twin boundary parting, is discussed based on the stress concentration induced by impinging planar deformation structures on the grain boundaries.
Internal viscoelastic loading in cat papillary muscle.
Chiu, Y L; Ballou, E W; Ford, L E
1982-01-01
The passive mechanical properties of myocardium were defined by measuring force responses to rapid length ramps applied to unstimulated cat papillary muscles. The immediate force changes following these ramps recovered partially to their initial value, suggesting a series combination of viscous element and spring. Because the stretched muscle can bear force at rest, the viscous element must be in parallel with an additional spring. The instantaneous extension-force curves measured at different lengths were nonlinear, and could be made to superimpose by a simple horizontal shift. This finding suggests that the same spring was being measured at each length, and that this spring was in series with both the viscous element and its parallel spring (Voigt configuration), so that the parallel spring is held nearly rigid by the viscous element during rapid steps. The series spring in the passive muscle could account for most of the series elastic recoil in the active muscle, suggesting that the same spring is in series with both the contractile elements and the viscous element. It is postulated that the viscous element might be coupled to the contractile elements by a compliance, so that the load imposed on the contractile elements by the passive structures is viscoelastic rather than purely viscous. Such a viscoelastic load would give the muscle a length-independent, early diastolic restoring force. The possibility is discussed that the length-independent restoring force would allow some of the energy liberated during active shortening to be stored and released during relaxation. PMID:7171707
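The three-element arrangement inferred above, a series spring coupled to a Voigt element (a spring in parallel with a dashpot), is easy to simulate: the force jumps during a rapid length ramp and then partially recovers, as observed. The following Python sketch uses illustrative parameter values, not values fitted to cat papillary muscle:

```python
# Standard linear solid: series spring k_s in series with a Voigt element
# (spring k_p in parallel with dashpot c). Parameter values are
# illustrative assumptions, not fitted to cat papillary muscle.
k_s, k_p, c = 10.0, 2.0, 1.0
dt, ramp_end, hold_until = 1e-4, 0.01, 0.5

x_p = 0.0             # extension of the Voigt element
forces = []
t = 0.0
while t < hold_until:
    L = 0.1 * min(t / ramp_end, 1.0)   # rapid length ramp to 0.1, then hold
    F = k_s * (L - x_p)                # force carried by the series spring
    x_p += dt * (F - k_p * x_p) / c    # dashpot lets the Voigt element creep
    forces.append(F)
    t += dt

peak, final = max(forces), forces[-1]
# the force peaks near the end of the ramp, then relaxes partially
# toward the steady-state value k_s * k_p / (k_s + k_p) * 0.1
```

During the rapid ramp the dashpot barely moves, so the series spring carries nearly the full step (the "held nearly rigid" behavior in the text); the subsequent creep of the Voigt element produces the partial force recovery.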
PETSc Users Manual Revision 3.3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.; Brown, J.; Buschelman, K.
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication. PETSc includes an expanding suite of parallel linear and nonlinear equation solvers and time integrators that may be used in application codes written in Fortran, C, C++, Python, and MATLAB (sequential). PETSc provides many of the mechanisms needed within parallel application codes, such as parallel matrix and vector assembly routines. The library is organized hierarchically, enabling users to employ the level of abstraction that is most appropriate for a particular problem. By using techniques of object-oriented programming, PETSc provides enormous flexibility for users. PETSc is a sophisticated set of software tools; as such, for some users it initially has a much steeper learning curve than a simple subroutine library. In particular, for individuals without some computer science background, experience programming in C, C++, or Fortran, and experience using a debugger such as gdb or dbx, it may require a significant amount of time to take full advantage of the features that enable efficient software use. However, the power of the PETSc design and the algorithms it incorporates may make the efficient implementation of many application codes simpler than “rolling them” yourself. For many tasks a package such as MATLAB is often the best tool; PETSc is not intended for the classes of problems for which effective MATLAB code can be written. PETSc also has a MATLAB interface, so portions of your code can be written in MATLAB to “try out” the PETSc solvers.
The resulting code will not, however, be scalable, because MATLAB is currently inherently not scalable. PETSc should not be used to attempt to provide a “parallel linear solver” in an otherwise sequential code. Certainly not all parts of a previously sequential code need be parallelized, but the matrix generation portion must be parallelized to expect any kind of reasonable performance. Do not expect to generate your matrix sequentially and then “use PETSc” to solve the linear system in parallel. Since PETSc is under continued development, small changes in usage and calling sequences of routines will occur. PETSc is supported; see the web site http://www.mcs.anl.gov/petsc for information on contacting support. At http://www.mcs.anl.gov/petsc/publications a list of publications and web sites that feature work involving PETSc may be found. We welcome any reports of corrections for this document.
PETSc Users Manual Revision 3.4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.; Brown, J.; Buschelman, K.
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication. PETSc includes an expanding suite of parallel linear and nonlinear equation solvers and time integrators that may be used in application codes written in Fortran, C, C++, Python, and MATLAB (sequential). PETSc provides many of the mechanisms needed within parallel application codes, such as parallel matrix and vector assembly routines. The library is organized hierarchically, enabling users to employ the level of abstraction that is most appropriate for a particular problem. By using techniques of object-oriented programming, PETSc provides enormous flexibility for users. PETSc is a sophisticated set of software tools; as such, for some users it initially has a much steeper learning curve than a simple subroutine library. In particular, for individuals without some computer science background, experience programming in C, C++, or Fortran, and experience using a debugger such as gdb or dbx, it may require a significant amount of time to take full advantage of the features that enable efficient software use. However, the power of the PETSc design and the algorithms it incorporates may make the efficient implementation of many application codes simpler than “rolling them” yourself. For many tasks a package such as MATLAB is often the best tool; PETSc is not intended for the classes of problems for which effective MATLAB code can be written. PETSc also has a MATLAB interface, so portions of your code can be written in MATLAB to “try out” the PETSc solvers.
The resulting code will not, however, be scalable, because MATLAB is currently inherently not scalable. PETSc should not be used to attempt to provide a “parallel linear solver” in an otherwise sequential code. Certainly not all parts of a previously sequential code need be parallelized, but the matrix generation portion must be parallelized to expect any kind of reasonable performance. Do not expect to generate your matrix sequentially and then “use PETSc” to solve the linear system in parallel. Since PETSc is under continued development, small changes in usage and calling sequences of routines will occur. PETSc is supported; see the web site http://www.mcs.anl.gov/petsc for information on contacting support. At http://www.mcs.anl.gov/petsc/publications a list of publications and web sites that feature work involving PETSc may be found. We welcome any reports of corrections for this document.
PETSc Users Manual Revision 3.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.; Abhyankar, S.; Adams, M.
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication. PETSc includes an expanding suite of parallel linear, nonlinear equation solvers and time integrators that may be used in application codes written in Fortran, C, C++, Python, and MATLAB (sequential). PETSc provides many of the mechanisms neededmore » within parallel application codes, such as parallel matrix and vector assembly routines. The library is organized hierarchically, enabling users to employ the level of abstraction that is most appropriate for a particular problem. By using techniques of object-oriented programming, PETSc provides enormous flexibility for users. PETSc is a sophisticated set of software tools; as such, for some users it initially has a much steeper learning curve than a simple subroutine library. In particular, for individuals without some computer science background, experience programming in C, C++ or Fortran and experience using a debugger such as gdb or dbx, it may require a significant amount of time to take full advantage of the features that enable efficient software use. However, the power of the PETSc design and the algorithms it incorporates may make the efficient implementation of many application codes simpler than “rolling them” yourself. ;For many tasks a package such as MATLAB is often the best tool; PETSc is not intended for the classes of problems for which effective MATLAB code can be written. PETSc also has a MATLAB interface, so portions of your code can be written in MATLAB to “try out” the PETSc solvers. 
The resulting code will not be scalable, however, because MATLAB is currently inherently not scalable; and PETSc should not be used to attempt to provide a “parallel linear solver” in an otherwise sequential code. Certainly not all parts of a previously sequential code need be parallelized, but the matrix generation portion must be parallelized to expect any kind of reasonable performance. Do not expect to generate your matrix sequentially and then “use PETSc” to solve the linear system in parallel. Since PETSc is under continued development, small changes in usage and calling sequences of routines will occur. PETSc is supported; see the web site http://www.mcs.anl.gov/petsc for information on contacting support. At http://www.mcs.anl.gov/petsc/publications a list of publications and web sites that feature work involving PETSc may be found. We welcome any reports of corrections for this document.
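PETSc's KSP component provides Krylov-subspace solvers of the kind the manual describes. As a language-agnostic illustration — a toy sketch in pure Python, not PETSc itself; real applications would call PETSc through C, Fortran, or petsc4py — here is an unpreconditioned conjugate gradient solve of a small symmetric positive-definite system of the type that arises from discretized PDEs:

```python
# Toy conjugate gradient (CG) solver in pure Python, illustrating the kind
# of Krylov method PETSc's KSP component provides. Didactic only: PETSc
# would distribute the matrix and vectors across MPI ranks.

def cg(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive-definite A (dense row lists)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x (x = 0 initially)
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# 1D Laplacian-like SPD system, the kind arising from discretized PDEs.
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = cg(A, b)   # exact solution is [1, 1, 1]
```

The point of a library like PETSc is that the same solver invocation scales from this 3×3 system to distributed matrices across thousands of MPI ranks.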
Numerical Prediction of CCV in a PFI Engine using a Parallel LES Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ameen, Muhsin M; Mirzaeian, Mohsen; Millo, Federico
Cycle-to-cycle variability (CCV) is detrimental to IC engine operation and can lead to partial burn, misfire, and knock. Predicting CCV numerically is extremely challenging for two key reasons. Firstly, high-fidelity methods such as large eddy simulation (LES) are required to accurately resolve the in-cylinder turbulent flowfield both spatially and temporally. Secondly, CCV is experienced over long timescales and hence the simulations need to be performed for hundreds of consecutive cycles. Ameen et al. (Int. J. Eng. Res., 2017) developed a parallel perturbation model (PPM) approach to dissociate this long time-scale problem into several shorter timescale problems. The strategy is to perform multiple single-cycle simulations in parallel by effectively perturbing the initial velocity field based on the intensity of the in-cylinder turbulence. This strategy was demonstrated for a motored engine, and it was shown that the mean and variance of the in-cylinder flowfield were captured reasonably well by this approach. In the present study, this PPM approach is extended to simulate the CCV in a fired port-fuel-injected (PFI) SI engine. Two operating conditions are considered: a medium-CCV case corresponding to 2500 rpm and 16 bar BMEP, and a low-CCV case corresponding to 4000 rpm and 12 bar BMEP. The predictions from this approach are also shown to be similar to those of consecutive LES cycles. Both the consecutive and PPM LES cycles are observed to under-predict the variability in the early stage of combustion. The parallel approach slightly underpredicts the cyclic variability at all stages of combustion as compared to the consecutive LES cycles. However, it is shown that the parallel approach is able to predict the coefficient of variation (COV) of the in-cylinder pressure and burn-rate-related parameters with sufficient accuracy, and is also able to predict the qualitative trends in CCV with changing operating conditions.
The convergence of the statistics predicted by the PPM approach with respect to the number of consecutive cycles required for each parallel simulation is also investigated. It is shown that this new approach is able to give accurate predictions of the CCV in fired engines in less than one-tenth of the time required for the conventional approach of simulating consecutive engine cycles.
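The parallel-perturbation idea can be caricatured in a few lines of standard Python. Everything below is invented for illustration — the "cycle" model, the perturbation amplitude, and the numbers bear no relation to the LES solver in the abstract; it only shows the bookkeeping: N consecutive, dependent cycles versus N independent perturbed single cycles, each yielding a cycle statistic whose coefficient of variation (COV) is compared:

```python
# Cartoon of the parallel perturbation model (PPM) strategy: replace a long
# serial chain of dependent cycles with independent single-cycle runs whose
# initial state is perturbed. All model forms and constants are hypothetical.
import random
import statistics

def one_cycle(state, rng):
    """A stand-in 'cycle': the peak value depends on the initial state."""
    return 50.0 + 10.0 * state + rng.gauss(0.0, 1.0)

def consecutive(n, seed=0):
    """Serial approach: state carries over from one cycle to the next."""
    rng = random.Random(seed)
    state, peaks = 1.0, []
    for _ in range(n):
        peaks.append(one_cycle(state, rng))
        state = 0.5 * state + rng.gauss(0.0, 0.3)
    return peaks

def parallel_perturbed(n, seed=0):
    """PPM-style approach: n independent runs with perturbed initial fields."""
    peaks = []
    for k in range(n):                     # each k is an independent worker
        rng = random.Random(seed + 1000 + k)
        state = 1.0 + rng.gauss(0.0, 0.4)  # perturbed initial state
        peaks.append(one_cycle(state, rng))
    return peaks

def cov(xs):
    """Coefficient of variation: std / mean."""
    return statistics.stdev(xs) / statistics.mean(xs)

cov_serial = cov(consecutive(50))
cov_parallel = cov(parallel_perturbed(50))
```

The independent runs can execute concurrently, which is the source of the order-of-magnitude wall-clock saving reported above.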
Magma-assisted strain localization in an orogen-parallel transcurrent shear zone of southern Brazil
NASA Astrophysics Data System (ADS)
Tommasi, Andréa; Vauchez, Alain; Fernandes, Luis A. D.; Porcher, Carla C.
1994-04-01
In a lithospheric-scale, orogen-parallel transcurrent shear zone of the Pan-African Dom Feliciano belt of southern Brazil, two successive generations of magmas, an early calc-alkaline and a late peraluminous, have been emplaced during deformation. Microstructures show that these granitoids experienced a progressive deformation from magmatic to solid state under decreasing temperature conditions. Magmatic deformation is indicated by the coexistence of aligned K-feldspar, plagioclase, micas, and/or tourmaline with undeformed quartz. Submagmatic deformation is characterized by strain features, such as fractures, lattice bending, or replacement reactions affecting only the early crystallized phases. High-temperature solid-state deformation is characterized by extensive grain boundary migration in quartz, myrmekitic K-feldspar replacement, and dynamic recrystallization of both K-feldspar and plagioclase. Decreasing temperature during solid-state deformation is inferred from changes in quartz crystallographic fabrics, decrease in grain size of recrystallized feldspars, and lower Ti amount in recrystallized biotites. Final low-temperature deformation is characterized by feldspar replacement by micas. The geochemical evolution of the synkinematic magmatism, from calc-alkaline metaluminous granodiorites with intermediate 87Sr/86Sr initial ratio to peraluminous granites with very high 87Sr/86Sr initial ratio, suggests an early lower crustal source or a mixed mantle/crustal source, followed by a middle to upper crustal source for the melts. Shearing in lithospheric faults may induce partial melting in the lower crust by shear heating in the upper mantle, but, whatever the process initiating partial melting, lithospheric transcurrent shear zones may collect melt at different depths. Because they enhance the vertical permeability of the crust, these zones may then act as heat conductors (by advection), promoting an upward propagation of partial melting in the crust. 
Synkinematic granitoids localize most, if not all, deformation in the studied shear zone. The regional continuity and the pervasive character of the magmatic fabric in the various synkinematic granitic bodies, consistently displaying similar plane and direction of flow, argue for accommodation of large amounts of orogen-parallel movement by viscous deformation of these magmas. Moreover, activation of high-temperature deformation mechanisms probably allowed a much easier deformation of the hot synkinematic granites than of the colder country rock and, consequently, contributed significantly to the localization of deformation. Finally, the small extent of the low-temperature deformation suggests that the strike-slip deformation ended approximately synchronously with the final cooling of the peraluminous granites. The evolution of the deformation reflects the strong influence of synkinematic magma emplacement and subsequent cooling on the thermomechanical evolution of the shear zone. Magma intrusion in an orogen-scale transcurrent shear zone deeply modifies the rheological behavior of the continental crust. It triggers an efficient thermomechanical softening localized within the fault that may subsist long enough for large displacements to be accommodated. Therefore the close association of deformation and synkinematic magmatism probably represents an important factor controlling the mechanical response of continental plates in collisional environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banerjee, Srutarshi; Rajan, Rehim N.; Singh, Sandeep K.
2014-07-01
DC accelerators undergo different types of discharges during operation. A model depicting these discharges has been simulated to study the different transient conditions. The paper presents a physics-based approach to developing a compact circuit model of the DC accelerator using the Partial Element Equivalent Circuit (PEEC) technique. The equivalent RLC model aids in analyzing the transient behavior of the system and predicting anomalies in it. The electrical discharges and the properties prevailing in the accelerator can be evaluated with this equivalent model. A parallel-coupled voltage multiplier structure is simulated at small scale using a few stages of corona guards, and the theoretical and practical results are compared. The PEEC technique leads to a simple model for studying fault conditions in accelerator systems. Compared with finite element techniques, this technique gives a circuital representation. The lumped components of the PEEC are used to obtain the input impedance, and the result is also compared to that of the FEM technique for the frequency range 0-200 MHz. (author)
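Once a PEEC extraction has reduced part of a structure to lumped elements, frequency-domain behavior follows from elementary circuit analysis. As a hedged illustration — the component values below are hypothetical and not taken from the accelerator model — the input impedance of a series RLC branch is Z(f) = R + j·2πfL + 1/(j·2πfC):

```python
# Minimal lumped-element sketch: evaluate the input impedance of a series
# RLC branch over frequency, the kind of quantity a PEEC-derived equivalent
# circuit yields. Component values are illustrative only.
import math

def series_rlc_impedance(f, R, L, C):
    """Z(f) = R + j*w*L + 1/(j*w*C), with w = 2*pi*f."""
    w = 2.0 * math.pi * f
    return complex(R, 0.0) + 1j * w * L + 1.0 / (1j * w * C)

R, L, C = 10.0, 1e-6, 1e-9                       # 10 ohm, 1 uH, 1 nF (hypothetical)
f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))    # series resonance, ~5 MHz
z_res = series_rlc_impedance(f0, R, L, C)
# At resonance the inductive and capacitive reactances cancel, so Z ≈ R.
```

Sweeping `f` over 0-200 MHz and locating impedance minima is the frequency-domain analysis the abstract compares against FEM.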
Spencer, J.E.
1999-01-01
In the common type of industrial continuous casting, partially molten metal is extruded from a vessel through a shaped orifice called a mold in which the metal assumes the cross-sectional form of the mold as it cools and solidifies. Continuous casting can be sustained as long as molten metal is supplied and thermal conditions are maintained. I propose that a similar process produced parallel sets of grooves in three geologic settings, as follows: (1) corrugated metamorphic core complexes where mylonized mid-crustal rocks were exhumed by movement along low-angle normal faults known as detachment faults; (2) corrugated submarine surfaces where ultramafic and mafic rocks were exhumed by normal faulting within oceanic spreading centers; and (3) striated magma extrusions exemplified by the famous grooved outcrops at the Inca fortress of Sacsayhuaman in Peru. In each case, rocks inferred to have overlain the corrugated surface during corrugation genesis molded and shaped a plastic to partially molten rock mass as it was extruded from a moderate- to high-temperature reservoir.
Effects of three types of lecture notes on medical student achievement.
Russell, I J; Caris, T N; Harris, G D; Hendricson, W D
1983-08-01
Two parallel studies were conducted with junior medical students to determine what influence the form of lecture notes would have on learning. The three types of notes given to the students were: a comprehensive manuscript of the lecture containing text, tables, and figures; a partial handout which included some illustrations but required substantial annotation by the students; and a skeleton outline containing no data from the lecture. The students' knowledge about the subject was measured before the lecture, immediately after the lecture, two to four weeks later, and approximately three months later. The students' responses to questionnaires indicated a strong preference for very detailed handouts as essential to preparation for examinations. By contrast, the students' performances on tests generally were better for those who had received the partial or skeleton handout formats. This was particularly true for information presented during the last quarter of each lecture, when the learning efficiency of the skeleton handout group increased while the other two handout groups exhibited learning fatigue. It was concluded that learning by medical students was improved when they recorded notes in class.
NASA Astrophysics Data System (ADS)
Perreault, William E.; Mukherjee, Nandini; Zare, Richard N.
2018-05-01
Molecular interactions are best probed by scattering experiments. Interpretation of these studies has been limited by lack of control over the quantum states of the incoming collision partners. We report here the rotationally inelastic collisions of quantum-state prepared deuterium hydride (HD) with H2 and D2 using a method that provides an improved control over the input states. HD was coexpanded with its partner in a single supersonic beam, which reduced the collision temperature to 0-5 K, and thereby restricted the involved incoming partial waves to s and p. By preparing HD with its bond axis preferentially aligned parallel and perpendicular to the relative velocity of the colliding partners, we observed that the rotational relaxation of HD depends strongly on the initial bond-axis orientation. We developed a partial-wave analysis that conclusively demonstrates that the scattering mechanism involves the exchange of internal angular momentum between the colliding partners. The striking differences between H2/HD and D2/HD scattering suggest the presence of anisotropically sensitive resonances.
Colocalization of cellular nanostructure using confocal fluorescence and partial wave spectroscopy.
Chandler, John E; Stypula-Cyrus, Yolanda; Almassalha, Luay; Bauer, Greta; Bowen, Leah; Subramanian, Hariharan; Szleifer, Igal; Backman, Vadim
2017-03-01
A new multimodal confocal microscope has been developed, which includes a parallel Partial Wave Spectroscopic (PWS) microscopy path. This combination of modalities allows molecular-specific sensing of nanoscale intracellular structure using fluorescent labels. Combining molecular specificity and sensitivity to nanoscale structure allows localization of nanostructural intracellular changes, which is critical for understanding the mechanisms of diseases such as cancer. To demonstrate the capabilities of this multimodal instrument, we imaged HeLa cells treated with valinomycin, a potassium ionophore that uncouples oxidative phosphorylation. Colocalization of fluorescence images of the nuclei (Hoechst 33342) and mitochondria (anti-mitochondria conjugated to Alexa Fluor 488) with PWS measurements allowed us to detect a significant decrease in nuclear nanoscale heterogeneity (Σ), while no significant change in Σ was observed at mitochondrial sites. In addition, application of the new multimodal imaging approach was demonstrated on human buccal samples prepared using a cancer screening protocol. These images demonstrate that nanoscale intracellular structure can be studied in healthy and diseased cells at molecular-specific sites. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
High-order asynchrony-tolerant finite difference schemes for partial differential equations
NASA Astrophysics Data System (ADS)
Aditya, Konduri; Donzis, Diego A.
2017-12-01
Synchronizations of processing elements (PEs) in massively parallel simulations, which arise due to communication or load imbalances between PEs, significantly affect the scalability of scientific applications. We have recently proposed a method based on finite-difference schemes to solve partial differential equations in an asynchronous fashion - synchronization between PEs is relaxed at a mathematical level. While standard schemes can maintain their stability in the presence of asynchrony, their accuracy is drastically affected. In this work, we present a general methodology to derive asynchrony-tolerant (AT) finite difference schemes of arbitrary order of accuracy, which can maintain their accuracy when synchronizations are relaxed. We show that there are several choices available in selecting a stencil to derive these schemes and discuss their effect on numerical and computational performance. We provide a simple classification of schemes based on the stencil and derive schemes that are representative of different classes. Their numerical error is rigorously analyzed within a statistical framework to obtain the overall accuracy of the solution. Results from numerical experiments are used to validate the performance of the schemes.
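The accuracy penalty of relaxed synchronization is easy to demonstrate with a toy problem — a sketch in stdlib Python, not the schemes of the paper: solve the 1-D heat equation with an explicit central-difference scheme, but let one grid point consume a neighbour value that is one time step stale, mimicking a late halo exchange between PEs. The asynchrony-tolerant schemes described above would repair this by widening the stencil in time; that correction is not implemented here.

```python
# Demonstrate how asynchrony degrades finite-difference accuracy: FTCS on
# u_t = u_xx with periodic BCs, exact solution sin(x) * exp(-t). In the
# "async" run, grid point 0 uses a one-step-stale left-neighbour value.
import math

def heat_step(u, r, delayed_left=None):
    """One explicit FTCS step on a periodic grid. If delayed_left is given,
    point 0 uses that stale left-neighbour value (a late halo exchange)."""
    n = len(u)
    new = [0.0] * n
    for i in range(n):
        left = u[i - 1]
        if i == 0 and delayed_left is not None:
            left = delayed_left
        new[i] = u[i] + r * (u[(i + 1) % n] - 2.0 * u[i] + left)
    return new

n, steps = 32, 32
dx = 2.0 * math.pi / n
dt = 0.4 * dx * dx
r = dt / (dx * dx)                  # r = 0.4, stable for FTCS (r <= 0.5)

u_sync = [math.sin(i * dx) for i in range(n)]
u_async = u_sync[:]
prev_last = u_async[-1]             # stale copy of the halo value
for _ in range(steps):
    u_sync = heat_step(u_sync, r)
    stale = prev_last
    prev_last = u_async[-1]
    u_async = heat_step(u_async, r, delayed_left=stale)

t = steps * dt
exact = [math.sin(i * dx) * math.exp(-t) for i in range(n)]
err_sync = max(abs(a - b) for a, b in zip(u_sync, exact))
err_async = max(abs(a - b) for a, b in zip(u_async, exact))
# err_async exceeds err_sync: the stale halo injects a persistent local error.
```

The stale value leaves stability intact but pollutes the solution near the "PE boundary", which is exactly the accuracy loss the AT schemes are designed to eliminate.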
Tarutta, E P; Milash, S V; Tarasova, N A; Romanova, L I; Markosian, G A; Epishina, M V
2014-01-01
To determine the posterior pole contour of the eye based on the relative peripheral refractive error and relative eye length. A parallel study was performed, which enrolled 38 children (76 eyes) with myopia from -1.25 to -10.82 diopters. The patients underwent peripheral refraction assessment with the WR-5100K Binocular Auto Refractometer ("Grand Seiko", Japan) and partial coherence interferometry with the IOLMaster ("Carl Zeiss", Germany) for the relative eye length in areas located 15 and 30 degrees nasal and temporal from the central fovea along the horizontal meridian. In general, refractometry and interferometry showed high coincidence of defocus signs and values for the areas located 15 and 30 degrees nasal as well as 15 degrees temporal from the fovea. However, in 41% of patients defocus signs determined by the two methods mismatched in one or more areas. Most of the mismatch cases involved mild myopia. We suppose that such a mismatch is caused by optical peculiarities of the anterior eye segment that have an impact on refractometry results.
NASA Astrophysics Data System (ADS)
Bai, Xue-Mei; Liu, Tie; Liu, De-Long; Wei, Yong-Ju
2018-02-01
A chemometrics-assisted excitation-emission matrix (EEM) fluorescence method was proposed for the simultaneous determination of α-asarone and β-asarone in Acorus tatarinowii. Using the strategy of combining EEM data with chemometrics methods, the simultaneous determination of α-asarone and β-asarone in this complex Traditional Chinese Medicine system was achieved successfully, even in the presence of unexpected interferents. A physical or chemical separation step was avoided owing to the use of “mathematical separation”. Six second-order calibration methods were used, including parallel factor analysis (PARAFAC), alternating trilinear decomposition (ATLD), alternating penalty trilinear decomposition (APTLD), self-weighted alternating trilinear decomposition (SWATLD), unfolded partial least-squares (U-PLS) and multidimensional partial least-squares (N-PLS) with residual bilinearization (RBL). In addition, an HPLC method was developed to further validate the presented strategy. For the validation samples, the analytical results obtained by the six second-order calibration methods were comparably accurate. For the Acorus tatarinowii samples, however, the results indicated a slightly better predictive ability of the N-PLS/RBL procedure over the other methods.
Manzano, Aránzazu; Herranz, Raúl; den Toom, Leonardus A; Te Slaa, Sjoerd; Borst, Guus; Visser, Martijn; Medina, F Javier; van Loon, Jack J W A
2018-01-01
Clinostats and the Random Positioning Machine (RPM) are used to simulate microgravity, but, for space exploration, we need to know the response of living systems to fractional levels of gravity (partial gravity) as they exist on the Moon and Mars. We have developed and compared two different paradigms to simulate partial gravity using the RPM: one by implementing a centrifuge on the RPM (RPMHW), the other by applying specific software protocols to drive the RPM motors (RPMSW). The effects of the simulated partial gravity were tested in plant root meristematic cells, a system with a known response to real and simulated microgravity. Seeds of Arabidopsis thaliana were germinated under simulated Moon (0.17 g) and Mars (0.38 g) gravity. In parallel, seeds were germinated under simulated microgravity (RPM) or at 1 g control conditions. Fixed root meristematic cells from 4-day-grown seedlings were analyzed for cell proliferation rate and rate of ribosome biogenesis using morphometrical methods and molecular markers of the regulation of the cell cycle and nucleolar activity. Cell proliferation appeared increased and cell growth was depleted under Moon gravity, compared with the 1 g control. The effects were even greater at the Moon level than at simulated microgravity, indicating that meristematic competence (the balance between cell growth and proliferation) is also affected at this gravity level. However, the results at the simulated Mars level were close to the 1 g static control. This suggests that the threshold for sensing and responding to gravity alteration in the root lies at a level intermediate between Moon and Mars gravity. Both partial-g simulation strategies seem valid and show similar results at Moon g-levels, but further research is needed, in spaceflight and simulation facilities, especially around and beyond Mars g-levels, to understand more precisely the differences and constraints in the use of these facilities for the space biology community.
Li, Zhaosha; Blad, Clara C; van der Sluis, Ronald J; de Vries, Henk; Van Berkel, Theo J C; Ijzerman, Adriaan P; Hoekstra, Menno
2012-10-01
Niacin can effectively treat dyslipidaemic disorders. However, its clinical use is limited due to the cutaneous flushing mediated by the nicotinic acid receptor HCA2. In the current study, we evaluated two partial agonists for HCA2, LUF6281 and LUF6283, with respect to their anti-dyslipidaemic potential and cutaneous flushing effect. In vitro potency and efficacy studies with niacin and the two HCA2 partial agonists were performed using HEK293T cells stably expressing human HCA2. Normolipidaemic C57BL/6 mice received either niacin or the HCA2 partial agonists (400 mg·kg⁻¹·day⁻¹) once a day for 4 weeks for evaluation of their effects in vivo. A radioligand competitive binding assay showed Ki values for LUF6281 and LUF6283 of 3 and 0.55 µM, respectively. [³⁵S]-GTPγS binding revealed the rank order of potency as niacin > LUF6283 > LUF6281. All three compounds reduced plasma VLDL-triglyceride concentrations similarly, while LUF6281 and LUF6283, in contrast to niacin, did not exhibit the unwanted flushing side effect in C57BL/6 mice. Niacin reduced the expression of the lipolytic genes HSL and ATGL in adipose tissue by 50%, whereas LUF6281 and LUF6283 unexpectedly did not. In contrast, the decrease in VLDL-triglyceride concentration induced by LUF6281 and LUF6283 was associated with a parallel >40% reduced expression of APOB within the liver. The current study identifies LUF6281 and LUF6283, two HCA2 partial agonists of the pyrazole class, as promising drug candidates to achieve the beneficial lipid-lowering effect of niacin without producing the unwanted flushing side effect. © 2012 The Authors. British Journal of Pharmacology © 2012 The British Pharmacological Society.
Hewett, Zoe L; Pumpa, Kate L; Smith, Caroline A; Fahey, Paul P; Cheema, Birinder S
2018-04-01
The purpose of this study was to investigate the effect of 16 weeks of Bikram yoga on perceived stress, self-efficacy and health-related quality of life (HRQoL) in sedentary, stressed adults. A 16-week, parallel-arm, randomised controlled trial with flexible dosing was conducted. Physically inactive, stressed adults (37.2±10.8 years) were randomised to a Bikram yoga (three to five classes per week) or control (no treatment) group for 16 weeks. Outcome measures, collected via self-report, included perceived stress, general self-efficacy, and HRQoL. Outcomes were assessed at baseline, midpoint and completion. Individuals were randomised to the experimental (n=29) or control group (n=34). Average attendance in the experimental group was 27±18 classes. Repeated-measures analyses of variance (intention-to-treat) demonstrated significantly improved perceived stress (p=0.003, partial η²=0.109), general self-efficacy (p=0.034, partial η²=0.056), and the general health (p=0.034, partial η²=0.058) and energy/fatigue (p=0.019, partial η²=0.066) domains of HRQoL in the experimental group versus the control group. Attendance was significantly associated with reductions in perceived stress and an increase in several domains of HRQoL. Sixteen weeks of Bikram yoga significantly improved perceived stress, general self-efficacy and HRQoL in sedentary, stressed adults. Future research should consider ways to optimise adherence, and should investigate effects of Bikram yoga intervention in other populations at risk for stress-related illness. Australia New Zealand Clinical Trials Registry ACTRN12616000867493. Registered 04 July 2016. URL: http://www.anzctr.org.au/ACTRN12616000867493.aspx. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Therapeutic effect of intranasal evaporative cooling in patients with migraine: a pilot study.
Vanderpol, Jitka; Bishop, Barbara; Matharu, Manjit; Glencorse, Mark
2015-01-26
Cryotherapy is the most common non-pharmacological pain-relieving method. The aim of this pilot study was to ascertain whether intranasal evaporative cooling may be an effective intervention in an acute migraine attack. Studies have previously demonstrated the effectiveness of a variety of cryotherapy approaches. Intranasal evaporative cooling, owing to the vascular anatomy, allows the transfer of venous blood from the nasal and paranasal mucous membranes to the dura mater, thereby providing an excellent anatomical basis for the cooling process. We conducted a prospective, open-label, observational, pilot study. Twenty-eight patients who satisfied the International Classification of Headache Disorders (ICHD-2) diagnostic criteria for migraine were recruited. A total of 20 treatments were administered in 15 patients. All patients provided pain severity scores and migraine-associated symptom severity scores (based on a 0-10 visual analogue scale [VAS]). Of the 20 treatments, intranasal evaporative cooling rendered patients pain and symptom free immediately after treatment in 8 of the treatments (40%); a further 10 treatments (50%) resulted in partial pain relief (headache reduced from severe or moderate to mild) and partial symptom relief. At 2 hours, 9 treatments (45%) provided full pain and symptom relief, with a further 9 treatments (45%) resulting in partial pain and symptom relief. At 24 hours, 10 treatments (50%) resulted in patients reporting pain and symptom freedom and 3 (15%) provided partial pain relief. In summary, 13 patients (87%) had benefit from the treatment within 2 hours that was sustained at 24 hours. Intranasal evaporative cooling gave considerable benefit to patients with migraine, improving headache severity and migraine-associated symptoms. A further randomised, placebo-controlled, double-blinded, parallel clinical trial is required to investigate the potential of this application.
Clinicaltrials.gov registered trial, ClinicalTrials.gov Identifier: NCT01898455.
Randomized controlled trial of zonisamide for the treatment of refractory partial-onset seizures.
Faught, E; Ayala, R; Montouris, G G; Leppik, I E
2001-11-27
Zonisamide is a sulfonamide antiepilepsy drug with sodium and calcium channel-blocking actions. Experience in Japan and a previous European double-blind study have demonstrated its efficacy against partial-onset seizures. A randomized, double-blind, placebo-controlled trial enrolling 203 patients was conducted at 20 United States sites to assess zonisamide efficacy and dose response as adjunctive therapy for refractory partial-onset seizures. Zonisamide dosages were elevated by 100 mg/d each week. The study design allowed parallel comparisons with placebo for three dosages and a final crossover to 400 mg/d of zonisamide for all patients. The primary efficacy comparison was change in seizure frequency from a 4-week placebo baseline to weeks 8 through 12 on blinded therapy. At 400 mg/d, zonisamide reduced the median frequency of all seizures by 40.5% from baseline, compared with a 9% reduction (p = 0.0009) with placebo treatment, and produced a ≥50% seizure reduction (responder rate) in 42% of patients. A dosage of 100 mg/d produced a 20.5% reduction in median seizure frequency (p = 0.038 compared with placebo) and a dosage of 200 mg/d produced a 24.7% reduction in median seizure frequency (p = 0.004 compared with placebo). Dropouts from adverse events (10%) did not differ from placebo (8.2%, NS). The only adverse event differing significantly from placebo was weight loss, though somnolence, anorexia, and ataxia were slightly more common with zonisamide treatment. Serum zonisamide concentrations rose with increasing dose. Zonisamide is effective and well tolerated as an adjunctive agent for refractory partial-onset seizures. The minimal effective dosage was 100 mg/d, but 400 mg/d was the most effective dosage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matenine, D; Cote, G; Mascolo-Fortin, J
2016-06-15
Purpose: Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersections between the photons’ trajectories and the object, also called ray-tracing or system matrix computation. This work evaluates different ways to store the system matrix, aiming to reconstruct dense image grids in reasonable time. Methods: We propose an optimized implementation of the Siddon’s algorithm using graphics processing units (GPUs) with a novel data storage scheme. The algorithm computes a part of the system matrix on demand, typically, for one projection angle. The proposed method was enhanced with accelerating options: storage of larger subsets of the system matrix, systematic reuse of data via geometric symmetries, an arithmetic-rich parallel code and code configuration via machine learning. It was tested on geometries mimicking a cone beam CT acquisition of a human head. To realistically assess the execution time, the ray-tracing routines were integrated into a regularized Poisson-based reconstruction algorithm. The proposed scheme was also compared to a different approach, where the system matrix is fully pre-computed and loaded at reconstruction time. Results: Fast ray-tracing of realistic acquisition geometries, which often lack spatial symmetry properties, was enabled via the proposed method. Ray-tracing interleaved with projection and backprojection operations required significant additional time. In most cases, ray-tracing was shown to use about 66% of the total reconstruction time. In absolute terms, tracing times varied from 3.6 s to 7.5 min, depending on the problem size. The presence of geometrical symmetries allowed for non-negligible ray-tracing and reconstruction time reduction. Arithmetic-rich parallel code and machine learning permitted a modest reconstruction time reduction, in the order of 1%.
Conclusion: Partial system matrix storage permitted the reconstruction of higher 3D image grid sizes and larger projection datasets at the cost of additional time, when compared to the fully pre-computed approach. This work was supported in part by the Fonds de recherche du Quebec - Nature et technologies (FRQ-NT). The authors acknowledge partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council of Canada (Grant No. 432290).
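For reference, the core of Siddon's algorithm is the computation of ray-pixel intersection lengths, i.e. one row of the system matrix. A minimal 2-D pure-Python version — no GPU, no symmetry reuse, no machine-learned configuration as in the work above; unit pixels on a grid spanning [0, nx] × [0, ny] — looks like this:

```python
# Minimal 2D Siddon-style ray tracer: for a ray from p0 to p1, return the
# intersection length with each pixel it crosses (one system-matrix row).
# Didactic sketch; production code would use the incremental form on a GPU.
import math

def siddon_2d(p0, p1, nx, ny):
    (x0, y0), (x1, y1) = p0, p1
    length = math.hypot(x1 - x0, y1 - y0)
    # Parametric values where the ray crosses vertical / horizontal grid lines.
    alphas = {0.0, 1.0}
    if x1 != x0:
        for i in range(nx + 1):
            a = (i - x0) / (x1 - x0)
            if 0.0 < a < 1.0:
                alphas.add(a)
    if y1 != y0:
        for j in range(ny + 1):
            a = (j - y0) / (y1 - y0)
            if 0.0 < a < 1.0:
                alphas.add(a)
    alphas = sorted(alphas)
    weights = {}
    # Each consecutive pair of crossings bounds a segment inside one pixel;
    # the segment midpoint identifies which pixel that is.
    for a_lo, a_hi in zip(alphas, alphas[1:]):
        a_mid = 0.5 * (a_lo + a_hi)
        ix = int(x0 + a_mid * (x1 - x0))
        iy = int(y0 + a_mid * (y1 - y0))
        if 0 <= ix < nx and 0 <= iy < ny:
            weights[(ix, iy)] = weights.get((ix, iy), 0.0) + (a_hi - a_lo) * length
    return weights

# A horizontal ray through the middle of a 4x4 grid crosses four pixels,
# spending unit length in each.
w = siddon_2d((0.0, 2.5), (4.0, 2.5), 4, 4)
```

Computing such rows on demand, one projection angle at a time, is the "partial storage" strategy the abstract evaluates against full pre-computation.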
Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...
2017-03-05
Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
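The baseline these hybrid schemes start from is the stationary preconditioned Richardson iteration x ← x + M⁻¹(b − Ax) for a convergent splitting A = M − N. A pure-Python sketch with a Jacobi (diagonal) splitting, chosen only for illustration — the Monte Carlo acceleration itself is not implemented here:

```python
# Stationary preconditioned Richardson iteration with a Jacobi splitting:
# M = diag(A), so each sweep is x <- x + M^{-1} (b - A x). Converges when
# the spectral radius of M^{-1} N is below one (diagonal dominance suffices).

def richardson(A, b, iters=200):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [x[i] + r[i] / A[i][i] for i in range(n)]
    return x

# Tiny diagonally dominant system (exact solution [1, 1, 1]).
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [3.0, 2.0, 3.0]
x = richardson(A, b)
```

The schemes in the abstract replace deterministic sweeps of this iteration with Monte Carlo estimates, which is what gives them their potential fault resiliency.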
Fructose 2,6-bisphosphate and 6-phosphofructo-2-kinase during liver regeneration.
Rosa, J L; Ventura, F; Carreras, J; Bartrons, R
1990-01-01
Glycogen and fructose 2,6-bisphosphate levels in rat liver decreased quickly after partial hepatectomy. After 7 days the glycogen level was normalized and fructose 2,6-bisphosphate concentration still remained low. The 'active' (non-phosphorylated) form of 6-phosphofructo-2-kinase varied in parallel with fructose 2,6-bisphosphate levels, whereas the 'total' activity of the enzyme decreased only after 24 h, similarly to glucokinase. The response of 6-phosphofructo-2-kinase/fructose-2,6-bisphosphatase from hepatectomized rats (96 h) to sn-glycerol 3-phosphate and to cyclic AMP-dependent protein kinase was different from that of the enzyme from control animals and similar to that of the foetal isoenzyme. PMID:2173548
NASA Astrophysics Data System (ADS)
Regoutz, A.; Oropeza, F. E.; Poll, C. G.; Payne, D. J.; Palgrave, R. G.; Panaccione, G.; Borgatti, F.; Agrestini, S.; Utsumi, Y.; Tsuei, K. D.; Liao, Y. F.; Watson, G. W.; Egdell, R. G.
2016-03-01
The contributions of Sn 5s and Ti 4s states to the valence band electronic structure of Sn-doped anatase have been identified by hard X-ray photoelectron spectroscopy. The metal s state intensity is strongly enhanced relative to that of O 2p states at high photon energies due to matrix element effects when electrons are detected parallel to the direction of the polarisation vector of the synchrotron beam, but becomes negligible in the perpendicular direction. The experimental spectra in both polarisations are in good agreement with cross section and asymmetry parameter weighted partial densities of states derived from density functional theory calculations.
Method for preparing metallated filament-wound structures
Peterson, George R.
1979-01-01
Metallated graphite filament-wound structures are prepared by coating a continuous multi-filament carbon yarn with a metal carbide, impregnating the carbide-coated yarn with a polymerizable carbon precursor, winding the resulting filament about a mandrel, partially curing the impregnated yarn in air, subjecting the wound composite to heat and pressure to cure the carbon precursor, and thereafter heating the composite in a sizing die at a pressure loading of at least 1000 psi to graphitize the carbonaceous material in the composite. The carbide in the composite coalesces into rod-like shapes which are disposed in an end-to-end relationship parallel with the filaments to provide resistance to erosion in abrasive-laden atmospheres.
NASA Technical Reports Server (NTRS)
Meek, C. E.; Reid, I. M.
1984-01-01
It has been suggested that the velocities produced by the spaced antenna partial-reflection drift experiment may constitute a measure of the vertical oscillations due to short-period gravity waves rather than the mean horizontal flow. The contention is that the interference between, say, two scatterers, one traveling upward and the other down, will create a pattern which sweeps across the ground in the direction of (or anti-parallel to) the wave propagation. Since the expected result, viz., spurious drift directions, is seldom, if ever, seen in spaced antenna drift velocities, this speculation is tested in an atmospheric model.
Mapping implicit spectral methods to distributed memory architectures
NASA Technical Reports Server (NTRS)
Overman, Andrea L.; Vanrosendale, John
1991-01-01
Spectral methods have proven invaluable in the numerical simulation of partial differential equations (PDEs), but the frequent global communication they require raises a fundamental barrier to their use on highly parallel architectures. To explore this issue, a 3-D implicit spectral method was implemented on an Intel hypercube. Utilization of about 50 percent was achieved on a 32-node iPSC/860 hypercube for a 64 x 64 x 64 Fourier-spectral grid; finer grids yield higher utilizations. Chebyshev-spectral grids are more problematic, since plane-relaxation-based multigrid is required. However, by using a semicoarsening multigrid algorithm and relaxing all multigrid levels concurrently, relatively high utilizations were also achieved in this harder case.
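The global-communication issue stems from the transform at the heart of any Fourier-spectral solver: every output mode depends on every grid point. A minimal single-node sketch (a periodic 1-D Poisson solve, not the paper's 3-D implicit scheme) illustrates the kernel being parallelized:

```python
import numpy as np

def poisson_fourier(f, L=2 * np.pi):
    """Solve u'' = f with periodic boundary conditions by a Fourier-spectral
    method. The FFT couples every grid point to every mode, which is why
    distributed-memory implementations need global (transpose) communication."""
    n = f.size
    k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # angular wavenumbers
    fh = np.fft.fft(f)
    uh = np.zeros_like(fh)
    nz = k != 0
    uh[nz] = -fh[nz] / k[nz] ** 2                # invert -k^2 mode by mode
    return np.fft.ifft(uh).real                  # zero-mean solution

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = poisson_fourier(-np.sin(x))                  # u'' = -sin(x)  =>  u = sin(x)
```

The solve is spectrally accurate: for a smooth right-hand side the error is at machine-precision level.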
Kang, Hae Ji; Bennett, Shannon N.; Dizney, Laurie; Sumibcay, Laarni; Arai, Satoru; Ruedas, Luis A.; Song, Jin-Won; Yanagihara, Richard
2009-01-01
A genetically distinct hantavirus, designated Oxbow virus (OXBV), was detected in tissues of an American shrew mole (Neurotrichus gibbsii), captured in Gresham, Oregon, in September 2003. Pairwise analysis of full-length S- and M- and partial L-segment nucleotide and amino acid sequences of OXBV indicated low sequence similarity with rodent-borne hantaviruses. Phylogenetic analyses using maximum-likelihood and Bayesian methods, and host-parasite evolutionary comparisons, showed that OXBV and Asama virus, a hantavirus recently identified from the Japanese shrew mole (Urotrichus talpoides), were related to soricine shrew-borne hantaviruses from North America and Eurasia, respectively, suggesting parallel evolution associated with cross-species transmission. PMID:19394994
Internal motions of HII regions and giant HII regions
NASA Technical Reports Server (NTRS)
Chu, You-Hua; Kennicutt, Robert C., Jr.
1994-01-01
We report new echelle observations of the kinematics of 30 HII regions in the Large Magellanic Cloud (LMC), including the 30 Doradus giant HII region. All of the HII regions possess supersonic velocity dispersions, which can be attributed to a combination of turbulent motions and discrete velocity splitting produced by stellar winds and/or embedded supernova remnants (SNRs). The core of 30 Dor is unique, with a complex velocity structure that parallels its chaotic optical morphology. We use our calibrated echelle data to measure the physical properties and energetic requirements of these velocity structures. The most spectacular structures in 30 Dor are several fast expanding shells, which appear to be produced at least partially by SNRs.
Electron Heating at Kinetic Scales in Magnetosheath Turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chasapis, Alexandros; Matthaeus, W. H.; Parashar, T. N.
2017-02-20
We present a statistical study of coherent structures at kinetic scales, using data from the Magnetospheric Multiscale mission in the Earth’s magnetosheath. We implemented the multi-spacecraft partial variance of increments (PVI) technique to detect these structures, which are associated with intermittency at kinetic scales. We examine the properties of the electron heating occurring within such structures. We find that, statistically, structures with a high PVI index are regions of significant electron heating. We also focus on one such structure, a current sheet, which shows some signatures consistent with magnetic reconnection. Strong parallel electron heating coincides with whistler emissions at the edges of the current sheet.
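The single-spacecraft form of the PVI index is straightforward to compute: the magnitude of the field increment at a given lag, normalized by its rms over the interval. The sketch below (synthetic data, and the commonly used threshold of 3, both illustrative assumptions) stands in for the multi-spacecraft implementation used in the study:

```python
import numpy as np

def pvi(B, lag):
    """Partial Variance of Increments: |dB| normalized by its rms over the
    interval. Values well above ~3 flag intermittent coherent structures."""
    dB = B[lag:] - B[:-lag]
    mag = np.linalg.norm(dB, axis=1)
    return mag / np.sqrt(np.mean(mag ** 2))

# Synthetic single-spacecraft series: smooth rotation plus one sharp jump
# standing in for a current sheet.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 2000)
B = np.stack([np.sin(2 * np.pi * t),
              np.cos(2 * np.pi * t),
              0.05 * rng.standard_normal(t.size)], axis=1)
B[1000:, 0] += 3.0                       # abrupt field rotation near sample 1000
series = pvi(B, lag=4)
events = np.flatnonzero(series > 3.0)    # candidate coherent structures
```

The detector fires only at the embedded discontinuity, not in the smooth background.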
Viscoelastic Postseismic Rebound to Strike-Slip Earthquakes in Regions of Oblique Plate Convergence
NASA Technical Reports Server (NTRS)
Cohen, Steven C.
1999-01-01
According to the slip partitioning concept, the trench-parallel component of relative plate motion in regions of oblique convergence is accommodated by strike-slip faulting in the overriding continental lithosphere. The pattern of postseismic surface deformation due to viscoelastic flow in the lower crust and asthenosphere following a major earthquake on such a fault is modified from that predicted by the conventional elastic layer over viscoelastic halfspace model by the presence of the subducting slab. The predicted effects, such as a partial suppression of the postseismic velocities by 1 cm/yr or more immediately following a moderate to great earthquake, are potentially detectable using contemporary geodetic techniques.
Viriato: a Fourier-Hermite spectral code for strongly magnetised fluid-kinetic plasma dynamics
NASA Astrophysics Data System (ADS)
Loureiro, Nuno; Dorland, William; Fazendeiro, Luis; Kanekar, Anjor; Mallet, Alfred; Zocco, Alessandro
2015-11-01
We report on the algorithms and numerical methods used in Viriato, a novel fluid-kinetic code that solves two distinct sets of equations: (i) the Kinetic Reduced Electron Heating Model equations [Zocco & Schekochihin, 2011] and (ii) the kinetic reduced MHD (KRMHD) equations [Schekochihin et al., 2009]. Two main applications of these equations are magnetised (Alfvénic) plasma turbulence and magnetic reconnection. Viriato uses operator splitting to separate the dynamics parallel and perpendicular to the ambient magnetic field (assumed strong). Along the magnetic field, Viriato allows for either a second-order accurate MacCormack method or, for higher accuracy, a spectral-like scheme. Perpendicular to the field Viriato is pseudo-spectral, and the time integration is performed by means of an iterative predictor-corrector scheme. In addition, a distinctive feature of Viriato is its spectral representation of the parallel velocity-space dependence, achieved by means of a Hermite representation of the perturbed distribution function. A series of linear and nonlinear benchmarks and tests are presented, with focus on 3D decaying kinetic turbulence. Work partially supported by Fundação para a Ciência e Tecnologia via Grants UID/FIS/50010/2013 and IF/00530/2013.
Sparse distributed memory overview
NASA Technical Reports Server (NTRS)
Raugh, Mike
1990-01-01
The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
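The associative-recall behavior described above can be sketched in a minimal Kanerva-style SDM: hard locations with fixed random addresses, a write that updates the counters of every location within a Hamming radius of the address, and a read that sums and thresholds. All sizes below are illustrative assumptions, not the project's parameters:

```python
import numpy as np

class SparseDistributedMemory:
    """Minimal Kanerva-style SDM sketch: a write adds the bipolar data vector
    to the counters of every hard location within Hamming radius r of the
    address; a read sums the counters of activated locations and thresholds
    at zero."""

    def __init__(self, n_locations=2000, dim=256, radius=112, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, (n_locations, dim))
        self.counters = np.zeros((n_locations, dim), dtype=int)
        self.radius = radius

    def _activated(self, addr):
        return (self.addresses != addr).sum(axis=1) <= self.radius

    def write(self, addr, data):
        self.counters[self._activated(addr)] += 2 * data - 1  # 0/1 -> -1/+1

    def read(self, addr):
        total = self.counters[self._activated(addr)].sum(axis=0)
        return (total > 0).astype(int)

rng = np.random.default_rng(1)
sdm = SparseDistributedMemory()
pattern = rng.integers(0, 2, 256)
sdm.write(pattern, pattern)                    # autoassociative storage
cue = pattern.copy()
cue[rng.choice(256, 20, replace=False)] ^= 1   # cue only partially matches
recalled = sdm.read(cue)
```

Because the activation regions of the original address and the corrupted cue overlap heavily, the read recovers the stored pattern despite the 20 flipped bits.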
Inversion of potential field data using the finite element method on parallel computers
NASA Astrophysics Data System (ADS)
Gross, L.; Altinay, C.; Shaw, S.
2015-11-01
In this paper we present a formulation of the joint inversion of potential field anomaly data as an optimization problem with partial differential equation (PDE) constraints. The problem is solved using the iterative Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, with the Hessian operator of the regularization and cross-gradient component of the cost function as preconditioner. We show that each iterative step requires the solution of several PDEs, namely for the potential fields, for the adjoint defects, and for the application of the preconditioner. As an extension of the traditional discrete formulation, the BFGS method is applied to continuous descriptions of the unknown physical properties in combination with an appropriate integral form of the dot product. The PDEs can easily be solved using standard conforming finite element methods (FEMs) with potentially different resolutions. For two examples we demonstrate that the number of PDE solutions required to reach a given tolerance in the BFGS iteration is controlled by the weighting of the regularization and cross-gradient terms but is independent of the resolution of the PDE discretization, and that as a consequence the method is weakly scalable with the number of cells on parallel computers. We also show a comparison with the UBC-GIF GRAV3D code.
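The structure of such an inversion, a data-misfit-plus-regularization cost minimized by BFGS with an analytic gradient, can be sketched on a toy problem. Here a dense matrix G stands in (purely for illustration) for the PDE forward operators, and a Tikhonov term stands in for the regularization/cross-gradient component; the weight alpha is an assumed value:

```python
import numpy as np
from scipy.optimize import minimize

# Toy linear inverse problem d = G m + noise.
rng = np.random.default_rng(4)
G = rng.standard_normal((30, 10))
m_true = rng.standard_normal(10)
d = G @ m_true + 0.01 * rng.standard_normal(30)
alpha = 1e-3  # regularization weight (assumed value)

def cost(m):
    """Tikhonov-regularized least-squares misfit."""
    r = G @ m - d
    return 0.5 * (r @ r) + 0.5 * alpha * (m @ m)

def grad(m):
    """Analytic gradient; in the paper this role is played by adjoint solves."""
    return G.T @ (G @ m - d) + alpha * m

res = minimize(cost, np.zeros(10), jac=grad, method="BFGS")
m_est = res.x
```

Supplying the exact gradient, as adjoint methods do for PDE-constrained problems, is what keeps the per-iteration cost down to a fixed number of (here trivial) forward/adjoint evaluations.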
Islam, M Nurul; Fox, David; Guo, Rong; Enomoto, Takemi; Wang, Weidong
2010-05-01
The RecQL5 helicase is essential for maintaining genome stability and reducing cancer risk. To elucidate its mechanism of action, we purified a RecQL5-associated complex and identified its major component as RNA polymerase II (Pol II). Bioinformatics and structural modeling-guided mutagenesis revealed two conserved regions in RecQL5 as KIX and SRI domains, already known in transcriptional regulators for Pol II. The RecQL5-KIX domain binds both initiation (Pol IIa) and elongation (Pol IIo) forms of the polymerase, whereas the RecQL5-SRI domain interacts only with the elongation form. Fully functional RecQL5 requires both helicase activity and associations with the initiation polymerase, because mutants lacking either activity are partially defective in the suppression of sister chromatid exchange and resistance to camptothecin-induced DNA damage, and mutants lacking both activities are completely defective. We propose that RecQL5 promotes genome stabilization through two parallel mechanisms: by participation in homologous recombination-dependent DNA repair as a RecQ helicase and by regulating the initiation of Pol II to reduce transcription-associated replication impairment and recombination.
Increasing morphological complexity in multiple parallel lineages of the Crustacea
Adamowicz, Sarah J.; Purvis, Andy; Wills, Matthew A.
2008-01-01
The prospect of finding macroevolutionary trends and rules in the history of life is tremendously appealing, but very few pervasive trends have been found. Here, we demonstrate a parallel increase in the morphological complexity of most of the deep lineages within a major clade. We focus on the Crustacea, measuring the morphological differentiation of limbs. First, we show a clear trend of increasing complexity among 66 free-living, ordinal-level taxa from the Phanerozoic fossil record. We next demonstrate that this trend is pervasive, occurring in 10 or 11 of 12 matched-pair comparisons (across five morphological diversity indices) between extinct Paleozoic and related Recent taxa. This clearly differentiates the pattern from the effects of lineage sorting. Furthermore, newly appearing taxa tend to have had more types of limbs and a higher degree of limb differentiation than the contemporaneous average, whereas those going extinct showed higher-than-average limb redundancy. Patterns of contemporary species diversity partially reflect the paleontological trend. These results provide a rare demonstration of a large-scale and probably driven trend occurring across multiple independent lineages and influencing both the form and number of species through deep time and in the present day. PMID:18347335
A hydrodynamic mechanism for spontaneous formation of ordered drop arrays in confined shear flow
NASA Astrophysics Data System (ADS)
Singha, Sagnik; Zurita-Gotor, Mauricio; Loewenberg, Michael; Migler, Kalman; Blawzdziewicz, Jerzy
2017-11-01
It has been experimentally demonstrated that a drop monolayer driven by a confined shear flow in a Couette device can spontaneously arrange into a flow-oriented parallel chain microstructure. However, the hydrodynamic mechanism of this puzzling self-assembly phenomenon has so far eluded explanation. In a recent publication we suggested that the observed spontaneous drop ordering may arise from hydrodynamic interparticle interactions via a far-field quadrupolar Hele-Shaw flow associated with drop deformation. To verify this conjecture we have developed a simple numerical-simulation model that includes the far-field Hele-Shaw flow quadrupoles and a near-field short-range repulsion. Our simulations show that an initially disordered particle configuration self-organizes into a system of particle chains, similar to the experimentally observed drop-chain structures. The initial stage of chain formation is fast; subsequently, microstructural defects in a partially ordered system are removed by slow annealing, leading to an array of equally spaced parallel chains with a small number of defects. The microstructure evolution is analyzed using angular and spatial order parameters and correlation functions. Supported by NSF Grants No. CBET 1603627 and CBET 1603806.
Earth-to-orbit reusable launch vehicles: A comparative assessment
NASA Technical Reports Server (NTRS)
Chase, R. L.
1978-01-01
A representative set of space systems, functions, and missions for NASA and DoD, from which launch vehicle requirements and characteristics were derived, was established, along with a set of air-breathing launch vehicles based on graduated technology capabilities corresponding to increasingly higher staging Mach numbers. The utility of the air-breathing launch vehicle candidates was assessed on the basis of lift-off weight, performance, technology needs, and risk, and costs were compared to those of alternative concepts. The results indicate that a fully reusable launch vehicle, whether two-stage or one-stage, could potentially reduce the cost per flight by 60-80% compared to that of a partially reusable vehicle, but would require advances in thermal protection system technology. A two-stage-to-orbit, parallel-lift vehicle with an air-breathing booster would cost approximately the same as a single-stage-to-orbit vehicle, but the former would have greater flexibility and a significantly reduced developmental risk. A twin-booster, subsonic-staged, parallel-lift vehicle represents the lowest system cost and developmental risk. However, if a large supersonic turbojet engine in the 350,000-N thrust class were available, supersonic staging would be preferred, and the investment in development would be returned in reduced program cost.
NASA Astrophysics Data System (ADS)
Throumoulopoulos, G. N.; Tasso, H.
2003-06-01
The equilibrium of an axisymmetric magnetically confined plasma with anisotropic resistivity and incompressible flows parallel to the magnetic field is investigated within the framework of the magnetohydrodynamic (MHD) theory by keeping the convective flow term in the momentum equation. It turns out that the stationary states are determined by a second-order elliptic partial differential equation for the poloidal magnetic flux function ψ along with a decoupled Bernoulli equation for the pressure identical in form with the respective ideal MHD equations; equilibrium consistent expressions for the resistivities η∥ and η⊥ parallel and perpendicular to the magnetic field are also derived from Ohm's and Faraday's laws. Unlike in the case of stationary states with isotropic resistivity and parallel flows [G. N. Throumoulopoulos and H. Tasso, J. Plasma Phys. 64, 601 (2000)] the equilibrium is compatible with nonvanishing poloidal current densities. Also, although exactly Spitzer resistivities either η∥(ψ) or η⊥(ψ) are not allowed, exact solutions with vanishing poloidal electric fields can be constructed with η∥ and η⊥ profiles compatible with roughly collisional resistivity profiles, i.e., profiles having a minimum close to the magnetic axis, taking very large values on the boundary and such that η⊥>η∥. For equilibria with vanishing flows satisfying the relation (dP/dψ)(dI2/dψ)>0, where P and I are the pressure and the poloidal current functions, the difference η⊥-η∥ for the reversed-field pinch scaling, Bp≈Bt, is nearly two times larger than that for the tokamak scaling, Bp≈0.1Bt (Bp and Bt are the poloidal and toroidal magnetic-field components). The particular resistive equilibrium solutions obtained in the present work, inherently free of—but not inconsistent with—Pfirsch-Schlüter diffusion, indicate that parallel flows might result in a reduction of the diffusion observed in magnetically confined plasmas.
Study of near SOL decay lengths in ASDEX Upgrade under attached and detached divertor conditions
NASA Astrophysics Data System (ADS)
Sun, H. J.; Wolfrum, E.; Kurzan, B.; Eich, T.; Lackner, K.; Scarabosio, A.; Paradela Pérez, I.; Kardaun, O.; Faitsch, M.; Potzel, S.; Stroth, U.; the ASDEX Upgrade Team
2017-10-01
A database with attached, partially detached and completely detached divertors has been constructed from ASDEX Upgrade discharges in both H-mode and L-mode plasmas with Thomson scattering data suitable for the analysis of the upstream SOL electron profiles. By comparing the upstream temperature decay width, λ_Te,u, with the scaling of the SOL power decay width, λ_q∥e, based on the downstream IR measurements, it is found that a simple relation based on classical electron conduction can relate λ_Te,u and λ_q∥e well. The combined dataset can be described by both a single scaling and separate scalings for H-modes and L-modes. For the single scaling, a strong inverse dependence of λ_Te,u on the separatrix temperature, T_e,u, is found, suggesting the classical parallel Spitzer-Härm conductivity as the dominant mechanism controlling the SOL width in both L-mode and H-mode over a large set of plasma parameters. This dependence on T_e,u explains why, for the same global plasma parameters, λ_q∥e in L-mode is approximately twice that in H-mode, and why, under detached conditions, the SOL upstream electron profile broadens when the density reaches a critical value. Comparing the scaling derived from experimental data with power balance gives the cross-field thermal diffusivity as χ⊥ ∝ T_e^(1/2)/n_e, consistent with earlier studies on Compass-D, JET and Alcator C-Mod. However, the possibility of separate scalings for the different regimes cannot be excluded; these give results similar to those previously reported for the H-mode, but here the wider SOL width for L-mode plasmas is explained simply by the larger premultiplying coefficient. The relative merits of the two scalings in representing the data and their theoretical implications are discussed.
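Assuming the "simple relation based on classical electron conduction" is the standard Spitzer-Härm estimate (an inference, not stated explicitly in the abstract), it follows for exponential upstream profiles:

```latex
% Spitzer-Harm parallel conduction: q_\parallel = -\kappa_0 T^{5/2}\, \partial T/\partial s,
% so q_\parallel \propto \partial_s T^{7/2}. For T(x) \propto e^{-x/\lambda_{Te,u}} upstream,
% T^{7/2} \propto e^{-7x/(2\lambda_{Te,u})}, hence
\lambda_{q\parallel e} \simeq \tfrac{2}{7}\,\lambda_{Te,u}
\quad\Longleftrightarrow\quad
\lambda_{Te,u} \simeq \tfrac{7}{2}\,\lambda_{q\parallel e}.
```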
Segmentation and automated measurement of chronic wound images: probability map approach
NASA Astrophysics Data System (ADS)
Ahmad Fauzi, Mohammad Faizal; Khansa, Ibrahim; Catignani, Karen; Gordillo, Gayle; Sen, Chandan K.; Gurcan, Metin N.
2014-03-01
An estimated 6.5 million patients in the United States are affected by chronic wounds, with more than 25 billion US dollars and countless hours spent annually on all aspects of chronic wound care. There is a need to develop software tools to analyze wound images that characterize wound tissue composition, measure wound size, and monitor changes over time. This process, when done manually, is time-consuming and subject to intra- and inter-reader variability. In this paper, we propose a method that can characterize chronic wounds containing granulation, slough and eschar tissues. First, we generate a Red-Yellow-Black-White (RYKW) probability map, which then guides the region growing segmentation process. The red, yellow and black probability maps are designed to handle the granulation, slough and eschar tissues, respectively, while the white probability map is designed to detect the white label card used for measurement calibration. The innovative aspects of this work include: (1) definition of a wound-characteristics-specific probability map for segmentation; (2) computationally efficient region growing on a 4D map; (3) auto-calibration of measurements using the content of the image. The method was applied to 30 wound images provided by the Ohio State University Wexner Medical Center, with the ground truth independently generated by the consensus of two clinicians. While the inter-reader agreement between the readers is 85.5%, the computer achieves an accuracy of 80%.
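The probability-map-guided region growing idea can be sketched on a toy image. The Gaussian color model, the reference colors, and the 0.5 growth threshold below are all illustrative assumptions, not the paper's actual probability-map formulas:

```python
import numpy as np
from collections import deque

def color_probability(img, ref, sigma=30.0):
    """Per-pixel class membership probability, modeled (as an assumption)
    as a Gaussian of the RGB distance to a reference tissue color."""
    d2 = ((img.astype(float) - ref) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def region_grow(prob, seed, thresh=0.5):
    """4-connected region growing guided by a probability map."""
    h, w = prob.shape
    mask = np.zeros((h, w), bool)
    q = deque([seed])
    while q:
        y, x = q.popleft()
        if 0 <= y < h and 0 <= x < w and not mask[y, x] and prob[y, x] >= thresh:
            mask[y, x] = True
            q.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return mask

# Toy image: a red "granulation" patch on a white background.
img = np.full((40, 40, 3), 255, np.uint8)
img[10:30, 10:30] = (200, 30, 30)
p_red = color_probability(img, ref=np.array([200, 30, 30]))
mask = region_grow(p_red, seed=(20, 20))
```

Growing from a seed inside the patch segments exactly the 20x20 red region, since the white background has near-zero probability under the red model.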
NASA Astrophysics Data System (ADS)
Filgueira, Ramón; Rosland, Rune; Grant, Jon
2011-11-01
Growth of Mytilus edulis was simulated using individual based models following both Scope For Growth (SFG) and Dynamic Energy Budget (DEB) approaches. These models were parameterized using independent studies and calibrated for each dataset by adjusting the half-saturation coefficient of the food ingestion function term, XK, a common parameter in both approaches related to feeding behavior. Auto-calibration was carried out using an optimization tool, which provides an objective way of tuning the model. Both approaches yielded similar performance, suggesting that although the basis for constructing the models is different, both can successfully reproduce M. edulis growth. The good performance of both models in different environments achieved by adjusting a single parameter, XK, highlights the potential of these models for (1) producing prospective analysis of mussel growth and (2) investigating mussel feeding response in different ecosystems. Finally, we emphasize that the convergence of two different modeling approaches via calibration of XK, indicates the importance of the feeding behavior and local trophic conditions for bivalve growth performance. Consequently, further investigations should be conducted to explore the relationship of XK to environmental variables and/or to the sophistication of the functional response to food availability with the final objective of creating a general model that can be applied to different ecosystems without the need for calibration.
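The calibration step can be sketched with a toy model. The Michaelis-Menten form f = X/(X + XK) is the standard functional response implied by a half-saturation coefficient; the growth model, the food series, and the grid search standing in for the optimization tool are all illustrative assumptions:

```python
import numpy as np

def ingestion(X, XK):
    """Michaelis-Menten (Holling type II) functional response: f = X/(X+XK),
    where XK is the half-saturation coefficient being calibrated."""
    return X / (X + XK)

def simulate_growth(food, XK, G=1.0):
    """Toy growth model (an illustrative assumption, not the SFG or DEB
    bioenergetics): growth increment proportional to the ingestion response."""
    return G * np.cumsum(ingestion(food, XK))

def calibrate_XK(food, observed, grid):
    """Calibrate XK by minimizing squared error against observations, a
    stand-in for the optimization tool used in the study."""
    errs = [np.sum((simulate_growth(food, xk) - observed) ** 2) for xk in grid]
    return grid[int(np.argmin(errs))]

rng = np.random.default_rng(3)
food = np.clip(1.0 + 0.05 * rng.standard_normal(200).cumsum(), 0.1, None)
observed = simulate_growth(food, XK=0.8) + 0.01 * rng.standard_normal(200)
grid = np.linspace(0.1, 2.0, 96)
best = calibrate_XK(food, observed, grid)
```

With noisy synthetic observations generated at XK = 0.8, the calibration recovers the value to within the grid resolution, mirroring how a single tuned parameter can match growth across datasets.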
A Plug-and-Play Human-Centered Virtual TEDS Architecture for the Web of Things.
Hernández-Rojas, Dixys L; Fernández-Caramés, Tiago M; Fraga-Lamas, Paula; Escudero, Carlos J
2018-06-27
This article presents a Virtual Transducer Electronic Data Sheet (VTEDS)-based framework for the development of intelligent sensor nodes with plug-and-play capabilities in order to contribute to the evolution of the Internet of Things (IoT) toward the Web of Things (WoT). It makes use of new lightweight protocols that allow sensors to self-describe, auto-calibrate, and auto-register. Such protocols enable the development of novel IoT solutions while guaranteeing low latency, low power consumption, and the required Quality of Service (QoS). Thanks to the developed human-centered tools, it is possible to dynamically configure and modify IoT device firmware, managing the active transducers and their communication protocols in an easy and intuitive way, without requiring any prior programming knowledge. In order to evaluate the performance of the system, it was tested with Bluetooth Low Energy (BLE) and Ethernet-based smart sensors in different scenarios. Specifically, user experience was quantified empirically (i.e., how fast the system shows collected data to a user was measured). The obtained results show that the proposed VTED architecture is very fast, with some smart sensors (located in Europe) able to self-register and self-configure in a remote cloud (in South America) in less than 3 s and to display data to remote users in less than 2 s.
SmartAQnet: remote and in-situ sensing of urban air quality
NASA Astrophysics Data System (ADS)
Budde, Matthias; Riedel, Till; Beigl, Michael; Schäfer, Klaus; Emeis, Stefan; Cyrys, Josef; Schnelle-Kreis, Jürgen; Philipp, Andreas; Ziegler, Volker; Grimm, Hans; Gratza, Thomas
2017-10-01
Air quality and the associated subjective and health-related quality of life are among the important topics of urban life in our time. However, it is very difficult for many cities to take measures to accommodate today's needs concerning, e.g., mobility, housing and work, because consistent fine-granular data and information on causal chains are largely missing. This has the potential to change, as today both large-scale basic data and promising new measuring approaches are becoming available. The project "SmartAQnet", funded by the German Federal Ministry of Transport and Digital Infrastructure (BMVI), is based on a pragmatic, data-driven approach, which for the first time combines existing data sets with a networked mobile measurement strategy in the urban space. By connecting open data, such as weather data or development plans, remote sensing of influencing factors, and new mobile measurement approaches, such as participatory sensing with low-cost sensor technology, "scientific scouts" (autonomous, mobile smart dust measurement devices that are auto-calibrated to a high-quality reference instrument within an intelligent monitoring network) and demand-oriented measurements by light-weight UAVs, a novel measuring and analysis concept is created within the model region of Augsburg, Germany. In addition to novel analytics, a prototypical technology stack is planned which, through modern analytics methods and Big Data and IoT technologies, enables application in a scalable way.
Improvement of the R-SWAT-FME framework to support multiple variables and multi-objective functions
Wu, Yiping; Liu, Shu-Guang
2014-01-01
Application of numerical models is a common practice in the environmental field for investigation and prediction of natural and anthropogenic processes. However, process knowledge, parameter identifiability, and sensitivity and uncertainty analyses are still a challenge for large and complex mathematical models such as the hydrological/water quality model, Soil and Water Assessment Tool (SWAT). In this study, the previously developed R programming language-SWAT-Flexible Modeling Environment (R-SWAT-FME) was improved to support multiple model variables and objectives at multiple time steps (i.e., daily, monthly, and annually). This expansion is significant because there is usually more than one variable (e.g., water, nutrients, and pesticides) of interest for environmental models like SWAT. To further facilitate its easy use, we also simplified its application requirements without compromising its merits, such as the user-friendly interface. To evaluate the performance of the improved framework, we used a case study focusing on both streamflow and nitrate nitrogen in the Upper Iowa River Basin (above Marengo) in the United States. Results indicated that the R-SWAT-FME performs well and is comparable to the built-in auto-calibration tool in multi-objective model calibration. Overall, the enhanced R-SWAT-FME can be useful for the SWAT community, and the methods we used can also be valuable for wrapping potential R packages with other environmental models.
NASA Astrophysics Data System (ADS)
Celik, Koray
This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, proving the feasibility of a monocular camera as the sole proximity sensing, object avoidance, mapping, and path-planning mechanism to fly and navigate small- to medium-scale unmanned rotary-wing aircraft autonomously. The range measurement strategy is scalable, self-calibrating, and indoor-outdoor capable, and has been biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats); it is designed for operation in previously unknown, GPS-denied environments. The thesis proposes novel electronics, aircraft, aircraft systems, and procedures and algorithms that come together to form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking human-pilot-like judgement. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem in a small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials for experimental validation. Despite the emphasis on miniature robotic aircraft, this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.
McGrath, Timothy; Fineman, Richard; Stirling, Leia
2018-06-08
Inertial measurement units (IMUs) have been demonstrated to reliably measure human joint angles—an essential quantity in the study of biomechanics. However, most previous literature proposed IMU-based joint angle measurement systems that required manual alignment or prescribed calibration motions. This paper presents a simple, physically intuitive method for IMU-based measurement of the knee flexion/extension angle in gait without requiring alignment or discrete calibration, based on computationally efficient and easy-to-implement Principal Component Analysis (PCA). The method is compared against an optical motion capture knee flexion/extension angle modeled through OpenSim. The method is evaluated using both measured and simulated IMU data in an observational study (n = 15), with an absolute root-mean-square error (RMSE) of 9.24° and a zero-mean RMSE of 3.49°. Variation in error across subjects was found, made apparent by a larger subject population than previous literature considers. Finally, the paper presents an explanatory model of RMSE as a function of IMU mounting location. The observational data suggest that the RMSE of the method is a function of thigh IMU perturbation and axis estimation quality. However, the effect size of these parameters is small in comparison to potential gains from improved IMU orientation estimation. Results also highlight the need to set relevant datums from which to interpret joint angles for both truth references and estimated data.
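The core PCA idea, that the dominant principal component of the angular-rate samples is the joint's flexion/extension axis, can be sketched on synthetic data. This is a single-IMU illustration of the axis-estimation step only, not the paper's full two-IMU knee-angle pipeline:

```python
import numpy as np

def flexion_axis(gyro):
    """Estimate the dominant joint rotation axis from angular-rate samples
    as the first principal component (via SVD of the mean-centered data)."""
    g = gyro - gyro.mean(axis=0)
    _, _, vt = np.linalg.svd(g, full_matrices=False)
    return vt[0]

# Synthetic gait-like record: rotation almost entirely about one fixed,
# unknown axis, plus sensor noise.
rng = np.random.default_rng(2)
true_axis = np.array([0.0, 1.0, 0.0])
t = np.linspace(0.0, 10.0, 1000)
rate = np.sin(2 * np.pi * t)                      # flexion/extension rate
gyro = np.outer(rate, true_axis) + 0.01 * rng.standard_normal((1000, 3))
axis = flexion_axis(gyro)
angle = np.cumsum(gyro @ axis) * (t[1] - t[0])    # integrate projected rate
```

The recovered axis matches the true one up to sign, a known ambiguity of PCA that any practical implementation must resolve before interpreting the angle.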
NASA Astrophysics Data System (ADS)
Hozumi, Naohiro; Nishioka, Koji; Suematsu, Takeshi; Murakami, Yoshinobu; Nagao, Masayuki; Sakata, Hiroshi
The feasibility of a self-healing insulation system was studied. A silicone rubber without filler was mounted on a glass substrate with a needle electrode, and an AC voltage of 4 kV rms was applied. The voltage was cut off once the tree had propagated to a length of 150 micrometers. After cut-off, the partial discharge inception voltage was observed periodically. It initially dropped to as low as 2 kV, but gradually increased with time and finally exceeded the tree inception voltage (4 kV) after 30 - 60 hours. Optical microscopy also showed that the tree gradually disappeared in parallel with the recovery of the partial discharge inception voltage. The same phenomenon was observed even when a 1 kV AC voltage was applied continuously during the recovery process. A simulation using a needle-shaped void was performed in order to clarify the mechanism of the self-healing effect. The tip of the needle-shaped void gradually became wet with a liquid material, presumably the result of "bleed-out" of the low-molecular-weight component of the rubber. The tip of the void was finally filled with the liquid, but the rest of the needle-shaped void remained unfilled. For this type of tree, the results suggest that the self-healing effect can be expected if the tree diameter does not exceed ca. 5 micrometers.
The Role of Visual Area V4 in the Discrimination of Partially Occluded Shapes
Kosai, Yoshito; El-Shamayleh, Yasmine; Fyall, Amber M.
2014-01-01
The primate brain successfully recognizes objects, even when they are partially occluded. To begin to elucidate the neural substrates of this perceptual capacity, we measured the responses of shape-selective neurons in visual area V4 while monkeys discriminated pairs of shapes under varying degrees of occlusion. We found that neuronal shape selectivity always decreased with increasing occlusion level, with some neurons being notably more robust to occlusion than others. The responses of neurons that maintained their selectivity across a wider range of occlusion levels were often sufficiently sensitive to support behavioral performance. Many of these same neurons were distinctively selective for the curvature of local boundary features and their shape tuning was well fit by a model of boundary curvature (curvature-tuned neurons). A significant subset of V4 neurons also signaled the animal's upcoming behavioral choices; these decision signals had short onset latencies that emerged progressively later for higher occlusion levels. The time course of the decision signals in V4 paralleled that of shape selectivity in curvature-tuned neurons: shape selectivity in curvature-tuned neurons, but not others, emerged earlier than the decision signals. These findings provide evidence for the involvement of contour-based mechanisms in the segmentation and recognition of partially occluded objects, consistent with psychophysical theory. Furthermore, they suggest that area V4 participates in the representation of the relevant sensory signals and the generation of decision signals underlying discrimination. PMID:24948811
Partial Discharge Monitoring on Metal-Enclosed Switchgear with Distributed Non-Contact Sensors.
Zhang, Chongxing; Dong, Ming; Ren, Ming; Huang, Wenguang; Zhou, Jierui; Gao, Xuze; Albarracín, Ricardo
2018-02-11
Metal-enclosed switchgear, which are widely used in the distribution of electrical energy, play an important role in power distribution networks. Their safe operation is directly related to the reliability of the power system as well as the power quality on the consumer side. Partial discharge (PD) detection is an effective way to identify potential faults and can be utilized for insulation diagnosis of metal-enclosed switchgear. The transient earth voltage (TEV) method, an effective non-intrusive technique, has substantial engineering application value for estimating the insulation condition of switchgear. However, the practical effectiveness of TEV detection has been unsatisfactory, owing to the lack of an application methodology grounded in sufficient technical understanding and analysis. This paper proposes an innovative online PD detection system and a corresponding application strategy based on an intelligent feedback distributed TEV wireless sensor network consisting of sensing, communication, and diagnosis layers. In the proposed system, the TEV signal or status data are wirelessly transmitted to the terminal following low-energy signal preprocessing and acquisition by TEV sensors. A central server then analyzes the correlation of the uploaded data and assigns a fault warning level according to the quantity, trend, parallel analysis, and phase-resolved partial discharge (PRPD) pattern recognition. In this way, a TEV detection system and strategy with distributed acquisition, unitized fault warning, and centralized diagnosis is realized. The proposed system has positive significance for reducing the fault rate of medium-voltage switchgear and improving its operation and maintenance level.
Jenkins, Michael; Grubert, Anna; Eimer, Martin
2017-11-01
It is generally assumed that during search for targets defined by a feature conjunction, attention is allocated sequentially to individual objects. We tested this hypothesis by tracking the time course of attentional processing biases with the N2pc component in tasks where observers searched for two targets defined by a colour/shape conjunction. In Experiment 1, two displays presented in rapid succession (100 ms or 10 ms SOA) each contained a target and a colour-matching or shape-matching distractor on opposite sides. Target objects in both displays elicited N2pc components of similar size that overlapped in time when the SOA was 10 ms, suggesting that attention was allocated in parallel to both targets. Analogous results were found in Experiment 2, where targets and partially matching distractors were both accompanied by an object without target-matching features. Colour-matching and shape-matching distractors also elicited N2pc components, and the target N2pc was initially identical to the sum of the two distractor N2pcs, suggesting that the initial phase of attentional object selection was guided independently by feature templates for target colour and shape. Beyond 230 ms after display onset, the target N2pc became superadditive, indicating that attentional selection processes now started to be sensitive to the presence of feature conjunctions. Results show that independent attentional selection processes can be activated in parallel by two target objects in situations where these objects are defined by a feature conjunction.
Adsorption and dissociation of molecular oxygen on α-Pu (0 2 0) surface: A density functional study
NASA Astrophysics Data System (ADS)
Wang, Jianguang; Ray, Asok K.
2011-09-01
Molecular and dissociative oxygen adsorptions on the α-Pu (0 2 0) surface have been systematically studied using the full-potential linearized augmented-plane-wave plus local orbitals (FP-LAPW+lo) basis method and the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional. Chemisorption energies have been optimized with respect to the distance of the admolecule from the Pu surface and the O-O bond length, for four adsorption sites and three approaches of the O2 admolecule to the (0 2 0) surface. Chemisorption energies have been calculated at the scalar relativistic level with no spin-orbit coupling (NSOC) and at the fully relativistic level with spin-orbit coupling (SOC). Dissociative adsorption is found for the two horizontal approaches (O2 parallel to the surface and perpendicular/parallel to a lattice vector). The Hor2 approach (O2 parallel to the surface and perpendicular to a lattice vector) at the one-fold top site is the most stable adsorption configuration, with chemisorption energies of 8.048 and 8.415 eV for the NSOC and SOC cases, respectively, and an O-O separation of 3.70 Å. Molecular adsorption occurs for the Vert approach (O2 vertical to the surface) at each adsorption site. The calculated work functions and net spin magnetic moments, respectively, increase and decrease in all cases upon chemisorption compared to the clean surface. The partial charges inside the muffin-tins, the difference charge density distributions, and the local density of states have been used to investigate the Pu-admolecule electronic structures and bonding mechanisms.
A domain specific language for performance portable molecular dynamics algorithms
NASA Astrophysics Data System (ADS)
Saunders, William Robert; Grant, James; Müller, Eike Hermann
2018-03-01
Developers of Molecular Dynamics (MD) codes face significant challenges when adapting existing simulation packages to new hardware. In a continuously diversifying hardware landscape it becomes increasingly difficult for scientists to be experts both in their own domain (physics/chemistry/biology) and specialists in the low level parallelisation and optimisation of their codes. To address this challenge, we describe a "Separation of Concerns" approach for the development of parallel and optimised MD codes: the science specialist writes code at a high abstraction level in a domain specific language (DSL), which is then translated into efficient computer code by a scientific programmer. In a related context, an abstraction for the solution of partial differential equations with grid based methods has recently been implemented in the (Py)OP2 library. Inspired by this approach, we develop a Python code generation system for molecular dynamics simulations on different parallel architectures, including massively parallel distributed memory systems and GPUs. We demonstrate the efficiency of the auto-generated code by studying its performance and scalability on different hardware and compare it to other state-of-the-art simulation packages. With growing data volumes the extraction of physically meaningful information from the simulation becomes increasingly challenging and requires equally efficient implementations. A particular advantage of our approach is the easy expression of such analysis algorithms. We consider two popular methods for deducing the crystalline structure of a material from the local environment of each atom, show how they can be expressed in our abstraction and implement them in the code generation framework.
Miao, Jun; Wong, Wilbur C K; Narayan, Sreenath; Wilson, David L
2011-11-01
Partially parallel imaging (PPI) greatly accelerates MR imaging by using surface coil arrays and under-sampling k-space. However, the reduction factor (R) in PPI is theoretically constrained by the number of coils (NC). A symmetrically shaped kernel is typically used, but this often prevents even the theoretically possible R from being achieved. Here, the authors propose a kernel design method to accelerate PPI faster than R = NC. K-space data demonstrate an anisotropic pattern that is correlated with the object itself and with the asymmetry of the coil sensitivity profile, which is caused by coil placement and B1 inhomogeneity. From spatial analysis theory, reconstruction of such a pattern is best achieved by a signal-dependent, anisotropically shaped kernel. As a result, the authors propose the use of asymmetric kernels to improve k-space reconstruction. The authors fit a bivariate Gaussian function to the local signal magnitude of each coil, then threshold this function to extract the kernel elements. A perceptual difference model (Case-PDM) was employed to quantitatively evaluate image quality. An MR phantom experiment showed that k-space anisotropy increased as a function of magnetic field strength. The authors tested a K-spAce Reconstruction with AnisOtropic KErnel support ("KARAOKE") algorithm with both MR phantom and in vivo data sets, and compared the reconstructions to those produced by GRAPPA, a popular PPI reconstruction method. By exploiting k-space anisotropy, KARAOKE was able to better preserve edges, which is particularly useful for cardiac imaging and motion correction, while GRAPPA failed at high R near or exceeding NC. KARAOKE performed comparably to GRAPPA at low Rs. As a rule of thumb, KARAOKE reconstruction should always be used for higher-quality k-space reconstruction, particularly when PPI data are acquired at high Rs and/or high field strength.
Miao, Jun; Wong, Wilbur C. K.; Narayan, Sreenath; Wilson, David L.
2011-01-01
Purpose: Partially parallel imaging (PPI) greatly accelerates MR imaging by using surface coil arrays and under-sampling k-space. However, the reduction factor (R) in PPI is theoretically constrained by the number of coils (NC). A symmetrically shaped kernel is typically used, but this often prevents even the theoretically possible R from being achieved. Here, the authors propose a kernel design method to accelerate PPI faster than R = NC. Methods: K-space data demonstrate an anisotropic pattern that is correlated with the object itself and with the asymmetry of the coil sensitivity profile, which is caused by coil placement and B1 inhomogeneity. From spatial analysis theory, reconstruction of such a pattern is best achieved by a signal-dependent, anisotropically shaped kernel. As a result, the authors propose the use of asymmetric kernels to improve k-space reconstruction. The authors fit a bivariate Gaussian function to the local signal magnitude of each coil, then threshold this function to extract the kernel elements. A perceptual difference model (Case-PDM) was employed to quantitatively evaluate image quality. Results: An MR phantom experiment showed that k-space anisotropy increased as a function of magnetic field strength. The authors tested a K-spAce Reconstruction with AnisOtropic KErnel support (“KARAOKE”) algorithm with both MR phantom and in vivo data sets, and compared the reconstructions to those produced by GRAPPA, a popular PPI reconstruction method. By exploiting k-space anisotropy, KARAOKE was able to better preserve edges, which is particularly useful for cardiac imaging and motion correction, while GRAPPA failed at high R near or exceeding NC. KARAOKE performed comparably to GRAPPA at low Rs. Conclusions: As a rule of thumb, KARAOKE reconstruction should always be used for higher-quality k-space reconstruction, particularly when PPI data are acquired at high Rs and/or high field strength. PMID:22047378
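The kernel-extraction step described above (fit a bivariate Gaussian to the local k-space magnitude, then threshold it to select the kernel elements) can be sketched as follows. This is a simplified stand-in, not the authors' implementation: the Gaussian is matched by second moments rather than fitted, the magnitude pattern is synthetic, and the threshold value is an assumption.

```python
import numpy as np

# Synthetic anisotropic k-space magnitude on a 17x17 neighborhood: wider
# along kx than ky, mimicking the anisotropy the paper exploits.
ky, kx = np.mgrid[-8:9, -8:9]
mag = np.exp(-(kx**2 / (2 * 6.0**2) + ky**2 / (2 * 2.0**2)))

# Match a bivariate Gaussian by second moments of the magnitude pattern
# (a moment-based surrogate for the paper's Gaussian fit).
w = mag / mag.sum()
mx, my = (w * kx).sum(), (w * ky).sum()
sxx = (w * (kx - mx) ** 2).sum()
syy = (w * (ky - my) ** 2).sum()
sxy = (w * (kx - mx) * (ky - my)).sum()

# Evaluate the Gaussian model on the grid and threshold it to obtain the
# anisotropic kernel support mask (threshold 0.5 is an assumed choice).
det = sxx * syy - sxy**2
quad = (syy * (kx - mx) ** 2 - 2 * sxy * (kx - mx) * (ky - my)
        + sxx * (ky - my) ** 2) / det
model = np.exp(-0.5 * quad)
support = model > 0.5

n_x = support[8, :].sum()   # support extent along kx (the wider direction)
n_y = support[:, 8].sum()   # support extent along ky
```

Because the magnitude is wider along kx, the resulting support mask is elongated in that direction (n_x exceeds n_y), which is the qualitative behavior a signal-dependent anisotropic kernel is meant to capture.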
Krzysztof, Szwed; Wojciech, Pawliszak; Zbigniew, Serafin; Mariusz, Kowalewski; Remigiusz, Tomczyk; Damian, Perlinski; Magdalena, Szwed; Marta, Tomaszewska; Lech, Anisimowicz; Alina, Borkowska
2017-07-10
Neurological injuries remain a major concern following coronary artery bypass grafting (CABG), offsetting the survival benefit of CABG over percutaneous coronary interventions. Among numerous efforts to combat this issue is the development of off-pump CABG (OPCABG), which obviates the need for extracorporeal circulation and is associated with improved neurological outcomes. The objective of this study is to examine whether the neuroprotective effect of OPCABG can be further enhanced by the use of two state-of-the-art operating techniques. In this randomised, controlled, investigator- and patient-blinded, single-centre superiority trial with three parallel arms, a total of 360 patients will be recruited. They will be allocated in a 1:1:1 ratio to two treatment arms and one control arm. Treatment arms undergoing either aortic no-touch OPCABG or OPCABG with partial clamp applying carbon dioxide surgical field flooding will be compared against a control arm undergoing OPCABG with partial clamp. The primary endpoint will be the appearance of new lesions on control brain MRI 3 days after surgery. Secondary endpoints will include the prevalence of new focal neurological deficits in the first 7 days after surgery, the occurrence of postoperative cognitive dysfunction at either 1 week or 3 months after surgery, and the incidence of delirium in the first 7 days after surgery. Data will be analysed on intention-to-treat principles and on a per-protocol basis. Ethical approval has been granted for this study. Results will be disseminated through peer-reviewed media. Trial registration: NCT03074604; Pre-results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farmer, W. A.; Koning, J. M.; Strozzi, D. J.
Here, we present radiation-hydrodynamic simulations of self-generated magnetic field in a hohlraum, which show an increased temperature in large regions of the underdense fill. Non-parallel gradients in electron density and temperature in a laser-heated plasma give rise to a self-generated field by the “Biermann battery” mechanism. Here, HYDRA simulations of three hohlraum designs on the National Ignition Facility are reported, which use a partial magnetohydrodynamic (MHD) description that includes the self-generated source term, resistive dissipation, and advection of the field due to both the plasma flow and the Nernst term. Anisotropic electron heat conduction parallel and perpendicular to the field is included, but not the Righi-Leduc heat flux. The field strength is too small to compete significantly with plasma pressure, but affects plasma conditions by reducing electron heat conduction perpendicular to the field. Significant reductions in heat flux can occur, especially for high-Z plasma, at modest values of the Hall parameter, Ω_e τ_ei ≲ 1, where Ω_e = eB/(m_e c) and τ_ei is the electron-ion collision time. The inclusion of MHD in the simulations leads to 1 keV hotter electron temperatures in the laser entrance hole and high-Z wall blowoff, which reduces inverse-bremsstrahlung absorption of the laser beam. This improves propagation of the inner beams pointed at the hohlraum equator, resulting in a symmetry shift of the resulting capsule implosion towards a more prolate shape. The time of peak x-ray production in the capsule shifts later by only 70 ps (within experimental uncertainty), but a decomposition of the hotspot shape into Legendre moments indicates a shift of P_2/P_0 by ~20%. As a result, this indicates that MHD cannot explain why simulated x-ray drive exceeds measured levels, but may be partially responsible for failures to correctly model the symmetry.
Simulation of self-generated magnetic fields in an inertial fusion hohlraum environment
Farmer, W. A.; Koning, J. M.; Strozzi, D. J.; ...
2017-05-09
Here, we present radiation-hydrodynamic simulations of self-generated magnetic field in a hohlraum, which show an increased temperature in large regions of the underdense fill. Non-parallel gradients in electron density and temperature in a laser-heated plasma give rise to a self-generated field by the “Biermann battery” mechanism. Here, HYDRA simulations of three hohlraum designs on the National Ignition Facility are reported, which use a partial magnetohydrodynamic (MHD) description that includes the self-generated source term, resistive dissipation, and advection of the field due to both the plasma flow and the Nernst term. Anisotropic electron heat conduction parallel and perpendicular to the field is included, but not the Righi-Leduc heat flux. The field strength is too small to compete significantly with plasma pressure, but affects plasma conditions by reducing electron heat conduction perpendicular to the field. Significant reductions in heat flux can occur, especially for high-Z plasma, at modest values of the Hall parameter, Ω_e τ_ei ≲ 1, where Ω_e = eB/(m_e c) and τ_ei is the electron-ion collision time. The inclusion of MHD in the simulations leads to 1 keV hotter electron temperatures in the laser entrance hole and high-Z wall blowoff, which reduces inverse-bremsstrahlung absorption of the laser beam. This improves propagation of the inner beams pointed at the hohlraum equator, resulting in a symmetry shift of the resulting capsule implosion towards a more prolate shape. The time of peak x-ray production in the capsule shifts later by only 70 ps (within experimental uncertainty), but a decomposition of the hotspot shape into Legendre moments indicates a shift of P_2/P_0 by ~20%. As a result, this indicates that MHD cannot explain why simulated x-ray drive exceeds measured levels, but may be partially responsible for failures to correctly model the symmetry.
Pişkin, Evangelia; Engin, Atilla; Özer, Füsun; Yüncü, Eren; Doğan, Şükrü Anıl; Togan, İnci
2013-01-01
In the present study, to contribute to the understanding of the evolutionary history of sheep, the mitochondrial (mt) DNA polymorphisms occurring in modern Turkish native domestic (n = 628), modern wild (Ovis gmelinii anatolica) (n = 30) and ancient domestic sheep from Oylum Höyük in Kilis (n = 33) were examined comparatively with the accumulated data in the literature. The lengths (75 bp/76 bp) of the second and subsequent repeat units of the mtDNA control region (CR) sequences differentiated the five haplogroups (HPGs) observed in the domestic sheep into two genetic clusters as was already implied by other mtDNA markers: the first cluster being composed of HPGs A, B, D and the second cluster harboring HPGs C, E. To manifest genetic relatedness between wild Ovis gmelinii and domestic sheep haplogroups, their partial cytochrome B sequences were examined together on a median-joining network. The two parallel but wider aforementioned clusters were observed also on the network of Ovis gmelinii individuals, within which domestic haplogroups were embedded. The Ovis gmelinii wilds of the present day appeared to be distributed on two partially overlapping geographic areas parallel to the genetic clusters that they belong to (the first cluster being in the western part of the overall distribution). Thus, the analyses suggested that the domestic sheep may be the products of two maternally distinct ancestral Ovis gmelinii populations. Furthermore, Ovis gmelinii anatolica individuals exhibited a haplotype of HPG A (n = 22) and another haplotype (n = 8) from the second cluster which was not observed among the modern domestic sheep. HPG E, with the newly observed members (n = 11), showed signs of expansion. Studies of ancient and modern mtDNA suggest that HPG C frequency increased in the Southeast Anatolia from 6% to 22% some time after the beginning of the Hellenistic period, 500 years Before Common Era (BCE). PMID:24349158
Demirci, Sevgin; Koban Baştanlar, Evren; Dağtaş, Nihan Dilşad; Pişkin, Evangelia; Engin, Atilla; Ozer, Füsun; Yüncü, Eren; Doğan, Sükrü Anıl; Togan, Inci
2013-01-01
In the present study, to contribute to the understanding of the evolutionary history of sheep, the mitochondrial (mt) DNA polymorphisms occurring in modern Turkish native domestic (n = 628), modern wild (Ovis gmelinii anatolica) (n = 30) and ancient domestic sheep from Oylum Höyük in Kilis (n = 33) were examined comparatively with the accumulated data in the literature. The lengths (75 bp/76 bp) of the second and subsequent repeat units of the mtDNA control region (CR) sequences differentiated the five haplogroups (HPGs) observed in the domestic sheep into two genetic clusters as was already implied by other mtDNA markers: the first cluster being composed of HPGs A, B, D and the second cluster harboring HPGs C, E. To manifest genetic relatedness between wild Ovis gmelinii and domestic sheep haplogroups, their partial cytochrome B sequences were examined together on a median-joining network. The two parallel but wider aforementioned clusters were observed also on the network of Ovis gmelinii individuals, within which domestic haplogroups were embedded. The Ovis gmelinii wilds of the present day appeared to be distributed on two partially overlapping geographic areas parallel to the genetic clusters that they belong to (the first cluster being in the western part of the overall distribution). Thus, the analyses suggested that the domestic sheep may be the products of two maternally distinct ancestral Ovis gmelinii populations. Furthermore, Ovis gmelinii anatolica individuals exhibited a haplotype of HPG A (n = 22) and another haplotype (n = 8) from the second cluster which was not observed among the modern domestic sheep. HPG E, with the newly observed members (n = 11), showed signs of expansion. Studies of ancient and modern mtDNA suggest that HPG C frequency increased in the Southeast Anatolia from 6% to 22% some time after the beginning of the Hellenistic period, 500 years Before Common Era (BCE).
Simulation of self-generated magnetic fields in an inertial fusion hohlraum environment
NASA Astrophysics Data System (ADS)
Farmer, W. A.; Koning, J. M.; Strozzi, D. J.; Hinkel, D. E.; Berzak Hopkins, L. F.; Jones, O. S.; Rosen, M. D.
2017-05-01
We present radiation-hydrodynamic simulations of self-generated magnetic field in a hohlraum, which show an increased temperature in large regions of the underdense fill. Non-parallel gradients in electron density and temperature in a laser-heated plasma give rise to a self-generated field by the "Biermann battery" mechanism. Here, HYDRA simulations of three hohlraum designs on the National Ignition Facility are reported, which use a partial magnetohydrodynamic (MHD) description that includes the self-generated source term, resistive dissipation, and advection of the field due to both the plasma flow and the Nernst term. Anisotropic electron heat conduction parallel and perpendicular to the field is included, but not the Righi-Leduc heat flux. The field strength is too small to compete significantly with plasma pressure, but affects plasma conditions by reducing electron heat conduction perpendicular to the field. Significant reductions in heat flux can occur, especially for high-Z plasma, at modest values of the Hall parameter, Ω_e τ_ei ≲ 1, where Ω_e = eB/(m_e c) and τ_ei is the electron-ion collision time. The inclusion of MHD in the simulations leads to 1 keV hotter electron temperatures in the laser entrance hole and high-Z wall blowoff, which reduces inverse-bremsstrahlung absorption of the laser beam. This improves propagation of the inner beams pointed at the hohlraum equator, resulting in a symmetry shift of the resulting capsule implosion towards a more prolate shape. The time of peak x-ray production in the capsule shifts later by only 70 ps (within experimental uncertainty), but a decomposition of the hotspot shape into Legendre moments indicates a shift of P_2/P_0 by ~20%. This indicates that MHD cannot explain why simulated x-ray drive exceeds measured levels, but may be partially responsible for failures to correctly model the symmetry.
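As a numerical aside, the Hall parameter quoted in the abstract can be evaluated directly. The abstract's Ω_e = eB/(m_e c) is written in Gaussian units; in SI units the electron gyrofrequency is ω_ce = eB/m_e, used below. The field strength and collision time are assumed example values for illustration, not taken from the simulations.

```python
import math

# Physical constants (SI).
e = 1.602176634e-19        # elementary charge, C
m_e = 9.1093837015e-31     # electron mass, kg

# Assumed example plasma parameters (not from the paper):
B = 5.0                    # magnetic field, T
tau_ei = 1.0e-12           # electron-ion collision time, s

omega_ce = e * B / m_e     # electron gyrofrequency, rad/s
hall = omega_ce * tau_ei   # dimensionless Hall parameter Ω_e * τ_ei
```

For these assumed values the Hall parameter comes out of order unity, the "modest" regime the abstract identifies as sufficient to significantly reduce perpendicular heat flux in high-Z plasma.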
High Temperature Fusion Reactor Cooling Using Brayton Cycle Based Partial Energy Conversion
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.; Sawicki, Jerzy T.
2003-01-01
For some future space power systems using high temperature nuclear heat sources most of the output energy will be used in other than electrical form, and only a fraction of the total thermal energy generated will need to be converted to electrical work. The paper describes the conceptual design of such a partial energy conversion system, consisting of a high temperature fusion reactor operating in series with a high temperature radiator and in parallel with dual closed cycle gas turbine (CCGT) power systems, also referred to as closed Brayton cycle (CBC) systems, which are supplied with a fraction of the reactor thermal energy for conversion to electric power. Most of the fusion reactor's output is in the form of charged plasma which is expanded through a magnetic nozzle of the interplanetary propulsion system. Reactor heat energy is ducted to the high temperature series radiator utilizing the electric power generated to drive a helium gas circulation fan. In addition to discussing the thermodynamic aspects of the system design the authors include a brief overview of the gas turbine and fan rotor-dynamics and proposed bearing support technology along with performance characteristics of the three phase AC electric power generator and fan drive motor.
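The "partial energy conversion" idea above reduces to a simple energy balance: only a fraction of the reactor's thermal power feeds the closed Brayton cycle units, and only that fraction is converted to electricity, with the remainder used in non-electric form or rejected as heat. A minimal sketch with assumed illustrative numbers (not taken from the paper):

```python
# Energy-balance sketch of partial energy conversion. All values are
# assumed for illustration.
Q_reactor = 100.0e6          # W, total reactor thermal output (assumed)
f_cbc = 0.10                 # fraction routed to the Brayton converters (assumed)
eta_cbc = 0.30               # CBC thermal-to-electric efficiency (assumed)

P_electric = f_cbc * Q_reactor * eta_cbc          # electric power produced
Q_cbc_reject = f_cbc * Q_reactor - P_electric     # CBC waste heat to the radiator
P_nonelectric = (1.0 - f_cbc) * Q_reactor         # power used in non-electric form
```

The point of the arrangement is visible in the numbers: the radiator and turbomachinery are sized for only the small converted fraction, while most of the reactor output bypasses conversion entirely (here, as plasma expanded through the magnetic nozzle).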
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghafarian, M.; Ariaei, A., E-mail: ariaei@eng.ui.ac.ir
The free vibration analysis of a system of multiple rotating nanobeams is presented, applying the nonlocal Eringen elasticity theory. Multiple-nanobeam systems are of great importance in nano-optomechanical applications. At the nanoscale, nonlocal effects become non-negligible. According to the nonlocal Euler-Bernoulli beam theory, the governing partial differential equations are derived by incorporating the nonlocal scale effects. Assuming a structure of n parallel nanobeams, the vibration of the system is described by a coupled set of n partial differential equations. The method involves a change of variables to uncouple the equations, and the differential transform method as an efficient mathematical technique to solve the nonlocal governing differential equations. A number of parametric studies are then conducted to assess the effect of the nonlocal scaling parameter, rotational speed, boundary conditions, hub radius, and the stiffness coefficients of the elastic interlayer media on the vibration behavior of the coupled rotating multiple-carbon-nanotube-beam system. It is revealed that the bending vibration of the system is significantly influenced by the rotational speed, elastic media, and the nonlocal scaling parameters. The model is validated by comparing the results with those available in the literature. The natural frequencies are in reasonably good agreement with the reported results.
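The uncoupling change of variables mentioned above can be illustrated for the interlayer coupling alone: the elastic interlayer springs contribute a tridiagonal coupling matrix, and its eigenvectors define modal coordinates in which the n beam equations separate. A minimal sketch with assumed values (the full problem also carries nonlocal and rotational terms, omitted here):

```python
import numpy as np

# Assumed illustrative parameters, not from the paper.
n = 4                      # number of parallel nanobeams
k = 2.0                    # interlayer spring stiffness (arbitrary units)

# Tridiagonal coupling matrix for a chain of n beams joined by springs,
# with free outer layers (a graph-Laplacian-like structure).
K = np.zeros((n, n))
for i in range(n):
    if i > 0:
        K[i, i] += k
        K[i, i - 1] -= k
    if i < n - 1:
        K[i, i] += k
        K[i, i + 1] -= k

# The orthogonal modal transformation diagonalizes K, so in modal
# coordinates each equation carries a single modal stiffness lam[j].
lam, V = np.linalg.eigh(K)
K_modal = V.T @ K @ V
off_diag = np.abs(K_modal - np.diag(np.diag(K_modal))).max()
```

After this transformation, each modal equation is an independent single-beam problem (with an added modal stiffness), which is what makes a beam-by-beam solution technique such as the differential transform method applicable.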
Mariano, Marina; Rodríguez, Francisco J.; Romero-Gomez, Pablo; Kozyreff, Gregory; Martorell, Jordi
2014-01-01
We propose the use of whispering gallery mode coupling in a novel configuration based on implementing a thin-film cell on the backside of an array of parallel fibers. We performed numerical calculations using the parameters of a thin-film organic cell, which demonstrate that light coupling becomes more effective as the angle of the incident light relative to the fiber array normal increases, up to an optimal angle close to 55 deg. At this angle, the power conversion efficiency of the fiber array solar cell we propose becomes 30% larger than that of an equivalent planar cell configuration. We demonstrate that the proposed micro-fiber array solar cell may perform an effective partial tracking of the sun's movement over more than 100 degrees without any mechanical help. In addition, in the event that such a fiber array cell were installed with the adequate orientation on a vertical façade, an optimal photon-to-charge conversion would be reached for sunlight incident at 55 deg with respect to the horizon line, very close to the yearly average position of the sun at a latitude of 40 deg.
High Temperature Fusion Reactor Cooling Using Brayton Cycle Based Partial Energy Conversion
NASA Astrophysics Data System (ADS)
Juhasz, Albert J.; Sawicki, Jerzy T.
2004-02-01
For some future space power systems using high temperature nuclear heat sources most of the output energy will be used in other than electrical form, and only a fraction of the total thermal energy generated will need to be converted to electrical work. The paper describes the conceptual design of such a "partial energy conversion" system, consisting of a high temperature fusion reactor operating in series with a high temperature radiator and in parallel with dual closed cycle gas turbine (CCGT) power systems, also referred to as closed Brayton cycle (CBC) systems, which are supplied with a fraction of the reactor thermal energy for conversion to electric power. Most of the fusion reactor's output is in the form of charged plasma which is expanded through a magnetic nozzle of the interplanetary propulsion system. Reactor heat energy is ducted to the high temperature series radiator utilizing the electric power generated to drive a helium gas circulation fan. In addition to discussing the thermodynamic aspects of the system design the authors include a brief overview of the gas turbine and fan rotor-dynamics and proposed bearing support technology along with performance characteristics of the three phase AC electric power generator and fan drive motor.
Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C
2009-10-05
In the presence of analyte-background interactions and a significant background signal, both second-order multivariate calibration and standard addition are required for successful analyte quantitation achieving the second-order advantage. This report discusses a modified second-order standard addition method, in which the test data matrix is subtracted from the standard addition matrices, and quantitation proceeds via the classical external calibration procedure. It is shown that this novel data processing method allows one to apply not only parallel factor analysis (PARAFAC) and multivariate curve resolution-alternating least-squares (MCR-ALS), but also the recently introduced and more flexible partial least-squares (PLS) models coupled to residual bilinearization (RBL). In particular, the multidimensional variant N-PLS/RBL is shown to produce the best analytical results. The comparison is carried out with the aid of a set of simulated data, as well as two experimental data sets: one aimed at the determination of salicylate in human serum in the presence of naproxen as an additional interferent, and the second one devoted to the analysis of danofloxacin in human serum in the presence of salicylate.
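The subtraction step at the heart of the modified method can be illustrated on a rank-1 toy problem: subtracting the test data matrix from each standard-addition matrix leaves residuals that depend only on the added amounts, so ordinary external calibration applies. The profiles and concentrations below are simulated for illustration; real data would require PARAFAC, MCR-ALS, or PLS/RBL modeling as in the paper.

```python
import numpy as np

# Simulated rank-1 second-order data (e.g. an excitation-emission matrix):
# a unit-norm bilinear analyte signature scaled by concentration.
rng = np.random.default_rng(1)
sx = np.abs(rng.standard_normal(20)); sx /= np.linalg.norm(sx)
sy = np.abs(rng.standard_normal(15)); sy /= np.linalg.norm(sy)
S = np.outer(sx, sy)                      # unit-Frobenius-norm signature

c0 = 2.5                                  # unknown analyte level in the test sample
c_added = np.array([0.0, 1.0, 2.0, 4.0])  # standard additions

D_test = c0 * S
D_add = [(c0 + c) * S for c in c_added]

# Subtract the test matrix from each standard-addition matrix: the
# residuals A_i = c_added[i] * S no longer contain the test-sample signal.
A = [D - D_test for D in D_add]

# Estimate the signature from one residual and calibrate externally.
U, s, Vt = np.linalg.svd(A[-1])
S_hat = np.outer(U[:, 0], Vt[0])
scores = np.array([(Ai * S_hat).sum() for Ai in A])
slope = np.polyfit(c_added, scores, 1)[0]

# Predict the test-sample concentration via the external calibration line.
c0_pred = (D_test * S_hat).sum() / slope
```

In this noise-free rank-1 setting the prediction recovers c0 exactly; the value of the real multivariate models lies in handling interferents and analyte-background interactions, which this sketch deliberately omits.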
NASA Astrophysics Data System (ADS)
Qiao, Ruimin; Li, Qinghao; Zhuo, Zengqing; Sallis, Shawn; Fuchs, Oliver; Blum, Monika; Weinhardt, Lothar; Heske, Clemens; Pepper, John; Jones, Michael; Brown, Adam; Spucces, Adrian; Chow, Ken; Smith, Brian; Glans, Per-Anders; Chen, Yanxue; Yan, Shishen; Pan, Feng; Piper, Louis F. J.; Denlinger, Jonathan; Guo, Jinghua; Hussain, Zahid; Chuang, Yi-De; Yang, Wanli
2017-03-01
An endstation with two high-efficiency soft x-ray spectrographs was developed at Beamline 8.0.1 of the Advanced Light Source, Lawrence Berkeley National Laboratory. The endstation is capable of performing soft x-ray absorption spectroscopy, emission spectroscopy, and, in particular, resonant inelastic soft x-ray scattering (RIXS). Two slit-less variable line-spacing grating spectrographs are installed at different detection geometries. The endstation covers the photon energy range from 80 to 1500 eV. For studying transition-metal oxides, the large detection energy window allows a simultaneous collection of x-ray emission spectra with energies ranging from the O K-edge to the Ni L-edge without moving any mechanical components. The record-high efficiency enables the recording of comprehensive two-dimensional RIXS maps with good statistics within a short acquisition time. By virtue of the large energy window and high throughput of the spectrographs, partial fluorescence yield and inverse partial fluorescence yield signals could be obtained for all transition metal L-edges including Mn. Moreover, the different geometries of these two spectrographs (parallel and perpendicular to the horizontal polarization of the beamline) provide contrasts in RIXS features with two different momentum transfers.
Fluid Line Evacuation and Freezing Experiments for Digital Radiator Concept
NASA Technical Reports Server (NTRS)
Berisford, Daniel F.; Birur, Gajanana C.; Miller, Jennifer R.; Sunada, Eric T.; Ganapathi, Gani B.; Stephan, Ryan; Johnson, Mark
2011-01-01
The digital radiator technology is one of three variable heat rejection technologies being investigated for future human-rated NASA missions. The digital radiator concept is based on a mechanically pumped fluid loop with parallel tubes carrying coolant to reject heat from the radiator surface. A series of valves actuates to start and stop fluid flow to different combinations of tubes, in order to vary the heat rejection capability of the radiator by a factor of 10 or more. When the flow in a particular leg is stopped, the fluid temperature drops and the fluid can freeze, causing damage or preventing flow from restarting. For this reason, the liquid in a stopped leg must be partially or fully evacuated upon shutdown. One of the challenges facing fluid evacuation from closed tubes arises from the vapor generated during pumping to low pressure, which can cause pump cavitation and incomplete evacuation. Here we present a series of laboratory experiments demonstrating fluid evacuation techniques to overcome these challenges by applying heat and pumping to partial vacuum. Also presented are results from qualitative testing of the freezing characteristics of several different candidate fluids, which demonstrate significant differences in freezing properties and give insight into the evacuation process.
Electron-impact-ionization dynamics of SF6
NASA Astrophysics Data System (ADS)
Bull, James N.; Lee, Jason W. L.; Vallance, Claire
2017-10-01
A detailed understanding of the dissociative electron ionization dynamics of SF6 is important in the modeling and tuning of dry-etching plasmas used in the semiconductor manufacturing industry. This paper reports a crossed-beam electron ionization velocity-map imaging study on the dissociative ionization of cold SF6 molecules, providing complete, unbiased kinetic energy distributions for all significant product ions. Analysis of these distributions suggests that fragmentation following single ionization proceeds via formation of SF5+ or SF3+ ions that then dissociate in a statistical manner through loss of F atoms or F2, until most internal energy has been liberated. Similarly, formation of stable dications is consistent with initial formation of SF4(2+) ions, which then dissociate on a longer time scale. These data allow a comparison between electron ionization and photoionization dynamics, revealing similar dynamical behavior. In parallel with the ion kinetic energy distributions, the velocity-map imaging approach provides a set of partial ionization cross sections for all detected ionic fragments over an electron energy range of 50-100 eV, providing partial cross sections for S(2+), and enables the cross sections for SF4(2+) to be resolved from SF+.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiao, Ruimin; Li, Qinghao; Zhuo, Zengqing
In this paper, an endstation with two high-efficiency soft x-ray spectrographs was developed at Beamline 8.0.1 of the Advanced Light Source, Lawrence Berkeley National Laboratory. The endstation is capable of performing soft x-ray absorption spectroscopy, emission spectroscopy, and, in particular, resonant inelastic soft x-ray scattering (RIXS). Two slit-less variable line-spacing grating spectrographs are installed at different detection geometries. The endstation covers the photon energy range from 80 to 1500 eV. For studying transition-metal oxides, the large detection energy window allows a simultaneous collection of x-ray emission spectra with energies ranging from the O K-edge to the Ni L-edge without moving any mechanical components. The record-high efficiency enables the recording of comprehensive two-dimensional RIXS maps with good statistics within a short acquisition time. By virtue of the large energy window and high throughput of the spectrographs, partial fluorescence yield and inverse partial fluorescence yield signals could be obtained for all transition metal L-edges including Mn. Moreover, the different geometries of these two spectrographs (parallel and perpendicular to the horizontal polarization of the beamline) provide contrasts in RIXS features with two different momentum transfers.
Baum, A; Hansen, P W; Nørgaard, L; Sørensen, John; Mikkelsen, J D
2016-08-01
In this study, we introduce enzymatic perturbation combined with Fourier transform infrared (FTIR) spectroscopy as a concept for quantifying casein in subcritical heated skim milk using chemometric multiway analysis. Chymosin is a protease that specifically cleaves caseins. As a result of hydrolysis, all casein proteins clot to form a creamy precipitate, and whey proteins remain in the supernatant. We monitored the cheese-clotting reaction in real time using FTIR and analyzed the resulting evolution profiles to establish calibration models using parallel factor analysis and multiway partial least squares regression. Because we observed casein-specific kinetic changes, the retrieved models were independent of the chemical background matrix and were therefore robust against possible covariance effects. We tested the robustness of the models by spiking the milk solutions with whey, calcium, and cream. This method can be used at different stages in the dairy production chain to ensure the quality of the delivered milk. In particular, the cheese-making industry can benefit from such methods to optimize production control. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
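Parallel factor analysis, named in the abstract above, decomposes a three-way data array (here, samples x spectral channels x reaction times) into outer products of profile vectors. A minimal one-component PARAFAC fit by alternating least squares, run on simulated data with hypothetical profiles rather than the authors' FTIR measurements, can be sketched as:

```python
import numpy as np

# Rank-1 PARAFAC by alternating least squares on a simulated three-way array
# (samples x wavenumbers x reaction times). All profiles are hypothetical.
rng = np.random.default_rng(1)
a = np.array([0.2, 0.5, 0.8, 1.1, 1.4])              # analyte level per sample
b = np.exp(-0.5 * ((np.arange(60) - 30) / 8.0) ** 2)  # spectral profile
c = 1.0 - np.exp(-0.1 * np.arange(80))                # clotting kinetic profile
X = np.einsum('i,j,k->ijk', a, b, c)
X += 0.001 * rng.standard_normal(X.shape)             # small measurement noise

# Alternating least squares: update each mode's loading with the others fixed.
ai, bi, ci = rng.random(5), rng.random(60), rng.random(80)
for _ in range(50):
    ai = np.einsum('ijk,j,k->i', X, bi, ci) / ((bi @ bi) * (ci @ ci))
    bi = np.einsum('ijk,i,k->j', X, ai, ci) / ((ai @ ai) * (ci @ ci))
    ci = np.einsum('ijk,i,j->k', X, ai, bi) / ((ai @ ai) * (bi @ bi))

# The sample-mode scores should be proportional to the true analyte levels,
# which is what makes a calibration model possible.
r = np.corrcoef(ai, a)[0, 1]
print(r > 0.99)
```

The fitted sample-mode loadings are determined only up to scale, so quantitation in practice regresses them against known reference concentrations.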
NDL-v2.0: A new version of the numerical differentiation library for parallel architectures
NASA Astrophysics Data System (ADS)
Hadjidoukas, P. E.; Angelikopoulos, P.; Voglis, C.; Papageorgiou, D. G.; Lagaris, I. E.
2014-07-01
We present a new version of the numerical differentiation library (NDL) used for the numerical estimation of first and second order partial derivatives of a function by finite differencing. In this version we have restructured the serial implementation of the code so as to achieve optimal task-based parallelization. The pure shared-memory parallelization of the library has been based on the lightweight OpenMP tasking model allowing for the full extraction of the available parallelism and efficient scheduling of multiple concurrent library calls. On multicore clusters, parallelism is exploited by means of TORC, an MPI-based multi-threaded tasking library. The new MPI implementation of NDL provides optimal performance in terms of function calls and, furthermore, supports asynchronous execution of multiple library calls within legacy MPI programs. In addition, a Python interface has been implemented for all cases, exporting the functionality of our library to sequential Python codes. Catalog identifier: AEDG_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDG_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 63036 No. of bytes in distributed program, including test data, etc.: 801872 Distribution format: tar.gz Programming language: ANSI Fortran-77, ANSI C, Python. Computer: Distributed systems (clusters), shared memory systems. Operating system: Linux, Unix. Has the code been vectorized or parallelized?: Yes. RAM: The library uses O(N) internal storage, N being the dimension of the problem. It can use up to O(N2) internal storage for Hessian calculations, if a task throttling factor has not been set by the user. Classification: 4.9, 4.14, 6.5. Catalog identifier of previous version: AEDG_v1_0 Journal reference of previous version: Comput. Phys. Comm. 
180(2009)1404 Does the new version supersede the previous version?: Yes Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, and sensitivity analysis. For a large number of scientific and engineering applications, the underlying functions correspond to simulation codes for which analytical estimation of derivatives is difficult or almost impossible. A parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems. Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries. Reasons for new version: The updated version was motivated by our endeavors to extend a parallel Bayesian uncertainty quantification framework [1], by incorporating higher order derivative information as in most state-of-the-art stochastic simulation methods such as Stochastic Newton MCMC [2] and Riemannian Manifold Hamiltonian MC [3]. The function evaluations are simulations with significant time-to-solution, which also varies with the input parameters such as in [1, 4]. The runtime of the N-body-type of problem changes considerably with the introduction of a longer cut-off between the bodies. In the first version of the library, the OpenMP-parallel subroutines spawn a new team of threads and distribute the function evaluations with a PARALLEL DO directive. This limits the functionality of the library as multiple concurrent calls require nested parallelism support from the OpenMP environment. Therefore, either their function evaluations will be serialized or processor oversubscription is likely to occur due to the increased number of OpenMP threads. 
In addition, the Hessian calculations include two explicit parallel regions that compute first the diagonal and then the off-diagonal elements of the array. Due to the barrier between the two regions, the parallelism of the calculations is not fully exploited. These issues have been addressed in the new version by first restructuring the serial code and then running the function evaluations in parallel using OpenMP tasks. Although the MPI-parallel implementation of the first version is capable of fully exploiting the task parallelism of the PNDL routines, it does not utilize the caching mechanism of the serial code and, therefore, performs some redundant function evaluations in the Hessian and Jacobian calculations. This can lead to: (a) higher execution times if the number of available processors is lower than the total number of tasks, and (b) significant energy consumption due to wasted processor cycles. Overcoming these drawbacks, which become critical as the time of a single function evaluation increases, was the primary goal of this new version. Due to the code restructure, the MPI-parallel implementation (and the OpenMP-parallel in accordance) avoids redundant calls, providing optimal performance in terms of the number of function evaluations. Another limitation of the library was that the library subroutines were collective and synchronous calls. In the new version, each MPI process can issue any number of subroutines for asynchronous execution. We introduce two library calls that provide global and local task synchronizations, similarly to the BARRIER and TASKWAIT directives of OpenMP. The new MPI-implementation is based on TORC, a new tasking library for multicore clusters [5-7]. TORC improves the portability of the software, as it relies exclusively on the POSIX-Threads and MPI programming interfaces. 
It allows MPI processes to utilize multiple worker threads, offering a hybrid programming and execution environment similar to MPI+OpenMP, in a completely transparent way. Finally, to further improve the usability of our software, a Python interface has been implemented on top of both the OpenMP and MPI versions of the library. This allows sequential Python codes to exploit shared and distributed memory systems. Summary of revisions: The revised code improves the performance of both parallel (OpenMP and MPI) implementations. The functionality and the user-interface of the MPI-parallel version have been extended to support the asynchronous execution of multiple PNDL calls, issued by one or multiple MPI processes. A new underlying tasking library increases portability and allows MPI processes to have multiple worker threads. For both implementations, an interface to the Python programming language has been added. Restrictions: The library uses only double precision arithmetic. The MPI implementation assumes the homogeneity of the execution environment provided by the operating system. Specifically, the processes of a single MPI application must have identical address space and a user function resides at the same virtual address. In addition, address space layout randomization should not be used for the application. Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed. Running time: Running time depends on the function's complexity. The test run took 23 ms for the serial distribution, 25 ms for the OpenMP with 2 threads, 53 ms and 1.01 s for the MPI parallel distribution using 2 threads and 2 processes respectively and yield-time for idle workers equal to 10 ms. References: [1] P. Angelikopoulos, C. Papadimitriou, P. 
Koumoutsakos, Bayesian uncertainty quantification and propagation in molecular dynamics simulations: a high performance computing framework, J. Chem. Phys. 137 (14). [2] H.P. Flath, L.C. Wilcox, V. Akcelik, J. Hill, B. van Bloemen Waanders, O. Ghattas, Fast algorithms for Bayesian uncertainty quantification in large-scale linear inverse problems based on low-rank partial Hessian approximations, SIAM J. Sci. Comput. 33 (1) (2011) 407-432. [3] M. Girolami, B. Calderhead, Riemann manifold Langevin and Hamiltonian Monte Carlo methods, J. R. Stat. Soc. Ser. B (Stat. Methodol.) 73 (2) (2011) 123-214. [4] P. Angelikopoulos, C. Papadimitriou, P. Koumoutsakos, Data driven, predictive molecular dynamics for nanoscale flow simulations under uncertainty, J. Phys. Chem. B 117 (47) (2013) 14808-14816. [5] P.E. Hadjidoukas, E. Lappas, V.V. Dimakopoulos, A runtime library for platform-independent task parallelism, in: PDP, IEEE, 2012, pp. 229-236. [6] C. Voglis, P.E. Hadjidoukas, D.G. Papageorgiou, I. Lagaris, A parallel hybrid optimization algorithm for fitting interatomic potentials, Appl. Soft Comput. 13 (12) (2013) 4481-4492. [7] P.E. Hadjidoukas, C. Voglis, V.V. Dimakopoulos, I. Lagaris, D.G. Papageorgiou, Supporting adaptive and irregular parallelism for non-linear numerical optimization, Appl. Math. Comput. 231 (2014) 544-559.
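The solution method described above, central differencing with a step chosen to balance truncation against round-off error, can be sketched in a few lines. The step rule below is the generic textbook choice (roughly eps to the 1/3 power for first derivatives and eps to the 1/4 power for second derivatives), not necessarily PNDL's exact formula:

```python
import math

def optimal_step(x, order=1):
    # Balance truncation error O(h^2) against round-off error O(eps/h):
    # h ~ eps**(1/3) for a central first difference, h ~ eps**(1/4) for a
    # second difference. Generic rule of thumb, not PNDL's exact choice.
    eps = 2.2e-16                      # double-precision machine epsilon
    exponent = 1.0 / 3.0 if order == 1 else 0.25
    return eps ** exponent * max(abs(x), 1.0)

def central_first(f, x):
    h = optimal_step(x, order=1)
    return (f(x + h) - f(x - h)) / (2.0 * h)

def central_second(f, x):
    h = optimal_step(x, order=2)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# Example: d/dx sin(x) at x = 0.7 is cos(0.7); d^2/dx^2 is -sin(0.7).
d1 = central_first(math.sin, 0.7)
d2 = central_second(math.sin, 0.7)
print(abs(d1 - math.cos(0.7)) < 1e-9, abs(d2 + math.sin(0.7)) < 1e-5)
```

Gradients and Hessians are built by applying such formulas per coordinate; the function evaluations are independent, which is exactly the task parallelism the library exploits.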
Lee, Kevin C; Stott, Matthew B; Dunfield, Peter F; Huttenhower, Curtis; McDonald, Ian R; Morgan, Xochitl C
2016-06-15
Chthonomonas calidirosea T49(T) is a low-abundance, carbohydrate-scavenging, and thermophilic soil bacterium with a seemingly disorganized genome. We hypothesized that the C. calidirosea genome would be highly responsive to local selection pressure, resulting in the divergence of its genomic content, genome organization, and carbohydrate utilization phenotype across environments. We tested this hypothesis by sequencing the genomes of four C. calidirosea isolates obtained from four separate geothermal fields in the Taupō Volcanic Zone, New Zealand. For each isolation site, we measured physicochemical attributes and defined the associated microbial community by 16S rRNA gene sequencing. Despite their ecological and geographical isolation, the genome sequences showed low divergence (maximum, 1.17%). Isolate-specific variations included single-nucleotide polymorphisms (SNPs), restriction-modification systems, and mobile elements but few major deletions and no major rearrangements. The 50-fold variation in C. calidirosea relative abundance among the four sites correlated with site environmental characteristics but not with differences in genomic content. Conversely, the carbohydrate utilization profiles of the C. calidirosea isolates corresponded to the inferred isolate phylogenies, which only partially paralleled the geographical relationships among the sample sites. Genomic sequence conservation does not entirely parallel geographic distance, suggesting that stochastic dispersal and localized extinction, which allow for rapid population homogenization with little restriction by geographical barriers, are possible mechanisms of C. calidirosea distribution. This dispersal and extinction mechanism is likely not limited to C. calidirosea but may shape the populations and genomes of many other low-abundance free-living taxa. 
This study compares the genomic sequence variations and metabolisms of four strains of Chthonomonas calidirosea, a rare thermophilic bacterium from the phylum Armatimonadetes. It additionally compares the microbial communities and chemistry of each of the geographically distinct sites from which the four C. calidirosea strains were isolated. C. calidirosea was previously reported to possess a highly disorganized genome, but it was unclear whether this reflected rapid evolution. Here, we show that each isolation site has a distinct chemistry and microbial community, but despite this, the C. calidirosea genome is highly conserved across all isolation sites. Furthermore, genomic sequence differences only partially paralleled geographic distance, suggesting that C. calidirosea genotypes are not primarily determined by adaptive evolution. Instead, the presence of C. calidirosea may be driven by stochastic dispersal and localized extinction. This ecological mechanism may apply to many other low-abundance taxa. Copyright © 2016 Lee et al.
Lee, Kevin C.; Stott, Matthew B.; Dunfield, Peter F.; Huttenhower, Curtis; McDonald, Ian R.
2016-01-01
ABSTRACT Chthonomonas calidirosea T49T is a low-abundance, carbohydrate-scavenging, and thermophilic soil bacterium with a seemingly disorganized genome. We hypothesized that the C. calidirosea genome would be highly responsive to local selection pressure, resulting in the divergence of its genomic content, genome organization, and carbohydrate utilization phenotype across environments. We tested this hypothesis by sequencing the genomes of four C. calidirosea isolates obtained from four separate geothermal fields in the Taupō Volcanic Zone, New Zealand. For each isolation site, we measured physicochemical attributes and defined the associated microbial community by 16S rRNA gene sequencing. Despite their ecological and geographical isolation, the genome sequences showed low divergence (maximum, 1.17%). Isolate-specific variations included single-nucleotide polymorphisms (SNPs), restriction-modification systems, and mobile elements but few major deletions and no major rearrangements. The 50-fold variation in C. calidirosea relative abundance among the four sites correlated with site environmental characteristics but not with differences in genomic content. Conversely, the carbohydrate utilization profiles of the C. calidirosea isolates corresponded to the inferred isolate phylogenies, which only partially paralleled the geographical relationships among the sample sites. Genomic sequence conservation does not entirely parallel geographic distance, suggesting that stochastic dispersal and localized extinction, which allow for rapid population homogenization with little restriction by geographical barriers, are possible mechanisms of C. calidirosea distribution. This dispersal and extinction mechanism is likely not limited to C. calidirosea but may shape the populations and genomes of many other low-abundance free-living taxa. 
IMPORTANCE This study compares the genomic sequence variations and metabolisms of four strains of Chthonomonas calidirosea, a rare thermophilic bacterium from the phylum Armatimonadetes. It additionally compares the microbial communities and chemistry of each of the geographically distinct sites from which the four C. calidirosea strains were isolated. C. calidirosea was previously reported to possess a highly disorganized genome, but it was unclear whether this reflected rapid evolution. Here, we show that each isolation site has a distinct chemistry and microbial community, but despite this, the C. calidirosea genome is highly conserved across all isolation sites. Furthermore, genomic sequence differences only partially paralleled geographic distance, suggesting that C. calidirosea genotypes are not primarily determined by adaptive evolution. Instead, the presence of C. calidirosea may be driven by stochastic dispersal and localized extinction. This ecological mechanism may apply to many other low-abundance taxa. PMID:27060125
Viorica, Daniela; Jemna, Danut; Pintilescu, Carmen; Asandului, Mircea
2014-01-01
The objective of this paper is to verify the hypotheses presented in the literature on the causal relationship between inflation and its uncertainty, for the newest EU countries. To ensure the robustness of the results, four models for inflation uncertainty are estimated in parallel: ARCH(1), GARCH(1,1), EGARCH(1,1,1) and PARCH(1,1,1). The Granger method is used to test the causality between the two variables. The working hypothesis is that groups of countries with a similar political and economic background in 1990 are likely to be characterized by the same causal relationship between inflation and inflation uncertainty. Empirical results partially confirm this hypothesis. JEL classification: C22, E31, E37.
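In the GARCH(1,1) model used above as an uncertainty proxy, the conditional variance follows the recursion sigma2[t] = omega + alpha * eps[t-1]**2 + beta * sigma2[t-1]. The sketch below simulates such a series with hypothetical parameter values; it illustrates the model only and does not reproduce the paper's estimations or Granger causality tests:

```python
import numpy as np

# Simulate a GARCH(1,1) process with hypothetical parameters.
# alpha + beta < 1 guarantees covariance stationarity, with
# unconditional variance omega / (1 - alpha - beta).
rng = np.random.default_rng(42)
omega, alpha, beta = 0.1, 0.1, 0.8
T = 5000
sigma2 = np.empty(T)          # conditional variance (the uncertainty proxy)
eps = np.empty(T)             # mean-zero inflation shocks

sigma2[0] = omega / (1.0 - alpha - beta)
eps[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, T):
    sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# The sample variance should be close to the unconditional variance, 1.0 here.
print(round(float(eps.var()), 2))
```

In the paper's setting, the fitted sigma2 series from each model is the uncertainty measure fed into the Granger causality tests against inflation itself.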
Hydration of Caffeine at High Temperature by Neutron Scattering and Simulation Studies.
Tavagnacco, L; Brady, J W; Bruni, F; Callear, S; Ricci, M A; Saboungi, M L; Cesàro, A
2015-10-22
The solvation of caffeine in water is examined with neutron diffraction experiments at 353 K. The experimental data, obtained by taking advantage of isotopic H/D substitution in water, were analyzed by empirical potential structure refinement (EPSR) in order to extract partial structure factors and site-site radial distribution functions. In parallel, molecular dynamics (MD) simulations were carried out to interpret the data and gain insight into the intermolecular interactions in the solutions and the solvation process. The results obtained with the two approaches reveal differences in the individual radial distribution functions, although both confirm the presence of caffeine stacks at this temperature. The two approaches point to different accessibility of water to the caffeine sites due to different stacking configurations.
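Both EPSR and MD ultimately report site-site radial distribution functions. As a minimal sketch of how g(r) is computed from particle coordinates (using random ideal-gas positions in a periodic box, for which g(r) is flat at 1, rather than an actual caffeine solution):

```python
import numpy as np

# Minimal radial distribution function g(r) in a periodic cubic box.
# Ideal-gas (uniformly random) coordinates are used, so g(r) should be ~1.
rng = np.random.default_rng(7)
N, L = 800, 10.0
pos = rng.random((N, 3)) * L

edges = np.linspace(0.5, 4.0, 36)        # bin edges, all below L/2
hist = np.zeros(len(edges) - 1)
for i in range(N - 1):
    d = pos[i + 1:] - pos[i]
    d -= L * np.round(d / L)             # minimum-image convention
    r = np.sqrt((d * d).sum(axis=1))
    h, _ = np.histogram(r, bins=edges)
    hist += h                            # each pair counted once

rho = N / L ** 3
shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
g = 2.0 * hist / (N * rho * shell)       # normalize pair counts per particle
print(abs(float(g.mean()) - 1.0) < 0.05)
```

For a real solution the same histogramming is done per site pair (e.g. water oxygen against each caffeine site), which is where the site accessibility differences discussed above show up.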
SASSYS pretest analysis of the THORS-SHRS experiments. [LMFBR]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bordner, G.L.; Dunn, F.E.
The THORS Facility at ORNL was recently modified to allow the testing of two parallel 19-pin simulated fueled subassemblies under natural circulation conditions similar to those that might occur during a partial failure of the shutdown heat removal system (SHRS) of a liquid-metal fast breeder reactor. The planned experimental program included a series of tests at various inlet plenum temperatures to determine boiling threshold power levels and the power range for stable boiling during natural circulation operation. Pretest calculations were performed at ANL, which supplement those carried out at ORNL, for the purposes of validating the SASSYS model in the natural circulation regime and of providing data which would be useful in planning the experiments.
Supramolecular hydrogen-bonding networks in bis(adeninium) phthalate phthalic acid 1.45-hydrate.
Sridhar, Balasubramanian; Ravikumar, Krishnan
2007-04-01
In the title compound, 2C5H6N5(+)·C8H4O4(2-)·C8H6O4·1.45H2O, the asymmetric unit comprises two adeninium cations, two half phthalate anions with crystallographic C2 symmetry, one neutral phthalic acid molecule, and one fully occupied and one partially occupied site (0.45) for water molecules. The adeninium cations form N-H...O hydrogen bonds with the phthalate anions. The cations also form infinite one-dimensional polymeric ribbons via N-H...N interactions. In the crystal packing, hydrogen-bonded columns of cations, anions and phthalic acid molecules extend parallel to the c axis. The water molecules crosslink adjacent columns into hydrogen-bonded layers.
Stencils and problem partitionings: Their influence on the performance of multiple processor systems
NASA Technical Reports Server (NTRS)
Reed, D. A.; Adams, L. M.; Patrick, M. L.
1986-01-01
Given a discretization stencil, partitioning the problem domain is an important first step for the efficient solution of partial differential equations on multiple processor systems. Partitions are derived that minimize interprocessor communication when the number of processors is known a priori and each domain partition is assigned to a different processor. This partitioning technique uses the stencil structure to select appropriate partition shapes. For square problem domains, it is shown that non-standard partitions (e.g., hexagons) are frequently preferable to the standard square partitions for a variety of commonly used stencils. This investigation is concluded with a formalization of the relationship between partition shape, stencil structure, and architecture, allowing selection of optimal partitions for a variety of parallel systems.
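The communication-minimizing intuition behind the partition analysis can be checked with a small calculation: for local stencils, the data a processor exchanges per iteration scales with its partition's boundary length, while its work scales with the area, and for equal area a regular hexagon has a shorter boundary than a square. This is only the geometric part of the argument; the full analysis in the paper also depends on the stencil structure.

```python
import math

def square_perimeter(area):
    # A square of area A has side sqrt(A), hence perimeter 4*sqrt(A).
    return 4.0 * math.sqrt(area)

def hexagon_perimeter(area):
    # A regular hexagon of side s has area (3*sqrt(3)/2) * s**2,
    # so s = sqrt(2*A / (3*sqrt(3))) and the perimeter is 6*s.
    s = math.sqrt(2.0 * area / (3.0 * math.sqrt(3.0)))
    return 6.0 * s

# Boundary-to-boundary comparison at equal area: the ratio is about 0.93,
# i.e. roughly 7% less boundary (communication) for the hexagonal partition.
a = 100.0
ratio = hexagon_perimeter(a) / square_perimeter(a)
print(round(ratio, 3))
```

The ratio is independent of the area, which is why the shape comparison carries over to any partition size.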
Preferred crystallographic orientation in the ice I → II transformation and the flow of ice II
Bennett, K.; Wenk, H.-R.; Durham, W.B.; Stern, L.A.; Kirby, S.H.
1997-01-01
The preferred crystallographic orientation developed during the ice I → II transformation and during the plastic flow of ice II was measured in polycrystalline deuterium oxide (D2O) specimens using low-temperature neutron diffraction. Samples partially transformed from ice I to II under a non-hydrostatic stress developed a preferred crystallographic orientation in the ice II. Samples of pure ice II, transformed from ice I under a hydrostatic stress and then compressed axially, developed a strong preferred orientation of compression axes parallel to (101̄0). A match to the observed preferred orientation using the viscoplastic self-consistent theory was obtained only when (101̄0)[0001] was taken as the predominant slip system in ice II.