Parallel fast multipole boundary element method applied to computational homogenization
NASA Astrophysics Data System (ADS)
Ptaszny, Jacek
2018-01-01
In the present work, a fast multipole boundary element method (FMBEM) and a parallel computer code for 3D elasticity problems are developed and applied to the computational homogenization of a solid containing spherical voids. The system of equations is solved using the GMRES iterative solver. The boundary of the body is discretized using quadrilateral serendipity elements with adaptive numerical integration. Operations related to a single GMRES iteration, performed by traversing the corresponding tree structure upwards and downwards, are parallelized using the OpenMP standard. The assignment of tasks to threads is based on the assumption that the tree nodes at which the moment transformations are initialized can be partitioned into disjoint sets of equal or approximately equal size and assigned to the threads. The achieved speedup as a function of the number of threads is examined.
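The task-assignment idea above is easy to sketch. The following Python fragment illustrates partitioning the moment-initialization nodes into disjoint sets of approximately equal size and dispatching them to a thread pool; it is only a sketch of the scheme (the paper's code uses OpenMP in a compiled language), and `compute_moments` is a hypothetical placeholder for the actual moment transformations.

```python
from concurrent.futures import ThreadPoolExecutor

def partition(nodes, n_threads):
    """Split tree nodes into n_threads disjoint sets of equal or
    approximately equal size, as assumed by the task-assignment scheme."""
    k, r = divmod(len(nodes), n_threads)
    sets, start = [], 0
    for i in range(n_threads):
        size = k + (1 if i < r else 0)
        sets.append(nodes[start:start + size])
        start += size
    return sets

def upward_pass(node_set):
    # Placeholder for the real work: initialize moments at these nodes
    # and translate them towards the root (upward tree traversal).
    for node in node_set:
        node.compute_moments()

def parallel_sweep(nodes, n_threads=4):
    """One parallelized traversal: each thread owns its disjoint node set."""
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        list(pool.map(upward_pass, partition(nodes, n_threads)))
```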
Distributed computing feasibility in a non-dedicated homogeneous distributed system
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Sun, Xian-He
1993-01-01
The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. We propose that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
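The task ratio is a simple quotient; the sketch below shows its computation with made-up numbers, purely to fix the definition.

```python
def task_ratio(parallel_task_demand, mean_owner_demand):
    """Task ratio as defined above: parallel task demand relative to the
    mean service demand of non-parallel workstation processes."""
    return parallel_task_demand / mean_owner_demand

# Hypothetical values: 30 s parallel tasks vs. 0.3 s owner processes
print(task_ratio(30.0, 0.3))  # 100.0; larger ratios favour efficient use
```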
Signal-domain optimization metrics for MPRAGE RF pulse design in parallel transmission at 7 tesla.
Gras, V; Vignaud, A; Mauconduit, F; Luong, M; Amadon, A; Le Bihan, D; Boulant, N
2016-11-01
Standard radiofrequency pulse design strategies focus on minimizing the deviation of the flip angle from a target value, which is sufficient but not necessary for signal homogeneity. An alternative approach, based directly on the signal, is proposed here for the MPRAGE sequence, and is developed in the parallel transmission framework with the use of the kT-points parametrization. The flip angle-homogenizing and the proposed methods were investigated numerically under explicit power and specific absorption rate constraints, and tested experimentally in vivo on a 7 T parallel transmission system enabling real-time local specific absorption rate monitoring. Radiofrequency pulse performance was assessed by a careful analysis of the signal and contrast between white and gray matter. Despite a slight reduction of flip angle uniformity, improved signal and contrast homogeneity with a significant reduction of the specific absorption rate was achieved with the proposed metric in comparison with standard pulse designs. The proposed joint optimization of the inversion and excitation pulses enables a significant reduction of the specific absorption rate in the MPRAGE sequence while preserving image quality. The work reported thus unveils a possible direction to increase the potential of ultra-high field MRI and parallel transmission. Magn Reson Med 76:1431-1442, 2016. © 2015 International Society for Magnetic Resonance in Medicine.
Ishikawa, Sohta A; Inagaki, Yuji; Hashimoto, Tetsuo
2012-01-01
In phylogenetic analyses of nucleotide sequences, 'homogeneous' substitution models, which assume the stationarity of base composition across a tree, are widely used, although individual sequences may bear distinctive base frequencies. In the worst-case scenario, a homogeneous model-based analysis can yield an artifactual union of two distantly related sequences that achieved similar base frequencies in parallel. This potential difficulty can be countered by two approaches, 'RY-coding' and 'non-homogeneous' models. The former approach converts the four bases into purines and pyrimidines to normalize base frequencies across a tree, while the heterogeneity in base frequency is explicitly incorporated in the latter approach. The two approaches have been applied to real-world sequence data; however, their basic properties have not been fully examined by simulation studies. Here, we assessed the performance of maximum-likelihood analyses incorporating RY-coding and a non-homogeneous model (RY-coding and non-homogeneous analyses) on simulated data with parallel convergence to similar base composition. Both RY-coding and non-homogeneous analyses showed superior performance compared with homogeneous model-based analyses. Curiously, the performance of the RY-coding analysis appeared to be more strongly affected by the setting of the substitution process used for sequence simulation than that of the non-homogeneous analysis. The performance of the non-homogeneous analysis was also validated by analyzing a real-world sequence data set with significant base heterogeneity.
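RY-coding itself is a two-state recoding of the four bases (purines A and G to R, pyrimidines C and T/U to Y); a minimal sketch:

```python
# Purines (A, G) -> R; pyrimidines (C, T/U) -> Y
RY = {"A": "R", "G": "R", "C": "Y", "T": "Y", "U": "Y"}

def ry_code(seq):
    """Recode a nucleotide sequence into two states, normalizing base
    frequencies across a tree; other symbols (e.g. gaps) pass through."""
    return "".join(RY.get(base, base) for base in seq.upper())

print(ry_code("ATGCGTA"))  # RYRYRYR
```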
An interactive parallel programming environment applied in atmospheric science
NASA Technical Reports Server (NTRS)
vonLaszewski, G.
1996-01-01
This article introduces an interactive parallel programming environment (IPPE) that simplifies the generation and execution of parallel programs. One of the tasks of the environment is to generate message-passing parallel programs for homogeneous and heterogeneous computing platforms. The parallel programs are represented by using visual objects. This is accomplished with the help of a graphical programming editor that is implemented in Java and enables portability to a wide variety of computer platforms. In contrast to other graphical programming systems, reusable parts of the programs can be stored in a program library to support rapid prototyping. In addition, runtime performance data on different computing platforms is collected in a database. A selection process determines dynamically the software and the hardware platform to be used to solve the problem in minimal wall-clock time. The environment is currently being tested on a Grand Challenge problem, the NASA four-dimensional data assimilation system.
Boundedness and exponential convergence in a chemotaxis model for tumor invasion
NASA Astrophysics Data System (ADS)
Jin, Hai-Yang; Xiang, Tian
2016-12-01
We revisit the following chemotaxis system modeling tumor invasion:
$$u_t = \Delta u - \nabla \cdot (u \nabla v), \quad v_t = \Delta v + wz, \quad w_t = -wz, \quad z_t = \Delta z - z + u, \qquad x \in \Omega,\ t > 0,$$
in a smooth bounded domain $\Omega \subset \mathbb{R}^n$ ($n \geq 1$) with homogeneous Neumann boundary and initial conditions. This model was recently proposed by Fujie et al (2014 Adv. Math. Sci. Appl. 24 67-84) as a model for tumor invasion with the role of extracellular matrix incorporated, and was analyzed later by Fujie et al (2016 Discrete Contin. Dyn. Syst. 36 151-69), showing the uniform boundedness and convergence for $n \leq 3$. In this work, we first show that the $L^{\infty}$-boundedness of the system can be reduced to the boundedness of $\|u(\cdot, t)\|_{L^{n/4 + \epsilon}(\Omega)}$ for some $\epsilon > 0$ alone, and then, for $n \geq 4$, if the initial data $\|u_0\|_{L^{n/4}}$, $\|z_0\|_{L^{n/2}}$ and …
Graph Partitioning for Parallel Applications in Heterogeneous Grid Environments
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Kumar, Shailendra; Das, Sajal K.; Biegel, Bryan (Technical Monitor)
2002-01-01
The problem of partitioning irregular graphs and meshes for parallel computations on homogeneous systems has been extensively studied. However, these partitioning schemes fail when the target system architecture exhibits heterogeneity in resource characteristics. With the emergence of technologies such as the Grid, it is imperative to study the partitioning problem taking into consideration the differing capabilities of such distributed heterogeneous systems. In our model, the heterogeneous system consists of processors with varying processing power and an underlying non-uniform communication network. We present in this paper a novel multilevel partitioning scheme for irregular graphs and meshes that takes into account issues pertinent to Grid computing environments. Our partitioning algorithm, called MiniMax, generates and maps partitions onto a heterogeneous system with the objective of minimizing the maximum execution time of the parallel distributed application. For the experimental performance study, we have considered both a realistic mesh problem from NASA and synthetic workloads. Simulation results demonstrate that MiniMax generates high quality partitions for various classes of applications targeted for parallel execution in a distributed heterogeneous environment.
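The objective MiniMax minimizes can be illustrated with a simple cost model: each processor's completion time is its computational load scaled by its processing power plus its communication volume weighted by the non-uniform link costs, and the partitioner seeks the mapping whose maximum over processors is smallest. The sketch below is an illustrative model built on these assumptions, not the authors' implementation.

```python
def max_completion_time(partitions, speed, link_cost):
    """Slowest-processor time for a mapping: computation (work / power)
    plus communication (data volume times non-uniform link cost)."""
    times = []
    for p, (work, sends) in enumerate(partitions):
        t_comp = work / speed[p]
        t_comm = sum(vol * link_cost[p][q] for q, vol in sends.items())
        times.append(t_comp + t_comm)
    return max(times)

# Hypothetical two-processor mapping: (local work, {neighbour: data volume})
parts = [(120.0, {1: 10.0}), (80.0, {0: 10.0})]
speed = [2.0, 1.0]                    # heterogeneous processing powers
link_cost = [[0.0, 0.5], [0.5, 0.0]]  # non-uniform network costs
print(max_completion_time(parts, speed, link_cost))  # 85.0
```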
NASA Technical Reports Server (NTRS)
Quealy, Angela; Cole, Gary L.; Blech, Richard A.
1993-01-01
The Application Portable Parallel Library (APPL) is a subroutine-based library of communication primitives that is callable from applications written in FORTRAN or C. APPL provides a consistent programmer interface to a variety of distributed and shared-memory multiprocessor MIMD machines. The objective of APPL is to minimize the effort required to move parallel applications from one machine to another, or to a network of homogeneous machines. APPL encompasses many of the message-passing primitives that are currently available on commercial multiprocessor systems. This paper describes APPL (version 2.3.1) and its usage, reports the status of the APPL project, and indicates possible directions for the future. Several applications using APPL are discussed, as well as performance and overhead results.
NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization
NASA Technical Reports Server (NTRS)
Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.
Achilles tendon shape and echogenicity on ultrasound among active badminton players.
Malliaras, P; Voss, C; Garau, G; Richards, P; Maffulli, N
2012-04-01
The clinical relevance of Achilles tendon ultrasound abnormalities, including a spindle shape and heterogeneous echogenicity, is unclear. This study investigated the relationship between these abnormalities, tendon thickness, Doppler flow and pain. Sixty-one badminton players (122 tendons; 36 men and 25 women) were recruited. Achilles tendon thickness, shape (spindle, parallel), echogenicity (heterogeneous, homogeneous) and Doppler flow (present or absent) were measured bilaterally with ultrasound. Achilles tendon pain (during or after activity over the last week) and pain and function [Victorian Institute of Sport Achilles Assessment (VISA-A)] were measured. Sixty-eight (56%) tendons were parallel with homogeneous echogenicity (normal), 22 (18%) were spindle shaped with homogeneous echogenicity, 16 (13%) were parallel with heterogeneous echogenicity and 16 (13%) were spindle shaped with heterogeneous echogenicity. Spindle shape was associated with self-reported pain (P<0.05). Heterogeneous echogenicity was associated with lower VISA-A scores than normal tendon (P<0.05). There was an ordinal increase in thickness and likelihood of Doppler flow from normal tendons, to parallel tendons with heterogeneous echogenicity, to spindle-shaped tendons with heterogeneous echogenicity. Heterogeneous echogenicity with a parallel shape may be a physiological phase and may develop into heterogeneous echogenicity with a spindle shape, which is more likely to be pathological. © 2010 John Wiley & Sons A/S.
A parallel bubble column system for the cultivation of phototrophic microorganisms.
Havel, Jan; Franco-Lara, Ezequiel; Weuster-Botz, Dirk
2008-07-01
An incubator with up to 16 parallel bubble columns was equipped with artificial light sources assuring a light supply with a homogeneous light spectrum directly above the bioreactors. Cylindrical light-reflecting tubes were positioned around every single bubble column to avoid light-scattering effects and to redirect the light from the top onto the cylindrical outer glass surface of each bubble column. The light-reflecting tubes were equipped with light intensity filters to control the total light intensity for every single photo-bioreactor. Parallel cultivations of the unicellular obligate phototrophic cyanobacterium Synechococcus PCC7942 were studied under different constant light intensities ranging from 20 to 102 µE m⁻² s⁻¹ at a constant humidified air flow rate supplemented with CO2.
Ibrahim, Khaled Z.; Madduri, Kamesh; Williams, Samuel; ...
2013-07-18
The Gyrokinetic Toroidal Code (GTC) uses the particle-in-cell method to efficiently simulate plasma microturbulence. This paper presents novel analysis and optimization techniques to enhance the performance of GTC on large-scale machines. We introduce cell access analysis to better manage locality vs. synchronization tradeoffs on CPU and GPU-based architectures. Finally, our optimized hybrid parallel implementation of GTC uses MPI, OpenMP, and NVIDIA CUDA, achieves up to a 2× speedup over the reference Fortran version on multiple parallel systems, and scales efficiently to tens of thousands of cores.
Proof of Concept for the Rewrite Rule Machine: Interensemble Studies
1994-02-23
[Figure 1: Concurrent Rewriting of Fibonacci Expressions] ... exploit a problem's parallelism at several levels. We call this property multigrain concurrency; it makes the RRM very well suited for solving not only homogeneous problems, but also complex, locally homogeneous but ... interprocessor message passing over a network - is not well suited to data parallelism. A key goal of the RRM is to combine the best of these two approaches in a
Parallel Work of CO2 Ejectors Installed in a Multi-Ejector Module of Refrigeration System
NASA Astrophysics Data System (ADS)
Bodys, Jakub; Palacz, Michal; Haida, Michal; Smolka, Jacek; Nowak, Andrzej J.; Banasiak, Krzysztof; Hafner, Armin
2016-09-01
A performance analysis of fixed ejectors installed in a multi-ejector module of a CO2 refrigeration system is presented in this study. The serial and parallel work of the four fixed-geometry units that compose the multi-ejector pack was analysed. The numerical simulations were performed with the use of a validated Homogeneous Equilibrium Model (HEM). The computational tool ejectorPL was used in all the tests, for typical transcritical parameters at the motive nozzle. A wide range of operating conditions for supermarket applications in three different European climate zones was taken into consideration. The obtained results show the high and stable performance of all the ejectors in the multi-ejector pack.
Avdievich, Nikolai I.; Oh, Suk-Hoon; Hetherington, Hoby P.; Collins, Christopher M.
2010-01-01
Purpose To improve the homogeneity of transmit volume coils at high magnetic fields (≥4 T). Due to RF field/tissue interactions at high fields, 4–8 T, the transmit profile from head-sized volume coils shows a distinctive pattern with a relatively strong RF magnetic field B1 in the center of the brain. Materials and Methods In contrast to conventional volume coils at high field strengths, surface coil phased arrays can provide increased RF field strength peripherally. In theory, simultaneous transmission from these two devices could produce a more homogeneous transmission field. To minimize interactions between the phased array and the volume coil, counter-rotating current (CRC) surface coils consisting of two parallel rings carrying opposite currents were used for the phased array. Results Numerical simulations and experimental data demonstrate that substantial improvements in transmit field homogeneity can be obtained. Conclusion We have demonstrated the feasibility of using simultaneous transmission with human head-sized volume coils and CRC phased arrays to improve the homogeneity of the transmit RF B1 field for high-field MRI systems. PMID:20677280
A parallel reaction-transport model applied to cement hydration and microstructure development
NASA Astrophysics Data System (ADS)
Bullard, Jeffrey W.; Enjolras, Edith; George, William L.; Satterfield, Steven G.; Terrill, Judith E.
2010-03-01
A recently described stochastic reaction-transport model on three-dimensional lattices is parallelized and is used to simulate the time-dependent structural and chemical evolution in multicomponent reactive systems. The model, called HydratiCA, uses probabilistic rules to simulate the kinetics of diffusion, homogeneous reactions and heterogeneous phenomena such as solid nucleation, growth and dissolution in complex three-dimensional systems. The algorithms require information only from each lattice site and its immediate neighbors, and this localization enables the parallelized model to exhibit near-linear scaling up to several hundred processors. Although applicable to a wide range of material systems, including sedimentary rock beds, reacting colloids and biochemical systems, validation is performed here on two minerals that are commonly found in Portland cement paste, calcium hydroxide and ettringite, by comparing their simulated dissolution or precipitation rates far from equilibrium to standard rate equations, and also by comparing simulated equilibrium states to thermodynamic calculations, as a function of temperature and pH. Finally, we demonstrate how HydratiCA can be used to investigate microstructure characteristics, such as spatial correlations between different condensed phases, in more complex microstructures.
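The probabilistic-rule idea can be sketched for a single dissolution process: each solid lattice site converts with probability 1 - exp(-kΔt) per time step, using only local information, which is what makes the scheme parallelize with near-linear scaling. This is an illustrative rule in the spirit of the model, not HydratiCA's actual rule set.

```python
import numpy as np

rng = np.random.default_rng(0)

def dissolution_step(lattice, k, dt):
    """One stochastic update on a lattice of 0/1 (pore/solid) sites: each
    solid site dissolves with probability 1 - exp(-k*dt). The rule needs
    only the site itself, so sites can be updated in parallel."""
    p = 1.0 - np.exp(-k * dt)
    dissolve = (lattice == 1) & (rng.random(lattice.shape) < p)
    lattice[dissolve] = 0
    return lattice

solid = np.ones((16, 16, 16), dtype=int)
for _ in range(10):
    solid = dissolution_step(solid, k=0.1, dt=1.0)
print(solid.mean())  # remaining solid fraction, about exp(-1)
```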
2015-01-01
We report on the theoretical analysis of equilibrium distances in real plane-parallel systems under the influence of Casimir and gravity forces at thermal equilibrium. Due to the balance between these forces, thin films of Teflon, silica, or polystyrene in a single-layer configuration and immersed in glycerol stand over a silicon substrate at certain stable or unstable positions depending on the material and the slab thickness. Hybrid systems containing silica and polystyrene, materials which display Casimir forces and equilibrium distances of opposite nature when considered individually, are analyzed in either bilayer arrangements or as composite systems made of a homogeneous matrix with small inclusions inside. For each configuration, equilibrium distances and their stability can be adjusted by fine-tuning of the volume occupied by each material. We find the specific conditions under which nanolevitation of realistic films should be observed. Our results indicate that thin films of real materials in plane-parallel configurations can be used to control suspension or stiction phenomena at the nanoscale. PMID:26405466
Mürtz, Petra; Kaschner, Marius; Träber, Frank; Kukuk, Guido M; Büdenbender, Sarah M; Skowasch, Dirk; Gieseke, Jürgen; Schild, Hans H; Willinek, Winfried A
2012-11-01
To evaluate the use of dual-source parallel RF excitation (TX) for diffusion-weighted whole-body MRI with background body signal suppression (DWIBS) at 3.0 T. Forty consecutive patients were examined on a clinical 3.0-T MRI system using a diffusion-weighted (DW) spin-echo echo-planar imaging sequence with a combination of short TI inversion recovery and slice-selective gradient reversal fat suppression. DWIBS of the neck (n=5), thorax (n=8), abdomen (n=6) and pelvis (n=21) was performed both with TX (2:56 min) and with standard single-source RF excitation (4:37 min). The quality of DW images and reconstructed inverted maximum intensity projections was visually judged by two readers (blinded to acquisition technique). Signal homogeneity and fat suppression were scored as "improved", "equal", "worse" or "ambiguous". Moreover, the apparent diffusion coefficient (ADC) values were measured in muscles, urinary bladder, lymph nodes and lesions. By the use of TX, signal homogeneity was "improved" in 25/40 and "equal" in 15/40 cases. Fat suppression was "improved" in 17/40 and "equal" in 23/40 cases. These improvements were statistically significant (p<0.001, Wilcoxon signed-rank test). In five patients, fluid-related dielectric shading was present, which improved remarkably. The ADC values did not significantly differ for the two RF excitation methods (p=0.630 over all data, pairwise Student's t-test). Dual-source parallel RF excitation improved image quality of DWIBS at 3.0 T with respect to signal homogeneity and fat suppression, reduced scan time by approximately one-third, and did not influence the measured ADC values. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Creating Very True Quantum Algorithms for Quantum Energy Based Computing
NASA Astrophysics Data System (ADS)
Nagata, Koji; Nakamura, Tadao; Geurdes, Han; Batle, Josep; Abdalla, Soliman; Farouk, Ahmed; Diep, Do Ngoc
2018-04-01
An interpretation of quantum mechanics is discussed. It is assumed that quantum is energy. An algorithm by means of the energy interpretation is discussed. An algorithm, based on the energy interpretation, for fast determination of a homogeneous linear function $f(x) := s \cdot x = s_1 x_1 + s_2 x_2 + \cdots + s_N x_N$ is proposed. Here $x = (x_1, \ldots, x_N)$, $x_j \in \mathbb{R}$, and the coefficients $s = (s_1, \ldots, s_N)$, $s_j \in \mathbb{N}$. Given the interpolation values $(f(1), f(2), \ldots, f(N)) = \vec{y}$, the unknown coefficients $s = (s_1(\vec{y}), \ldots, s_N(\vec{y}))$ of the linear function shall be determined simultaneously. The speed of determining the values is shown to outperform the classical case by a factor of $N$. Our method is based on the generalization of the Bernstein-Vazirani algorithm to qudit systems. Next, by using $M$ parallel quantum systems, $M$ homogeneous linear functions are determined simultaneously. The speed of obtaining the set of $M$ homogeneous linear functions is shown to outperform the classical case by a factor of $N \times M$.
New Parallel computing framework for radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostin, M.A.; /Michigan State U., NSCL; Mokhov, N.V.
A new parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework is implemented as a C++ module that uses MPI for message passing. The module is largely independent of the radiation transport codes it can be used with, and is connected to them by means of a number of interface functions. The framework was integrated with the MARS15 code, and an effort is under way to deploy it in PHITS. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. Several checkpoint files can be merged into one, thus combining the results of several calculations. The framework also corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where interference from other users is possible.
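Merging checkpoint files from independent Monte Carlo runs typically reduces to history-weighted averaging of the tallies; the sketch below illustrates that idea under this assumption and does not reflect the framework's actual file format or merge procedure.

```python
import numpy as np

def merge_checkpoints(checkpoints):
    """Combine independent runs into one result. Each checkpoint is a
    pair (n_histories, mean_tally_array); means are weighted by the
    number of histories so the merge equals one long run."""
    total = sum(n for n, _ in checkpoints)
    merged = sum(n * tally for n, tally in checkpoints) / total
    return total, merged

run_a = (1_000_000, np.array([0.52, 0.13]))
run_b = (3_000_000, np.array([0.50, 0.12]))
print(merge_checkpoints([run_a, run_b]))  # (4000000, [0.505, 0.1225])
```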
Study of the techniques feasible for food synthesis aboard a spacecraft
NASA Technical Reports Server (NTRS)
Weiss, A. H.
1972-01-01
Synthesis of sugars by Ca(OH)2-catalyzed formaldehyde condensation (the formose reaction) has produced branched carbohydrates that do not occur in nature. The kinetics and mechanisms of the homogeneously catalyzed autocatalytic condensation were studied, and analogies between homogeneous and heterogeneous rate laws have been found. Aldol condensations proceed simultaneously with Cannizzaro and crossed-Cannizzaro reactions and Lobry de Bruyn-van Ekenstein rearrangements. The separate steps as well as the interactions of this highly complex reaction system were elucidated. The system exhibits instabilities; competitive catalytic, mass action, and equilibrium phenomena; complexing; and parallel and consecutive reactions. Specific findings made on the problem will be of interest for synthesizing sugars, both for sustained space flight and for large-scale food manufacture. A contribution to the methodology for studying complex catalyzed reactions and to understanding the control of reaction selectivity was a broad goal of the project.
Simulation of dispersion in layered coastal aquifer systems
Reilly, T.E.
1990-01-01
A density-dependent solute-transport formulation is used to examine ground-water flow in layered coastal aquifers. The numerical experiments indicate that although the transition zone may be thought of as an impermeable 'sharp' interface with freshwater flow parallel to the transition zone in homogeneous aquifers, this is not the case for layered systems. Freshwater can discharge through the transition zone in the confining units. Further, for the best simulation of layered coastal aquifer systems, either a flow-direction-dependent dispersion formulation is required, or the dispersivities must change spatially to reflect the tight thin confining unit. © 1990.
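For reference, a flow-direction-dependent dispersion formulation is conventionally written with the Scheidegger dispersion tensor; this is the standard textbook form, not a formula taken from the paper:

$$D_{ij} = \alpha_T\,|v|\,\delta_{ij} + (\alpha_L - \alpha_T)\,\frac{v_i v_j}{|v|} + D_m\,\delta_{ij},$$

where $\alpha_L$ and $\alpha_T$ are the longitudinal and transverse dispersivities, $v$ is the pore velocity and $D_m$ the effective molecular diffusion coefficient. The tensor rotates with the local flow direction, which is the property required when freshwater discharges across, rather than parallel to, the transition zone.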
NASA Astrophysics Data System (ADS)
Kanaun, S.; Markov, A.
2017-06-01
An efficient numerical method for the solution of static problems of elasticity for an infinite homogeneous medium containing inhomogeneities (cracks and inclusions) is developed. A finite number of heterogeneous inclusions and planar parallel cracks of arbitrary shapes is considered. The problem is reduced to a system of surface integral equations for the crack opening vectors and volume integral equations for the stress tensors inside the inclusions. For the numerical solution of these equations, a class of Gaussian approximating functions is used. The method based on these functions is mesh-free. For such functions, the elements of the matrix of the discretized system are combinations of explicit analytical functions and five standard 1D integrals that can be tabulated. Thus, numerical integration is excluded from the construction of the matrix of the discretized problem. For regular node grids, the matrix of the discretized system has Toeplitz structure, and the Fast Fourier Transform technique can be used to calculate matrix-vector products with such matrices.
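The Toeplitz-plus-FFT observation can be made concrete: a Toeplitz matrix embeds in a circulant matrix, whose matrix-vector product is diagonalized by the discrete Fourier transform. Below is a minimal 1D numpy sketch with a made-up kernel, illustrative rather than the authors' code; the same embedding extends dimension by dimension to the block-Toeplitz matrices produced by regular node grids.

```python
import numpy as np

def toeplitz_matvec(col, row, x):
    """Multiply an n-by-n Toeplitz matrix (first column `col`, first row
    `row`, col[0] == row[0]) by `x` in O(n log n) time via embedding in
    a 2n-by-2n circulant matrix."""
    n = len(x)
    c = np.concatenate([col, [0.0], row[:0:-1]])  # circulant first column
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.r_[x, np.zeros(n)]))
    return y[:n].real

# Hypothetical kernel T[i, j] = 1 / (1 + |i - j|) on a regular grid
n = 4
col = row = 1.0 / (1.0 + np.arange(n))
x = np.arange(1.0, n + 1)
print(toeplitz_matvec(col, row, x))
```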
Girst, S; Marx, C; Bräuer-Krisch, E; Bravin, A; Bartzsch, S; Oelfke, U; Greubel, C; Reindl, J; Siebenwirth, C; Zlobinskaya, O; Multhoff, G; Dollinger, G; Schmid, T E; Wilkens, J J
2015-09-01
The risk of developing normal tissue injuries often limits the radiation dose that can be applied to the tumour in radiation therapy. Microbeam Radiation Therapy (MRT), a spatially fractionated photon radiotherapy, is currently being tested at the European Synchrotron Radiation Facility (ESRF) to improve normal tissue protection. MRT utilizes an array of microscopically thin and nearly parallel X-ray beams that are generated by a synchrotron. At the ion microprobe SNAKE in Munich, focused proton microbeams ("proton microchannels") are studied to improve normal tissue protection. Here, we comparatively investigate microbeam/microchannel irradiations with sub-millimetre X-ray versus proton beams to minimize the risk of normal tissue damage in a human skin model, in vitro. Skin tissues were irradiated with a mean dose of 2 Gy over the irradiated area, either with parallel synchrotron-generated X-ray beams at the ESRF or with 20 MeV protons at SNAKE, using four different irradiation modes: homogeneous field, parallel lines, and microchannel applications using two different channel sizes. Normal tissue viability, as determined in an MTT test, was significantly higher after proton or X-ray microchannel irradiation compared to a homogeneous field irradiation. In line with these findings, genetic damage, as determined by the measurement of micronuclei in keratinocytes, was significantly reduced after proton or X-ray microchannel irradiation compared to a homogeneous field irradiation. Our data show that skin irradiation using either X-ray or proton microchannels maintains a higher cell viability and DNA integrity compared to a homogeneous irradiation, and thus might improve normal tissue protection after radiation therapy. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Rakvic, Ryan N.; Ives, Robert W.; Lira, Javier; Molina, Carlos
2011-01-01
General purpose computer designers have recently begun adding cores to their processors in order to increase performance. For example, Intel has adopted a homogeneous quad-core processor as a base for general purpose computing. PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high level. Can modern image-processing algorithms utilize these additional cores? On the other hand, modern advancements in configurable hardware, most notably field-programmable gate arrays (FPGAs) have created an interesting question for general purpose computer designers. Is there a reason to combine FPGAs with multicore processors to create an FPGA multicore hybrid general purpose computer? Iris matching, a repeatedly executed portion of a modern iris-recognition algorithm, is parallelized on an Intel-based homogeneous multicore Xeon system, a heterogeneous multicore Cell system, and an FPGA multicore hybrid system. Surprisingly, the cheaper PS3 slightly outperforms the Intel-based multicore on a core-for-core basis. However, both multicore systems are beaten by the FPGA multicore hybrid system by >50%.
2012-08-01
...techniques and STEAM imager. It couples the high-speed capability of the STEAM imager and differential phase contrast imaging of DIC/Nomarski microscopy...On 10 TPE chips, we obtained 9 homogeneous and strong bonds, the failed bond being due to operator error and the presence of air bubbles in the TPE...instruments, structural dynamics, and microelectromechanical systems (MEMS) via laser-scanning surface vibrometry, and observation of biomechanical motility
Rigorous vector wave propagation for arbitrary flat media
NASA Astrophysics Data System (ADS)
Bos, Steven P.; Haffert, Sebastiaan Y.; Keller, Christoph U.
2017-08-01
Precise modelling of the (off-axis) point spread function (PSF) to identify geometrical and polarization aberrations is important for many optical systems. In order to characterise the PSF of the system in all Stokes parameters, an end-to-end simulation of the system has to be performed in which Maxwell's equations are rigorously solved. We present the first results of a Python code that we are developing to perform multiscale end-to-end wave propagation simulations that include all relevant physics. Currently we can handle plane-parallel near- and far-field vector diffraction effects of propagating waves in homogeneous isotropic and anisotropic materials, refraction and reflection at flat parallel surfaces, interference effects in thin films, and unpolarized light. We show that the code has a numerical precision on the order of 10⁻¹⁶ for non-absorbing isotropic and anisotropic materials. For absorbing materials the precision is on the order of 10⁻⁸. The capabilities of the code are demonstrated by simulating a converging beam reflecting from a flat aluminium mirror at normal incidence.
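Reflection and refraction at flat parallel surfaces, one of the listed capabilities, is governed by the textbook Fresnel equations; the sketch below shows the amplitude coefficients such a code must reproduce. It is illustrative only, not code from the described package, and the aluminium-like index is a made-up value.

```python
import numpy as np

def fresnel(n1, n2, theta_i):
    """Amplitude reflection coefficients (s and p polarization) at a flat
    interface between media with (possibly complex) indices n1 and n2."""
    cos_i = np.cos(theta_i)
    # Snell's law; the complex sqrt covers absorption and total internal
    # reflection in one expression
    cos_t = np.sqrt(1 - (n1 / n2 * np.sin(theta_i)) ** 2 + 0j)
    rs = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    rp = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    return rs, rp

# Normal incidence on an aluminium-like index: |r|^2 is the reflectance
rs, rp = fresnel(1.0, 1.2 + 7.0j, 0.0)
print(abs(rs) ** 2, abs(rp) ** 2)
```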
Users manual for the Chameleon parallel programming tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gropp, W.; Smith, B.
1993-06-01
Message passing is a common method for writing programs for distributed-memory parallel computers. Unfortunately, the lack of a standard for message passing has hampered the construction of portable and efficient parallel programs. In an attempt to remedy this problem, a number of groups have developed their own message-passing systems, each with its own strengths and weaknesses. Chameleon is a second-generation system of this type. Rather than replacing these existing systems, Chameleon is meant to supplement them by providing a uniform way to access many of these systems. Chameleon's goals are to (a) be very lightweight (low overhead), (b) be highly portable, and (c) help standardize program startup and the use of emerging message-passing operations such as collective operations on subsets of processors. Chameleon also provides a way to port programs written using PICL or Intel NX message passing to other systems, including collections of workstations. Chameleon is tracking the Message-Passing Interface (MPI) draft standard and will provide both an MPI implementation and an MPI transport layer. Chameleon provides support for heterogeneous computing by using p4 and PVM. Chameleon's support for homogeneous computing includes the portable libraries p4, PICL, and PVM and vendor-specific implementations for Intel NX, IBM EUI (SP-1), and Thinking Machines CMMD (CM-5). Support for Ncube and PVM 3.x is also under development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostin, Mikhail; Mokhov, Nikolai; Niita, Koji
A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran 77, Fortran 90 or C. The module is largely independent of the radiation transport codes it can be used with, and is connected to the codes by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It is possible to use it with other codes such as PHITS, FLUKA and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where interference from other users is possible.
First passage times for a tracer particle in single file diffusion and fractional Brownian motion.
Sanders, Lloyd P; Ambjörnsson, Tobias
2012-05-07
We investigate the full functional form of the first passage time density (FPTD) of a tracer particle in a single-file diffusion (SFD) system whose population is (i) homogeneous, i.e., all particles having the same diffusion constant, and (ii) heterogeneous, with diffusion constants drawn from a heavy-tailed power-law distribution. In parallel, the full FPTD for fractional Brownian motion [fBm, defined by the Hurst parameter H ∈ (0, 1)] is studied, of interest here as fBm and SFD systems belong to the same universality class. Extensive stochastic (non-Markovian) SFD and fBm simulations are performed and compared to two analytical Markovian techniques: the method of images approximation (MIA) and the Wilemski-Fixman approximation (WFA). We find that the MIA cannot approximate well any temporal scale of the SFD FPTD. Our exact inversion of the Wilemski-Fixman integral equation captures the long-time power-law exponent, when H ≥ 1/3, as predicted by Molchan [Commun. Math. Phys. 205, 97 (1999)] for fBm. When H < 1/3, which includes homogeneous SFD (H = 1/4) and heterogeneous SFD (H < 1/4), the WFA fails to agree with any temporal scale of the simulations and with Molchan's long-time result. SFD systems are compared to their fBm counterparts; in the homogeneous system both scaled FPTDs agree on all temporal scales, including the result by Molchan, thus affirming that SFD and fBm dynamics belong to the same universality class. In the heterogeneous case, SFD and fBm results for heterogeneity-averaged FPTDs agree in the asymptotic time limit. The non-averaged heterogeneous SFD systems display a lack of self-averaging. An exponential with a power-law argument, multiplied by a power-law pre-factor, is shown to describe well the FPTD for all times for homogeneous SFD and sub-diffusive fBm systems.
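For orientation, the MIA transplants the Markovian image construction to non-Markovian processes. For a Brownian particle with diffusion constant $D$ starting a distance $x_0$ from an absorbing boundary, the textbook (Lévy-Smirnov) FPTD is

$$f(t) = \frac{x_0}{\sqrt{4\pi D t^{3}}}\,\exp\!\left(-\frac{x_0^{2}}{4Dt}\right) \sim t^{-3/2} \quad (t \to \infty),$$

whereas Molchan's long-time tail for fBm is $f(t) \sim t^{H-2}$, which reduces to $t^{-3/2}$ only at $H = 1/2$. These are standard results quoted for context, not findings of the paper; they indicate why a Markovian approximation can misrepresent SFD, for which $H = 1/4$.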
A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)
NASA Technical Reports Server (NTRS)
Straeter, T. A.; Markos, A. T.
1975-01-01
A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.
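For reference, the homogeneity assumption can be stated through an Euler-type identity: if $f(x) - f(x^*)$ is homogeneous of degree $\gamma$ about the minimizer $x^*$, then

$$(x - x^*)^{\mathsf{T}} \nabla f(x) = \gamma \left( f(x) - f(x^*) \right),$$

so each gradient evaluation supplies one equation relating the unknowns formed from $x^*$, $f(x^*)$ and $\gamma$. This is the standard identity underlying the Jacobson-Oksman family of methods; the precise parametrization used by the serial method and its parallel modification is left to the cited papers.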
NASA Astrophysics Data System (ADS)
Bolis, A.; Cantwell, C. D.; Moxey, D.; Serson, D.; Sherwin, S. J.
2016-09-01
A hybrid parallelisation technique for distributed memory systems is investigated for a coupled Fourier-spectral/hp element discretisation of domains characterised by geometric homogeneity in one or more directions. The performance of the approach is mathematically modelled in terms of operation count and communication costs for identifying the most efficient parameter choices. The model is calibrated to target a specific hardware platform after which it is shown to accurately predict the performance in the hybrid regime. The method is applied to modelling turbulent flow using the incompressible Navier-Stokes equations in an axisymmetric pipe and square channel. The hybrid method extends the practical limitations of the discretisation, allowing greater parallelism and reduced wall times. Performance is shown to continue to scale when both parallelisation strategies are used.
Preparation of Protein Samples for NMR Structure, Function, and Small Molecule Screening Studies
Acton, Thomas B.; Xiao, Rong; Anderson, Stephen; Aramini, James; Buchwald, William A.; Ciccosanti, Colleen; Conover, Ken; Everett, John; Hamilton, Keith; Huang, Yuanpeng Janet; Janjua, Haleema; Kornhaber, Gregory; Lau, Jessica; Lee, Dong Yup; Liu, Gaohua; Maglaqui, Melissa; Ma, Lichung; Mao, Lei; Patel, Dayaban; Rossi, Paolo; Sahdev, Seema; Shastry, Ritu; Swapna, G.V.T.; Tang, Yeufeng; Tong, Saichiu; Wang, Dongyan; Wang, Huang; Zhao, Li; Montelione, Gaetano T.
2014-01-01
In this chapter, we concentrate on the production of high quality protein samples for NMR studies. In particular, we provide an in-depth description of recent advances in the production of NMR samples and their synergistic use with recent advancements in NMR hardware. We describe the protein production platform of the Northeast Structural Genomics Consortium, and outline our high-throughput strategies for producing high quality protein samples for nuclear magnetic resonance (NMR) studies. Our strategy is based on the cloning, expression and purification of 6X-His-tagged proteins using T7-based Escherichia coli systems and isotope enrichment in minimal media. We describe 96-well ligation-independent cloning and analytical expression systems, parallel preparative scale fermentation, and high-throughput purification protocols. The 6X-His affinity tag allows for a similar two-step purification procedure implemented in a parallel high-throughput fashion that routinely results in purity levels sufficient for NMR studies (> 97% homogeneity). Using this platform, the protein open reading frames of over 17,500 different targeted proteins (or domains) have been cloned as over 28,000 constructs. Nearly 5,000 of these proteins have been purified to homogeneity in tens of milligram quantities (see Summary Statistics, http://nesg.org/statistics.html), resulting in more than 950 new protein structures, including more than 400 NMR structures, deposited in the Protein Data Bank. The Northeast Structural Genomics Consortium pipeline has been effective in producing protein samples of both prokaryotic and eukaryotic origin. Although this paper describes our entire pipeline for producing isotope-enriched protein samples, it focuses on the major updates introduced during the last 5 years (Phase 2 of the National Institute of General Medical Sciences Protein Structure Initiative). Our advanced automated and/or parallel cloning, expression, purification, and biophysical screening technologies are suitable for implementation in a large individual laboratory or by a small group of collaborating investigators for structural biology, functional proteomics, ligand screening and structural genomics research. PMID:21371586
Photometric studies of Saturn's ring and eclipses of the Galilean satellites
NASA Technical Reports Server (NTRS)
Brunk, W. E.
1972-01-01
Reliable data defining the photometric function of the Saturn ring system at visual wavelengths are interpreted in terms of a simple scattering model. To facilitate the analysis, new photographic photometry of the ring has been carried out and homogeneous measurements of the mean surface brightness are presented. The ring model adopted is a plane-parallel slab of isotropically scattering particles; the single scattering albedo and the perpendicular optical thickness are both arbitrary. Results indicate that primary scattering is inadequate to describe the photometric properties of the ring: multiple scattering predominates for all angles of tilt with respect to the Sun and earth. In addition, the scattering phase function of the individual particles is significantly anisotropic: they scatter preferentially towards the Sun. Photoelectric photometry of Ganymede during its eclipse by Jupiter indicates that neither a simple reflecting-layer model nor a semi-infinite homogeneous scattering model provides an adequate physical description of the atmosphere of Jupiter.
Fujita, M; Ohta, H; Uezato, T
1981-01-01
Brush borders free of nuclei were isolated by repeated homogenization and centrifugation in iso-osmotic medium. They showed typical morphology under electron microscopy. The mean recovery and enrichment of alkaline phosphatase activity in the brush-border fraction were 50% and 17.5-fold respectively. gamma-Glutamyl transpeptidase showed a close parallelism with alkaline phosphatase and sucrase in subcellular distribution. Microvillar membranes were purified from isolated brush borders; they showed a further enrichment for alkaline phosphatase and were composed of homogeneous vesicles. Both brush-border and microvillar-membrane preparations were analysed for contamination by basolateral and endoplasmic-reticular membranes. Sodium dodecyl sulphate/polyacrylamide-gel electrophoresis of the microvillar-membrane preparation in six different systems revealed approx. 40 components in the mol.wt. range 15 000-232 000. They were grouped into seven major classes on the basis of molecular weight and electrophoretic patterns. PMID:7317008
A Homogenization Approach for Design and Simulation of Blast Resistant Composites
NASA Astrophysics Data System (ADS)
Sheyka, Michael
Structural composites have been used in aerospace and structural engineering due to their high strength-to-weight ratio, and composite laminates have been successfully and extensively used in blast mitigation. This dissertation examines the use of the homogenization approach to design and simulate blast-resistant composites. Three case studies are performed to examine the usefulness of different methods that may be used in designing and optimizing composite plates for blast resistance. The first case study utilizes a single-degree-of-freedom system to simulate the blast, together with a reliability-based approach; it examines homogeneous plates, and the optimal stacking sequence and plate thicknesses are determined. The second and third case studies use the homogenization method to calculate the properties of a composite unit cell made of two different materials, and the methods are integrated with dynamic simulation environments and advanced optimization algorithms. The second case study is 2D and uses an implicit blast simulation, while the third case study is 3D and simulates the blast using the explicit method; both rely on multi-objective genetic algorithms for the optimization, and Pareto optimal solutions are determined. Case study 3 is an integrative method for determining the optimal stacking sequence, microstructure and plate thicknesses. The validity of the different methods, such as homogenization, reliability, explicit blast modeling and multi-objective genetic algorithms, is discussed. A possible extension of the methods to include strain-rate effects and parallel computation is also examined.
Use of multi-coil parallel-gap resonators for co-registration EPR/NMR imaging
NASA Astrophysics Data System (ADS)
Kawada, Yuuki; Hirata, Hiroshi; Fujii, Hirotada
2007-01-01
This article reports experimental investigations on the use of RF resonators for continuous-wave electron paramagnetic resonance (cw-EPR) and proton nuclear magnetic resonance (NMR) imaging. We developed a composite resonator system with multi-coil parallel-gap resonators for co-registration EPR/NMR imaging. The resonance frequencies of each resonator were 21.8 MHz for NMR and 670 MHz for EPR. A smaller resonator (22 mm in diameter) for use in EPR was placed coaxially in a larger resonator (40 mm in diameter) for use in NMR. RF magnetic fields in the composite resonator system were visualized by measuring a homogeneous 4-hydroxy-2,2,6,6-tetramethyl-piperidinooxy (4-hydroxy-TEMPO) solution in a test tube. A phantom of five tubes containing distilled water and 4-hydroxy-TEMPO solution was also measured to demonstrate the potential usefulness of this composite resonator system in biomedical science. An image of unpaired electrons was obtained for 4-hydroxy-TEMPO in three tubes, and was successfully mapped on the proton image for five tubes. Technical problems in the implementation of a composite resonator system are discussed with regard to co-registration EPR/NMR imaging for animal experiments.
Petascale turbulence simulation using a highly parallel fast multipole method on GPUs
NASA Astrophysics Data System (ADS)
Yokota, Rio; Barba, L. A.; Narumi, Tetsu; Yasuoka, Kenji
2013-03-01
This paper reports large-scale direct numerical simulations of homogeneous-isotropic fluid turbulence, achieving sustained performance of 1.08 petaflop/s on GPU hardware using single precision. The simulations use a vortex particle method to solve the Navier-Stokes equations, with a highly parallel fast multipole method (FMM) as numerical engine, and match the current record in mesh size for this application, a cube of 4096³ computational points solved with a spectral method. The standard numerical approach used in this field is the pseudo-spectral method, relying on the FFT algorithm as the numerical engine. The particle-based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo-spectral method, using a trusted code. In terms of parallel performance, weak scaling results show the FMM-based vortex method achieving 74% parallel efficiency on 4096 processes (one GPU per MPI process, 3 GPUs per node of the TSUBAME-2.0 system). The FFT-based spectral method is able to achieve just 14% parallel efficiency on the same number of MPI processes (using only CPU cores), due to the all-to-all communication pattern of the FFT algorithm. The calculation time for one time step was 108 s for the vortex method and 154 s for the spectral method, under these conditions. Computing with 69 billion particles, this work exceeds by an order of magnitude the largest vortex-method calculations to date.
Schmitter, Sebastian; Wu, Xiaoping; Auerbach, Edward J; Adriany, Gregor; Pfeuffer, Josef; Hamm, Michael; Uğurbil, Kâmil; van de Moortele, Pierre-François
2014-05-01
Ultrahigh magnetic fields of 7 T or higher have proven to significantly enhance the contrast in time-of-flight (TOF) imaging, one of the most commonly used non-contrast-enhanced magnetic resonance angiography techniques. Compared with lower field strength, however, the required radiofrequency (RF) power is increased at 7 T and the contrast obtained with a conventional head transmit RF coil is typically spatially heterogeneous.In this work, we addressed the contrast heterogeneity in multislab TOF acquisitions by optimizing the excitation flip angle homogeneity while constraining the RF power using 3-dimensional tailored RF pulses ("spokes") with a 16-channel parallel transmission system and a 16-channel transceiver head coil. We investigated in simulations and in vivo experiments flip angle homogeneity and angiogram quality with a same 3-slab TOF protocol for different excitations including 1-, 2-, and 3-spoke parallel transmit RF pulses and compared the results with a circularly polarized (CP) phase setting similar to a birdcage excitation. B1 and B0 calibration maps were obtained in multiple slices, and the RF pulse for each slab was designed on the basis of 3 calibration slices located at the bottom/middle/top of each slab, respectively. By design, all excitations were computed to generate the same total RF power for the same flip angle. In 8 subjects, we quantified the excitation homogeneity and the distribution of the RF power to individual channels. In addition, we investigated the consequences of local flip angle variations at the junction between adjacent slabs as well as the impact of ΔB0 on image quality. The flip angle heterogeneity, expressed as the coefficient of variation, averaged over all volunteers and all slabs could be reduced from 29.4% for CP mode excitation to 14.1% for a 1-spoke excitation and to 7.3% for 2-spoke excitations. A separate detailed analysis shows only a marginal improvement for 3-spoke compared with the 2-spoke excitation. The strong improvement in flip angle homogeneity particularly impacted the junction between adjacent TOF slabs, where significant residual artifacts observed with 1-spoke excitation could be efficiently mitigated using a 2-spoke excitation with same RF power and same average flip angle. Although the total RF power is maintained at the same level than that in CP mode excitation, the energy distribution is fairly heterogeneous through the 16 transmit channels for 1- and 2-spoke excitations, with the highest energy for 1 channel being a factor of 2.4 (1 spoke) and 2.2 (2 spokes) higher than that in CP mode. In vivo experiments demonstrated the necessity for including ΔB0 spatial variations during 2-spoke RF pulse design, particularly in areas with strong local susceptibility variations such as the lower frontal lobe. Significant improvement in excitation fidelity leading to improved TOF contrast, particularly in the brain periphery, as well as smooth slab transitions can be achieved with 2-spoke excitation while maintaining the same excitation energy as that in CP mode. These results suggest that expanding parallel transmit methods, including the use of multidimensional spatially selective excitation, will also be very beneficial for other techniques, such as perfusion imaging.
Schmitter, Sebastian; Wu, Xiaoping; Auerbach, Edward J.; Adriany, Gregor; Pfeuffer, Josef; Hamm, Michael; Ugurbil, Kamil; Van de Moortele, Pierre-Francois
2015-01-01
Objectives Ultra high magnetic fields of ≥7 Tesla have proven to significantly enhance the contrast in time-of-flight (TOF) imaging, one of the most commonly used non-contrast enhanced MR angiography techniques. Compared to lower field strength, however, the required RF power is increased at 7 Tesla and the contrast obtained with a conventional head transmit RF coil is typically spatially heterogeneous. In this work we address the contrast heterogeneity in multi-slab TOF acquisitions by optimizing the excitation flip angle homogeneity while constraining the RF power using 3D tailored RF pulses (“spokes”) with a 16 channel parallel transmission system and a 16 channel transceiver head coil. Material and Methods We investigate in simulations and in-vivo experiments flip angle homogeneity and angiogram quality with a same 3-slab TOF protocol for different excitations including 1-, 2- and 3-spoke parallel transmit RF pulses and compare the results with a circularly polarized (CP) phase setting similar to a birdcage excitation. B1 and B0 calibration maps were obtained in multiple slices and the RF pulse for each slab was designed based on 3 calibration slices located at the bottom/middle/top of each slab respectively. By design, all excitations were computed to generate the same total RF power for the same flip angle. In 8 subjects we quantify the excitation homogeneity and the distribution of the RF power to individual channels. In addition, we investigate the consequences of local flip angle variations at the junction between adjacent slabs as well as the impact of ΔB0 on image quality. Results The flip angle heterogeneity, expressed as the coefficient of variation, averaged over all volunteers and all slabs could be reduced from 29.4% for CP mode excitation to 14.1% for a 1-spoke excitation and to 7.3% for a 2-spoke excitations. A separate detailed analysis shows only a marginal improvement for 3-spoke compared to the 2-spoke excitation. The strong improvement in flip angle homogeneity particularly impacted the junction between adjacent TOF slabs, where significant residual artifacts observed with 1-spoke excitation could be efficiently mitigated using a 2-spoke excitation with same RF power and same average flip angle. Even though the total RF power is maintained at the same level than in CP mode excitation, the energy distribution is fairly heterogeneous through the 16 transmit channels for 1- and 2-spoke excitation, with the highest energy for one channel being a factor of 2.4 (1-spoke) and 2.2 (2-spoke) higher than in CP mode. In vivo experiments demonstrate the necessity of including ΔB0 spatial variations during 2-spoke RF pulse design, in particular in areas with strong local susceptibility variations such as the lower frontal lobe. Conclusion Significant improvement in excitation fidelity leading to improved TOF contrast, particularly in the brain periphery, as well as smooth slab transitions can be achieved with 2-spoke excitation while maintaining the same excitation energy as in CP mode. These results suggest that expanding parallel transmit methods, including the use of multi-dimensional spatially selective excitation, will also be very beneficial for other techniques, such as perfusion imaging. PMID:24598439
Parallel Excitation for B-Field Insensitive Fat-Saturation Preparation
Heilman, Jeremiah A.; Derakhshan, Jamal D.; Riffe, Matthew J.; Gudino, Natalia; Tkach, Jean; Flask, Chris A.; Duerk, Jeffrey L.; Griswold, Mark A.
2016-01-01
Multichannel transmission has the potential to improve many aspects of MRI through a new paradigm in excitation. In this study, multichannel transmission is used to address the effects that variations in B0 homogeneity have on fat-saturation preparation through the use of the frequency, phase, and amplitude degrees of freedom afforded by independent transmission channels. B1 homogeneity is intrinsically included via use of coil sensitivities in calculations. A new method, parallel excitation for B-field insensitive fat-saturation preparation, can achieve fat saturation in 89% of voxels with Mz ≤ 0.1 in the presence of ±4 ppm B0 variation, where traditional CHESS methods achieve only 40% in the same conditions. While there has been much progress to apply multichannel transmission at high field strengths, particular focus is given here to application of these methods at 1.5 T. PMID:22247080
Ilyin, S E; Plata-Salamán, C R
2000-02-15
Homogenization of tissue samples is a common first step in the majority of current protocols for RNA, DNA, and protein isolation. This report describes a simple device for centrifugation-mediated homogenization of tissue samples. The method presented is applicable to RNA, DNA, and protein isolation, and we show examples where high quality total cell RNA, DNA, and protein were obtained from brain and other tissue samples. The advantages of the approach presented include: (1) a significant reduction in time investment relative to hand-driven or individual motorized-driven pestle homogenization; (2) easy construction of the device from inexpensive parts available in any laboratory; (3) high replicability in the processing; and (4) the capacity for the parallel processing of multiple tissue samples, thus allowing higher efficiency, reliability, and standardization.
Non-Cartesian Parallel Imaging Reconstruction
Wright, Katherine L.; Hamilton, Jesse I.; Griswold, Mark A.; Gulani, Vikas; Seiberlich, Nicole
2014-01-01
Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be employed to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the non-homogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian GRAPPA, and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. PMID:24408499
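To make the reconstruction structure concrete, below is a minimal numpy sketch of the conjugate-gradient iteration at the heart of CG SENSE, using an explicit brute-force non-Cartesian DFT in place of the fast NUFFT a practical implementation would use; the image size, coil sensitivities, and sampling pattern are synthetic and purely illustrative.

```python
# Minimal CG-SENSE sketch (illustrative, not the review's implementation):
# solve the SENSE normal equations E^H E x = E^H d by conjugate gradients,
# with a brute-force non-Cartesian DFT standing in for a fast NUFFT.
import numpy as np

rng = np.random.default_rng(0)
N = 8                                    # image is N x N
n_coils, n_samples = 4, 3 * N * N // 2   # modest sampling per coil

# Pixel grid and random (non-Cartesian) k-space sample locations.
x_pos, y_pos = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
r = np.stack([x_pos.ravel(), y_pos.ravel()], axis=1)       # (N*N, 2)
k = rng.uniform(-np.pi, np.pi, size=(n_samples, 2))        # radians/pixel

# Smooth synthetic coil sensitivities (assumed known in CG SENSE).
sens = np.exp(-((r[:, 0][None] - rng.uniform(0, N, (n_coils, 1))) ** 2 +
                (r[:, 1][None] - rng.uniform(0, N, (n_coils, 1))) ** 2) / N**2)

# Full encoding matrix: rows = (coil, sample) pairs, columns = pixels.
dft = np.exp(-1j * k @ r.T)                                # (n_samples, N*N)
E = (sens[:, None, :] * dft[None]).reshape(n_coils * n_samples, N * N)

x_true = rng.standard_normal(N * N)
d = E @ x_true                                             # simulated data

# Conjugate gradients on the Hermitian system A x = b, A = E^H E.
A, b = E.conj().T @ E, E.conj().T @ d
x = np.zeros(N * N, dtype=complex)
res = b - A @ x
p, rs = res.copy(), np.vdot(res, res)
for _ in range(200):
    Ap = A @ p
    alpha = rs / np.vdot(p, Ap)
    x += alpha * p
    res -= alpha * Ap
    rs_new = np.vdot(res, res)
    if np.sqrt(abs(rs_new)) < 1e-8:
        break
    p = res + (rs_new / rs) * p
    rs = rs_new

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```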
Slow dynamics in glasses: A comparison between theory and experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, J. C.
Minimalist theories of complex systems are broadly of two kinds: mean field and axiomatic. So far, all theories of complex properties absent from simple systems and intrinsic to glasses are axiomatic. Stretched Exponential Relaxation (SER) is the prototypical complex temporal property of glasses, discovered by Kohlrausch 150 years ago, and now observed almost universally in microscopically homogeneous, complex nonequilibrium materials, including luminescent electronic Coulomb glasses. A critical comparison of alternative axiomatic theories with both numerical simulations and experiments strongly favors channeled dynamical trap models over static percolative or energy landscape models. The topics discussed cover those reported since the author's review article in 1996, with an emphasis on parallels between channel bifurcation in electronic and molecular relaxation.
NASA Technical Reports Server (NTRS)
Denning, Peter J.; Tichy, Walter F.
1990-01-01
Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to scientific computing. Architectures designated as multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD) have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.
A software architecture for multidisciplinary applications: Integrating task and data parallelism
NASA Technical Reports Server (NTRS)
Chapman, Barbara; Mehrotra, Piyush; Vanrosendale, John; Zima, Hans
1994-01-01
Data parallel languages such as Vienna Fortran and HPF can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are of a multidisciplinary and heterogeneous nature and thus do not fit well into the data parallel paradigm. In this paper we present new Fortran 90 language extensions to fill this gap. Tasks can be spawned as asynchronous activities in a homogeneous or heterogeneous computing environment; they interact by sharing access to Shared Data Abstractions (SDA's). SDA's are an extension of Fortran 90 modules, representing a pool of common data, together with a set of Methods for controlled access to these data and a mechanism for providing persistent storage. Our language supports the integration of data and task parallelism as well as nested task parallelism and thus can be used to express multidisciplinary applications in a natural and efficient way.
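To make the Shared Data Abstraction idea concrete, here is a loose Python analog (illustrative only; the paper's actual proposal is a set of Fortran 90 language extensions, and the class and method names below are invented for this sketch):

```python
# Illustrative Python analog of a Shared Data Abstraction (SDA): a pool of
# shared data whose access methods are serialized, so asynchronously spawned
# tasks interact only through controlled entry points. All names here are
# invented for the sketch; the paper's mechanism is a Fortran 90 extension.
import threading
from concurrent.futures import ThreadPoolExecutor

class BoundaryData:
    """Shared pool holding an interface value exchanged between disciplines."""
    def __init__(self):
        self._lock = threading.Lock()
        self._pressure = 0.0
    def put_pressure(self, value):        # controlled-access method
        with self._lock:
            self._pressure = value
    def get_pressure(self):               # controlled-access method
        with self._lock:
            return self._pressure

def flow_solver(sda):        # one "discipline", spawned as an async task
    for step in range(5):
        sda.put_pressure(1.0 + 0.1 * step)

def structure_solver(sda):   # a second discipline reading the shared pool
    return [sda.get_pressure() for _ in range(5)]

sda = BoundaryData()
with ThreadPoolExecutor() as pool:
    f1 = pool.submit(flow_solver, sda)
    f2 = pool.submit(structure_solver, sda)
    f1.result()
    print("structure solver saw:", f2.result())
```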
Concurrent Collections (CnC): A new approach to parallel programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knobe, Kathleen
2010-05-07
A common approach in designing parallel languages is to provide some high level handles to manipulate the use of the parallel platform. This exposes some aspects of the target platform, for example, shared vs. distributed memory. It may expose some but not all types of parallelism, for example, data parallelism but not task parallelism. This approach must find a balance between the desire to provide a simple view for the domain expert and provide sufficient power for tuning. This is hard for any given architecture and harder if the language is to apply to a range of architectures. Either simplicity or power is lost. Instead of viewing the language design problem as one of providing the programmer with high level handles, we view the problem as one of designing an interface. On one side of this interface is the programmer (domain expert) who knows the application but needs no knowledge of any aspects of the platform. On the other side of the interface is the performance expert (programmer or program) who demands maximal flexibility for optimizing the mapping to a wide range of target platforms (parallel / serial, shared / distributed, homogeneous / heterogeneous, etc.) but needs no knowledge of the domain. Concurrent Collections (CnC) is based on this separation of concerns. The talk will present CnC and its benefits. About the speaker. Kathleen Knobe has focused throughout her career on parallelism especially compiler technology, runtime system design and language design. She worked at Compass (aka Massachusetts Computer Associates) from 1980 to 1991 designing compilers for a wide range of parallel platforms for Thinking Machines, MasPar, Alliant, Numerix, and several government projects. In 1991 she decided to finish her education. After graduating from MIT in 1997, she joined Digital Equipment's Cambridge Research Lab (CRL). She stayed through the DEC/Compaq/HP mergers and when CRL was acquired and absorbed by Intel. She currently works in the Software and Services Group / Technology Pathfinding and Innovation.
A permanent MRI magnet for magic angle imaging having its field parallel to the poles.
McGinley, John V M; Ristic, Mihailo; Young, Ian R
2016-10-01
A novel design of open permanent magnet is presented, in which the magnetic field is oriented parallel to the planes of its poles. The paper describes the methods whereby such a magnet can be designed with a field homogeneity suitable for Magnetic Resonance Imaging (MRI). Its primary purpose is to take advantage of the Magic Angle effect in MRI of human extremities, particularly the knee joint, by being capable of rotating the direction of the main magnetic field B0 about two orthogonal axes around a stationary subject and achieve all possible angulations. The magnet comprises a parallel pair of identical profiled arrays of permanent magnets backed by a flat steel yoke such that access in lateral directions is practical. The paper describes the detailed optimization procedure from a target 150 mm DSV to the achievement of a measured uniform field over a 130 mm DSV. Actual performance data of the manufactured magnet, including shimming and a sample image, is presented. The overall magnet system mounting mechanism is presented, including two orthogonal axes of rotation of the magnet about its isocentre. Copyright © 2016 Elsevier Inc. All rights reserved.
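For background, the Magic Angle effect the magnet is designed to exploit occurs where the angular factor of the dipolar coupling vanishes; the condition below is standard MR physics rather than a result of the paper.

```latex
% Magic angle condition (standard background, not taken from the paper):
% the dipolar coupling scales as (3 cos^2(theta) - 1), which vanishes at
3\cos^{2}\theta_{m} - 1 = 0
\quad\Longrightarrow\quad
\theta_{m} = \arccos\!\left(\tfrac{1}{\sqrt{3}}\right) \approx 54.7^{\circ}
```

Rotating B0 about two orthogonal axes around the stationary subject is what allows this angle to be reached for arbitrary tissue fiber orientations.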
²²²Rn transport in a fractured crystalline rock aquifer: Results from numerical simulations
Folger, P.F.; Poeter, E.; Wanty, R.B.; Day, W.; Frishman, D.
1997-01-01
Dissolved ²²²Rn concentrations in ground water from a small wellfield underlain by fractured Middle Proterozoic Pikes Peak Granite southwest of Denver, Colorado range from 124 to 840 kBq m⁻³ (3360-22700 pCi L⁻¹). Numerical simulations of flow and transport between two wells show that differences in equivalent hydraulic aperture of transmissive fractures, assuming a simplified two-fracture system and the parallel-plate model, can account for the different ²²²Rn concentrations in each well under steady-state conditions. Transient flow and transport simulations show that ²²²Rn concentrations along the fracture profile are influenced by ²²²Rn concentrations in the adjoining fracture and depend on boundary conditions, proximity of the pumping well to the fracture intersection, transmissivity of the conductive fractures, and pumping rate. Non-homogeneous distribution (point sources) of ²²²Rn parent radionuclides, uranium and ²²⁶Ra, can strongly perturb the dissolved ²²²Rn concentrations in a fracture system. Without detailed information on the geometry and hydraulic properties of the connected fracture system, it may be impossible to distinguish the influence of factors controlling ²²²Rn distribution or to determine the location of ²²²Rn point sources in the field in areas where ground water exhibits moderate ²²²Rn concentrations. Flow and transport simulations of a hypothetical multifracture system consisting of ten connected fractures, each 10 m in length with fracture apertures ranging from 0.1 to 1.0 mm, show that ²²²Rn concentrations at the pumping well can vary significantly over time. Assuming parallel-plate flow, transmissivities of the hypothetical system vary over four orders of magnitude because transmissivity varies with the cube of fracture aperture. The extreme hydraulic heterogeneity of the simple hypothetical system leads to widely ranging ²²²Rn values, even assuming homogeneous distribution of uranium and ²²⁶Ra along fracture walls. Consequently, it is concluded that ²²²Rn concentrations vary, not only with the geometric and stress factors noted above, but also according to local fracture aperture distribution, local groundwater residence time, and flux of ²²²Rn from parent radionuclides along fracture walls.
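As a concrete rendering of the parallel-plate ("cubic-law") scaling the simulations rely on, the sketch below evaluates the standard per-unit-width transmissivity T = ρgb³/(12μ) over the quoted aperture range; the formula is textbook hydrogeology and the parameter values are illustrative, not taken from the paper.

```python
# Cubic-law ("parallel-plate") fracture transmissivity, the scaling the
# abstract appeals to: per unit width, T = rho * g * b^3 / (12 * mu).
# Standard hydrogeology form; fluid properties below are for water at ~20 C.
rho = 998.0    # water density, kg/m^3
g   = 9.81     # gravitational acceleration, m/s^2
mu  = 1.0e-3   # dynamic viscosity, Pa*s

def transmissivity(aperture_m):
    return rho * g * aperture_m ** 3 / (12.0 * mu)

for b_mm in (0.1, 0.3, 1.0):   # aperture range used in the simulations
    b = b_mm * 1e-3
    print(f"b = {b_mm:4.1f} mm  ->  T = {transmissivity(b):.2e} m^2/s")
```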
Schmitter, Sebastian; DelaBarre, Lance; Wu, Xiaoping; Greiser, Andreas; Wang, Dingxin; Auerbach, Edward J; Vaughan, J Thomas; Uğurbil, Kâmil; Van de Moortele, Pierre-François
2013-11-01
Higher signal to noise ratio (SNR) and improved contrast have been demonstrated at ultra-high magnetic fields (≥7 Tesla [T]) in multiple targets, often with multi-channel transmit methods to address the deleterious impact on tissue contrast due to spatial variations in B1+ profiles. When imaging the heart at 7T, however, respiratory and cardiac motion, as well as B0 inhomogeneity, greatly increase the methodological challenge. In this study we compare two-spoke parallel transmit (pTX) RF pulses with static B1+ shimming in cardiac imaging at 7T. Using a 16-channel pTX system, slice-selective two-spoke pTX pulses and static B1+ shimming were applied in cardiac CINE imaging. B1+ and B0 mapping required modified cardiac triggered sequences. Excitation homogeneity and RF energy were compared in different imaging orientations. Two-spoke pulses provide higher excitation homogeneity than B1+ shimming, especially in the more challenging posterior region of the heart. The peak value of channel-wise RF energy was reduced, allowing for a higher flip angle, hence increased tissue contrast. Image quality with two-spoke excitation proved to be stable throughout the entire cardiac cycle. Two-spoke pTX excitation has been successfully demonstrated in the human heart at 7T, with improved image quality and reduced RF pulse energy when compared with B1+ shimming. Copyright © 2013 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Xinyu; Strickland, Daniel J.; Derlet, Peter M.
We report on the use of quantitative in situ microcompression experiments in a scanning electron microscope to systematically investigate the effect of self-ion irradiation damage on the full plastic response of <111> Ni. In addition to the well-known irradiation-induced increases in the yield and flow strengths with increasing dose, we measure substantial changes in plastic flow intermittency behavior, manifested as stress drops accompanying energy releases as the driven material transits critical states. At low irradiation doses, the magnitude of stress drops is reduced relative to the unirradiated material and plastic slip proceeds on multiple slip systems, leading to quasi-homogeneous plastic flow. In contrast, highly irradiated specimens exhibit pronounced shear localization on parallel slip planes, which we ascribe to the onset of the defect-free channels normally seen in bulk irradiated materials. Our in situ testing system and approach allows for a quantitative study of the energy release and dynamics associated with defect-free channel formation and subsequent localization. As a result, this study provides fundamental insight into the nature of interactions between mobile dislocations and irradiation-mediated, damage-dependent defect structures.
Cybulski, Olgierd; Jakiela, Slawomir; Garstecki, Piotr
2015-12-01
The simplest microfluidic network (a loop) comprises two parallel channels with a common inlet and a common outlet. Recent studies that assumed a constant cross section of the channels along their length have shown that the sequence of droplets entering the left (L) or right (R) arm of the loop can present either a uniform distribution of choices (e.g., RLRLRL...) or long sequences of repeated choices (RRR...LLL), with all the intermediate permutations being dynamically equivalent and virtually equally probable to be observed. We use experiments and computer simulations to show that even a small variation of the cross section along the channels completely shifts the dynamics either into a strong preference for highly grouped patterns (RRR...LLL) that generate system-size oscillations in flow or, just the opposite, to patterns that distribute the droplets homogeneously between the arms of the loop. We also show the importance of noise in the process of self-organization of the spatiotemporal patterns of droplets. Our results provide guidelines for the rational design of systems that reproducibly produce either grouped or homogeneous sequences of droplets flowing in microfluidic networks.
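A minimal sketch of the kind of resistance-based routing model commonly used for such loops follows (an idealization for illustration; this is not the authors' simulation code, and all parameter values are invented):

```python
# Minimal resistance-based routing model for droplets at a loop junction
# (a common idealization; not the authors' code). Each droplet enters the
# arm with lower instantaneous hydraulic resistance, and every droplet
# occupying an arm adds extra resistance until it exits.
import random
from collections import deque

R_base = (1.00, 1.02)   # slightly asymmetric base resistances of arms L, R
R_drop = 0.30           # added resistance per droplet occupying an arm;
                        # flipping its sign mimics geometries where an
                        # entering droplet makes an arm *more* attractive,
                        # which pushes the sequence toward grouped RRR...LLL
arm_len = 5             # droplets an arm holds before the oldest exits
arms = (deque(), deque())
choices = []

random.seed(1)
for step in range(60):
    # Effective resistance of each arm right now, plus a little noise,
    # since the abstract stresses the role of noise in self-organization.
    R = [R_base[i] + R_drop * len(arms[i]) + random.gauss(0, 0.01)
         for i in (0, 1)]
    pick = 0 if R[0] < R[1] else 1
    arms[pick].append(step)
    choices.append("LR"[pick])
    for a in arms:                 # advance droplets; the oldest leaves
        if len(a) > arm_len:
            a.popleft()

print("".join(choices))   # alternating vs grouped patterns, cf. the abstract
```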
Chow, Tze-Show
1988-04-22
A photon calorimeter is provided that comprises a laminar substrate that is uniform in density and homogeneous in atomic composition. A plasma-sprayed coating, which is generally uniform in density and homogeneous in atomic composition within the proximity of planes that are parallel to the surfaces of the substrate, is applied to either one or both sides of the laminar substrate. The plasma-sprayed coatings may be very efficiently spectrally tailored in atomic number. Thermocouple measuring junctions are positioned within the plasma-sprayed coatings. The calorimeter is rugged, inexpensive, and equilibrates in temperature very rapidly. 4 figs.
NASA Astrophysics Data System (ADS)
Esmaily, M.; Jofre, L.; Mani, A.; Iaccarino, G.
2018-03-01
A geometric multigrid algorithm is introduced for solving nonsymmetric linear systems resulting from the discretization of the variable density Navier-Stokes equations on nonuniform structured rectilinear grids and high-Reynolds number flows. The restriction operation is defined such that the resulting system on the coarser grids is symmetric, thereby allowing for the use of efficient smoother algorithms. To achieve an optimal rate of convergence, the sequence of interpolation and restriction operations are determined through a dynamic procedure. A parallel partitioning strategy is introduced to minimize communication while maintaining the load balance between all processors. To test the proposed algorithm, we consider two cases: 1) homogeneous isotropic turbulence discretized on uniform grids and 2) turbulent duct flow discretized on stretched grids. Testing the algorithm on systems with up to a billion unknowns shows that the cost varies linearly with the number of unknowns. This O(N) behavior confirms the robustness of the proposed multigrid method regarding ill-conditioning of large systems characteristic of multiscale high-Reynolds number turbulent flows. The robustness of our method to density variations is established by considering cases where density varies sharply in space by a factor of up to 10⁴, showing its applicability to two-phase flow problems. Strong and weak scalability studies are carried out, employing up to 30,000 processors, to examine the parallel performance of our implementation. Excellent scalability of our solver is shown for a granularity as low as 10⁴ to 10⁵ unknowns per processor. At its tested peak throughput, it solves approximately 4 billion unknowns per second employing over 16,000 processors with a parallel efficiency higher than 50%.
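The following is a compact sketch of the V-cycle structure the abstract describes (smoothing, restriction, coarse-grid correction, interpolation), reduced to a 1D constant-coefficient Poisson problem on a uniform grid; the paper's solver handles variable-density Navier-Stokes systems on stretched grids and is far more general.

```python
# 1D Poisson V-cycle sketch for -u'' = f with u(0)=u(1)=0: weighted-Jacobi
# smoothing, full-weighting restriction, linear interpolation. Illustrates
# the multigrid structure only; n must be a power of two here.
import numpy as np

def smooth(u, f, h, sweeps=3, w=2/3):
    for _ in range(sweeps):   # RHS is evaluated before assignment: Jacobi
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / (h*h)
    return r

def vcycle(u, f, h):
    n = u.size - 1
    if n <= 2:                               # coarsest grid: exact solve
        u[1:-1] = 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1])
        return u
    u = smooth(u, f, h)                      # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros(n // 2 + 1)                # full-weighting restriction
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2*r[2:-1:2] + r[3::2])
    ec = vcycle(np.zeros_like(rc), rc, 2*h)  # coarse-grid correction
    e = np.zeros_like(u)                     # linear interpolation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h)               # post-smoothing

n = 128
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)             # exact solution u = sin(pi x)
u = np.zeros(n + 1)
for cycle in range(8):
    u = vcycle(u, f, h)
    print(f"cycle {cycle+1}: ||r|| = {np.linalg.norm(residual(u, f, h)):.3e}")
print("max error vs exact:", np.abs(u - np.sin(np.pi * x)).max())
```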
Parallel adaptive wavelet collocation method for PDEs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nejadmalayeri, Alireza, E-mail: Alireza.Nejadmalayeri@gmail.com; Vezolainen, Alexei, E-mail: Alexei.Vezolainen@Colorado.edu; Brown-Dymkoski, Eric, E-mail: Eric.Browndymkoski@Colorado.edu
2015-10-01
A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.
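The adaptivity underlying wavelet collocation comes from retaining only wavelet coefficients above a threshold. The sketch below illustrates just that principle with a plain Haar transform of a function with a sharp internal layer (illustration of the compression idea only; the paper's method is a parallel collocation PDE solver with tree-based partitioning):

```python
# Haar-transform illustration of wavelet adaptivity: threshold the detail
# coefficients of a function with a sharp layer and measure what survives.
import numpy as np

def haar_decompose(signal):
    coeffs, s = [], signal.astype(float)
    while s.size > 1:
        avg = (s[0::2] + s[1::2]) / np.sqrt(2)
        det = (s[0::2] - s[1::2]) / np.sqrt(2)
        coeffs.append(det)
        s = avg
    coeffs.append(s)            # final approximation coefficient
    return coeffs

def haar_reconstruct(coeffs):
    s = coeffs[-1]
    for det in reversed(coeffs[:-1]):
        out = np.empty(2 * s.size)
        out[0::2] = (s + det) / np.sqrt(2)
        out[1::2] = (s - det) / np.sqrt(2)
        s = out
    return s

x = np.linspace(0.0, 1.0, 1024, endpoint=False)
f = np.tanh((x - 0.5) / 0.005)          # sharp internal layer at x = 0.5

coeffs = haar_decompose(f)
eps, kept = 1e-3, 0
for det in coeffs[:-1]:                 # zero out sub-threshold details
    mask = np.abs(det) >= eps
    det[~mask] = 0.0
    kept += int(mask.sum())

f_eps = haar_reconstruct(coeffs)
print(f"kept {kept} of {f.size - 1} detail coefficients,",
      f"max error {np.abs(f - f_eps).max():.2e}")
```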
Optimization of sparse matrix-vector multiplication on emerging multicore platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel; Oliker, Leonid; Vuduc, Richard
2007-01-01
We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD dual-core and Intel quad-core designs, the heterogeneous STI Cell, as well as the first scientific study of the highly multithreaded Sun Niagara2. We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural tradeoffs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.
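For reference, the kernel under study is the CSR-format sparse matrix-vector multiply shown below in plain Python; the paper's contribution is architecture-specific optimization of exactly this memory-bound loop, which the sketch does not attempt.

```python
# Reference CSR sparse matrix-vector multiply, the kernel the study tunes.
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(row_ptr) - 1):            # one output entry per row
        acc = 0.0
        for jj in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[jj] * x[col_idx[jj]]   # indirect access into x
        y[i] = acc
    return y

# 4x4 example matrix in CSR form:
# [[10,  0,  0, -2],
#  [ 3,  9,  0,  0],
#  [ 0,  7,  8,  7],
#  [ 0,  0,  0,  5]]
values  = np.array([10., -2., 3., 9., 7., 8., 7., 5.])
col_idx = np.array([0, 3, 0, 1, 1, 2, 3, 3])
row_ptr = np.array([0, 2, 4, 7, 8])
x = np.array([1., 2., 3., 4.])
print(spmv_csr(values, col_idx, row_ptr, x))     # -> [ 2. 21. 66. 20.]
```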
Morphology and anisotropy of thin conductive inkjet printed lines of single-walled carbon nanotubes
NASA Astrophysics Data System (ADS)
Torres-Canas, Fernando; Blanc, Christophe; Mašlík, Jan; Tahir, Said; Izard, Nicolas; Karasahin, Senguel; Castellani, Mauro; Dammasch, Matthias; Zamora-Ledezma, Camilo; Anglaret, Eric
2017-03-01
We show that the properties of thin conductive inkjet printed lines of single-walled carbon nanotubes (SWCNT) can be greatly tuned, using only a few deposition parameters. The morphology, anisotropy and electrical resistivity of single-stroke printed lines are studied as a function of ink concentration and drop density. An original method based on coupled profilometry-Raman measurements is developed to determine the height, mass, orientational order and density profiles of SWCNT across the printed lines with a micrometric lateral resolution. Height profiles can be tuned from ‘rail tracks’ (twin parallel lines) to layers of homogeneous thickness by controlling nanotube concentration and drop density. In all samples, the nanotubes are strongly oriented parallel to the line axis at the edges of the lines, and the orientational order decreases continuously towards the center of the lines. The resistivity of ‘rail tracks’ is significantly larger than that of homogeneous deposits, likely because of large amounts of electrical dead-ends.
Ganesh, D; Nagarajan, G; Ganesan, S
2014-01-01
In parallel to the interest in renewable fuels, there has also been increased interest in homogeneous charge compression ignition (HCCI) combustion. HCCI engines are being actively developed because they have the potential to be highly efficient and to produce low emissions. Even though HCCI has been researched extensively, a few challenges still exist. These include controlling the combustion at higher loads and the formation of a homogeneous mixture. To obtain better homogeneity, in the present investigation an external mixture formation method was adopted, in which a fuel vaporiser was used to achieve excellent HCCI combustion in a single-cylinder air-cooled direct injection diesel engine. In continuation of our previous work, in the current study vaporised jatropha methyl ester (JME) was mixed with air to form a homogeneous mixture and inducted into the cylinder during the intake stroke to analyze the combustion, emission and performance characteristics. To control the early ignition of the JME vapour-air mixture, a cooled (30 °C) exhaust gas recirculation (EGR) technique was adopted. The experimental results show an 81% reduction in NOx and a 72% reduction in smoke emissions.
Shibuta, Yasushi; Sakane, Shinji; Miyoshi, Eisuke; Okita, Shin; Takaki, Tomohiro; Ohno, Munekazu
2017-04-05
Can completely homogeneous nucleation occur? Large-scale molecular dynamics simulations performed on a graphics-processing-unit-rich supercomputer can shed light on this long-standing issue. Here, a billion-atom molecular dynamics simulation of homogeneous nucleation from an undercooled iron melt reveals that some satellite-like small grains surrounding previously formed large grains exist in the middle of the nucleation process, and that these are not distributed uniformly. At the same time, grains with a twin boundary are formed by heterogeneous nucleation from the surface of the previously formed grains. The local heterogeneity in the distribution of grains is caused by the local accumulation of the icosahedral structure in the undercooled melt near the previously formed grains. This insight is mainly attributable to multi-graphics-processing-unit parallel computation combined with the rapid progress in high-performance computational environments. Nucleation is a fundamental physical process; however, it is a long-standing issue whether completely homogeneous nucleation can occur. Here the authors reveal, via a billion-atom molecular dynamics simulation, that local heterogeneity exists during homogeneous nucleation in an undercooled iron melt.
Robertson, Susan J; Leonard, Jane; Chamberlain, Alex J
2010-08-01
A 16-year-old boy presented with a number of asymptomatic pigmented macules on the volar aspect of his index fingers. Dermoscopy of each macule revealed a parallel ridge pattern of homogeneous reddish-brown pigment. We propose that these lesions were induced by repetitive trauma from a Sony PlayStation 3 (Sony Corporation, Tokyo, Japan) vibration feedback controller. The lesions completely resolved following abstinence from gaming over a number of weeks. Although the parallel ridge pattern is typically the hallmark for early acral lentiginous melanoma, it may be observed in a limited number of benign entities, including subcorneal haematoma.
Radiative transfer in spherical shell atmospheres. 2: Asymmetric phase functions
NASA Technical Reports Server (NTRS)
Kattawar, G. W.; Adams, C. N.
1977-01-01
The effects of sphericity on the radiation reflected from a planet with a homogeneous, conservative scattering atmosphere of optical thickness 0.25 or 1.0 are investigated. A Henyey-Greenstein phase function with asymmetry factors of 0.5 and 0.7 is considered. Significant differences were found when these results were compared with plane-parallel calculations. Large violations of the reciprocity theorem, which holds only for plane-parallel calculations, were also noted. Results are presented for the radiance versus height distributions as a function of planetary phase angle.
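The Henyey-Greenstein phase function referred to here has a standard closed form; the snippet below evaluates it for the two asymmetry factors from the abstract and numerically checks its normalization and mean cosine (background material, not the authors' code).

```python
# Henyey-Greenstein phase function, normalized so that the integral over
# mu = cos(theta) equals 1 and the mean cosine <mu> equals the asymmetry
# factor g:  p(mu) = 0.5 * (1 - g^2) / (1 + g^2 - 2*g*mu)^(3/2)
import numpy as np

def henyey_greenstein(mu, g):
    return 0.5 * (1 - g**2) / (1 + g**2 - 2*g*mu) ** 1.5

for g in (0.5, 0.7):                    # asymmetry factors from the abstract
    mu, w = np.polynomial.legendre.leggauss(64)   # Gauss-Legendre on [-1, 1]
    p = henyey_greenstein(mu, g)
    print(f"g = {g}: integral = {np.sum(w * p):.6f},",
          f"<mu> = {np.sum(w * mu * p):.6f}")      # expect 1 and g
```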
Haas, H; Lange, A; Schlaak, M
1987-01-01
Using isoelectric focusing (IEF) with immunoblotting, we have analysed serum immunoglobulins of 15 lung cancer patients on cytotoxic chemotherapy. In five of the patients homogeneous immunoglobulins were found, which appeared between 9 and 18 months after the beginning of treatment and were monoclonal in two and oligoclonal in three cases. These abnormalities were only partially shown by zonal electrophoresis with immunofixation and not detected by immune electrophoresis. Examination of 10 normal and 10 myeloma sera by the three techniques in parallel confirmed the competence and sensitivity of IEF with immunoblotting in detecting homogeneous immunoglobulins. Thus, this method provides a valuable tool for investigating an abnormal regulation of immunoglobulin synthesis. PMID:3325203
Numerical Treatment of the Boltzmann Equation for Self-Propelled Particle Systems
NASA Astrophysics Data System (ADS)
Thüroff, Florian; Weber, Christoph A.; Frey, Erwin
2014-10-01
Kinetic theories constitute one of the most promising tools to decipher the characteristic spatiotemporal dynamics in systems of actively propelled particles. In this context, the Boltzmann equation plays a pivotal role, since it provides a natural translation between a particle-level description of the system's dynamics and the corresponding hydrodynamic fields. Yet, the intricate mathematical structure of the Boltzmann equation substantially limits the progress toward a full understanding of this equation by solely analytical means. Here, we propose a general framework to numerically solve the Boltzmann equation for self-propelled particle systems in two spatial dimensions and with arbitrary boundary conditions. We discuss potential applications of this numerical framework to active matter systems and use the algorithm to give a detailed analysis to a model system of self-propelled particles with polar interactions. In accordance with previous studies, we find that spatially homogeneous isotropic and broken-symmetry states populate two distinct regions in parameter space, which are separated by a narrow region of spatially inhomogeneous, density-segregated moving patterns. We find clear evidence that these three regions in parameter space are connected by first-order phase transitions and that the transition between the spatially homogeneous isotropic and polar ordered phases bears striking similarities to liquid-gas phase transitions in equilibrium systems. Within the density-segregated parameter regime, we find a novel stable limit-cycle solution of the Boltzmann equation, which consists of parallel lanes of polar clusters moving in opposite directions, so as to render the overall symmetry of the system's ordered state nematic, despite purely polar interactions on the level of single particles.
Chow, Tze-Show
1989-01-01
A photon calorimeter (20, 40) is provided that comprises a laminar substrate (10, 22, 42) that is uniform in density and homogeneous in atomic composition. A plasma-sprayed coating (28, 48, 52), that is generally uniform in density and homogeneous in atomic composition within the proximity of planes that are parallel to the surfaces of the substrate, is applied to either one or both sides of the laminar substrate. The plasma-sprayed coatings may be very efficiently spectrally tailored in atomic number. Thermocouple measuring junctions (30, 50, 54) are positioned within the plasma-sprayed coatings. The calorimeter is rugged, inexpensive, and equilibrates in temperature very rapidly.
Photochromic molecular implementations of universal computation.
Chaplin, Jack C; Krasnogor, Natalio; Russell, Noah A
2014-12-01
Unconventional computing is an area of research in which novel materials and paradigms are utilised to implement computation. Previously we have demonstrated how registers, logic gates and logic circuits can be implemented, unconventionally, with a biocompatible molecular switch, NitroBIPS, embedded in a polymer matrix. NitroBIPS and related molecules have been shown elsewhere to be capable of modifying many biological processes in a manner that is dependent on its molecular form. Thus, one possible application of this type of unconventional computing is to embed computational processes into biological systems. Here we expand on our earlier proof-of-principle work and demonstrate that universal computation can be implemented using NitroBIPS. We have previously shown that spatially localised computational elements, including registers and logic gates, can be produced. We explain how parallel registers can be implemented, then demonstrate an application of parallel registers in the form of Turing machine tapes, and demonstrate both parallel registers and logic circuits in the form of elementary cellular automata. The Turing machines and elementary cellular automata utilise the same samples and the same hardware to implement their registers, logic gates and logic circuits, and both represent examples of universal computing paradigms. This shows that homogeneous photochromic computational devices can be dynamically repurposed without invasive reconfiguration. The result represents an important, necessary step towards demonstrating the general feasibility of interfacial computation embedded in biological systems or other unconventional materials and environments. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
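For readers unfamiliar with elementary cellular automata, the sketch below runs one (rule 110, known to be computationally universal) in ordinary software to show the update scheme that the photochromic devices realize optically; nothing here models the molecules themselves.

```python
# Conventional software sketch of an elementary cellular automaton.
# Rule 110 is a universal ECA; each cell's next state is looked up from
# the 3-bit neighborhood (left, self, right) in the rule number's bits.
def eca_step(cells, rule=110):
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] << 2 |
                      cells[i] << 1 |
                      cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 31
cells[15] = 1                      # single seed cell in the middle
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = eca_step(cells)
```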
NASA Astrophysics Data System (ADS)
Akil, Mohamed
2017-05-01
Real-time processing is becoming more and more important in many image processing applications, and image segmentation is one of the most fundamental tasks in image analysis. As a consequence, many different approaches to image segmentation have been proposed. The watershed transform is a well-known image segmentation tool and a very data-intensive task. To accelerate watershed algorithms toward real-time processing, parallel architectures and programming models for multicore computing have been developed. This paper surveys approaches for the parallel implementation of sequential watershed algorithms on multicore general-purpose CPUs: homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. We compare various parallelizations of sequential watershed algorithms on shared-memory multicore architectures, analyze the performance measurements of each parallel implementation, and examine the impact of the different sources of overhead on performance. In this comparative study, we also discuss the advantages and disadvantages of the parallel programming models, comparing OpenMP (Open Multi-Processing) with Pthreads (POSIX Threads) to illustrate the impact of each parallel programming model on the performance of the parallel implementations.
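Below is a minimal sketch of the strip-wise domain decomposition strategy such surveys compare, written with Python processes rather than OpenMP or Pthreads; a simple per-pixel pass stands in for one stage of flooding, and a real parallel watershed would additionally need halo exchange and label merging at strip borders.

```python
# Strip-wise domain decomposition sketch: split the image into horizontal
# strips and process them in parallel. A gradient-magnitude pass stands in
# for one watershed stage; real watershed code also needs halo rows and a
# label-merge step across strip boundaries.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def gradient_strip(strip):
    gx = np.zeros_like(strip)
    gx[:, 1:] = np.abs(np.diff(strip, axis=1))   # horizontal gradient only,
    return gx                                     # so no inter-strip halo

def main():
    image = np.random.default_rng(0).random((512, 512))
    n_workers = 4
    strips = np.array_split(image, n_workers, axis=0)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        result = np.vstack(list(pool.map(gradient_strip, strips)))
    print("processed image shape:", result.shape)

if __name__ == "__main__":
    main()
```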
Transmission Index Research of Parallel Manipulators Based on Matrix Orthogonal Degree
NASA Astrophysics Data System (ADS)
Shao, Zhu-Feng; Mo, Jiao; Tang, Xiao-Qiang; Wang, Li-Ping
2017-11-01
Performance indices are the standard for performance evaluation and the foundation of both performance analysis and optimal design of parallel manipulators. Seeking suitable kinematic indices has long been an important and challenging issue for parallel manipulators. Extensive studies exist in this field, but few existing indices meet all the requirements of being simple, intuitive, and universal. To solve this problem, the matrix orthogonal degree is adopted, and generalized transmission indices that can evaluate the motion/force transmissibility of fully parallel manipulators are proposed. Transmission performance analysis of typical branches, end effectors, and parallel manipulators is given to illustrate the proposed indices and analysis methodology. Simulation and analysis results reveal that the proposed transmission indices possess significant advantages: they are normalized and finite (ranging from 0 to 1), dimensionally homogeneous, frame-free, intuitive, and easy to calculate. Moreover, the proposed indices indicate the good-transmission region and the closeness to singularity with better resolution than the traditional local conditioning index, and provide a novel tool for kinematic analysis and optimal design of fully parallel manipulators.
Introducing Differential Equations Students to the Fredholm Alternative--In Staggered Doses
ERIC Educational Resources Information Center
Savoye, Philippe
2011-01-01
This article describes the development, in an introductory differential equations course, of boundary value problems in parallel with initial value problems, leading to the Fredholm Alternative. Examples are provided of pairs of homogeneous and nonhomogeneous boundary value problems for which existence and uniqueness issues are considered jointly, and of how this heightens students'…
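An example of the kind of homogeneous/nonhomogeneous pair the article pairs up (this particular pair is a standard textbook instance, not necessarily the article's own):

```latex
% Homogeneous problem with nontrivial solutions:
y'' + \pi^{2} y = 0, \qquad y(0) = y(1) = 0, \qquad y_h = c\,\sin(\pi x).
% By the Fredholm alternative, the nonhomogeneous problem
y'' + \pi^{2} y = f(x), \qquad y(0) = y(1) = 0,
% is solvable if and only if f is orthogonal to the homogeneous solution:
\int_{0}^{1} f(x)\,\sin(\pi x)\,dx = 0.
```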
NASA Astrophysics Data System (ADS)
Russkova, Tatiana V.
2017-11-01
One tool to improve the performance of Monte Carlo methods for the numerical simulation of light transport in the Earth's atmosphere is parallel technology. A new algorithm oriented to parallel execution on CUDA-enabled NVIDIA graphics processors is discussed. The efficiency of parallelization is analyzed on the basis of calculating the upward and downward fluxes of solar radiation in both vertically homogeneous and inhomogeneous models of the atmosphere. The results of testing the new code under various atmospheric conditions, including continuous single-layered and multilayered clouds and selective molecular absorption, are presented. The results of testing the code using video cards with different compute capabilities are analyzed. It is shown that moving the computation from conventional PCs to the architecture of graphics processors gives more than a hundredfold increase in performance and fully reveals the capabilities of the technology used.
Wu, Xiaoping; Adriany, Gregor; Ugurbil, Kamil; Van de Moortele, Pierre-Francois
2013-01-01
Successful implementation of homogeneous slice-selective RF excitation in the human brain at 9.4T using 16-channel parallel transmission (pTX) is demonstrated. A novel three-step pulse design method incorporating fast real-time measurement of eddy-current-induced B0 variations, as well as correction of the resulting phase errors during excitation, is described. To demonstrate the utility of the proposed method, phantom and in vivo experiments targeting a uniform excitation in an axial slice were conducted using two-spoke pTX pulses. Even with the pre-emphasis activated, eddy-current-induced B0 variations with peak-to-peak values greater than 4 kHz were observed on our system during the rapid switches of slice-selective gradients. This large B0 variation, when not corrected, resulted in drastically degraded excitation fidelity, with the coefficient of variation (CV) of the flip angle calculated for the region of interest being large (~12% in the phantom and ~35% in the brain). By comparison, excitation fidelity was effectively restored, and satisfactory flip angle uniformity was achieved, when using the proposed method, with the CV value reduced to ~3% in the phantom and ~8% in the brain. Additionally, experimental results were in good agreement with the numerical predictions obtained from Bloch simulations. Slice-selective flip angle homogenization in the human brain at 9.4T using 16-channel 3D spoke pTX pulses is thus achievable despite large eddy-current-induced excitation phase errors; correcting for the latter was critical to this success.
Beqiri, Arian; Price, Anthony N; Padormo, Francesco; Hajnal, Joseph V; Malik, Shaihan J
2017-06-01
Cardiac magnetic resonance imaging (MRI) at high field presents challenges because of the high specific absorption rate and significant transmit field (B1+) inhomogeneities. Parallel transmission MRI offers the ability to correct for both issues at the level of individual radiofrequency (RF) pulses, but must operate within strict hardware and safety constraints. The constraints are themselves affected by sequence parameters, such as the RF pulse duration and TR, meaning that an overall optimal operating point exists for a given sequence. This work seeks to obtain optimal performance by performing a 'sequence-level' optimization in which pulse sequence parameters are included as part of an RF shimming calculation. The method is applied to balanced steady-state free precession cardiac MRI with the objective of minimizing TR, hence reducing the imaging duration. Results are demonstrated using an eight-channel parallel transmit system operating at 3 T, with an in vivo study carried out on seven male subjects of varying body mass index (BMI). Compared with single-channel operation, a mean-squared-error shimming approach leads to reduced imaging durations of 32 ± 3% with simultaneous improvement in flip angle homogeneity of 32 ± 8% within the myocardium. © 2017 The Authors. NMR in Biomedicine published by John Wiley & Sons Ltd.
Theory and modeling of atmospheric turbulence, part 1
NASA Technical Reports Server (NTRS)
1984-01-01
The cascade transfer function, the only function that describes mode coupling arising from the nonlinear hydrodynamic state of turbulence, is discussed. A kinetic theory combined with a scaling procedure was developed; the transfer function governs the nonlinear mode coupling in strong turbulence. The master equation is consistent with the hydrodynamical system that describes the microdynamic state of turbulence and has the advantages of being homogeneous and having fewer nonlinear terms. The modes are scaled into groups to decipher the governing transport processes and statistical characteristics. An equation of vorticity transport describes the microdynamic state of two-dimensional, isotropic and homogeneous, geostrophic turbulence. The equation of evolution of the macrovorticity is derived from group scaling in the form of a Fokker-Planck equation with memory. The microdynamic state of turbulence is transformed into the Liouville equation to derive the kinetic equation of the singlet distribution in turbulence. The collision integral contains a memory, which is analyzed with pair collisions and multiple collisions. Two other kinetic equations are developed in parallel for the propagator and the transition probability for the interaction among the groups.
Entangling and disentangling many-electron quantum systems with an electric field
NASA Astrophysics Data System (ADS)
Sajjan, Manas; Head-Marsden, Kade; Mazziotti, David A.
2018-06-01
We show that the electron correlation of a molecular system can be enhanced or diminished through the application of a homogeneous electric field antiparallel or parallel to the system's intrinsic dipole moment. More generally, we prove that any external stimulus that significantly changes the expectation value of a one-electron operator with nondegenerate minimum and maximum eigenvalues can be used to control the degree of a molecule's electron correlation. Computationally, the effect is demonstrated in HeH⁺, MgH⁺, BH, HCN, H₂O, HF, formaldehyde, and a fluorescent dye. Furthermore, we show in calculations with an array of formaldehyde (CH₂O) molecules that the field can control not only the electron correlation of a single formaldehyde molecule but also the entanglement among formaldehyde molecules. The quantum control of correlation and entanglement has potential applications in the design of molecules with tunable properties and the stabilization of qubits in quantum computations.
The effect of cosmic-ray acceleration on supernova blast wave dynamics
NASA Astrophysics Data System (ADS)
Pais, M.; Pfrommer, C.; Ehlert, K.; Pakmor, R.
2018-05-01
Non-relativistic shocks accelerate ions to highly relativistic energies provided that the orientation of the magnetic field is closely aligned with the shock normal (quasi-parallel shock configuration). In contrast, quasi-perpendicular shocks do not efficiently accelerate ions. We model this obliquity-dependent acceleration process in a spherically expanding blast wave setup with the moving-mesh code AREPO for different magnetic field morphologies, ranging from homogeneous to turbulent configurations. A Sedov-Taylor explosion in a homogeneous magnetic field generates an oblate ellipsoidal shock surface due to the slower propagating blast wave in the direction of the magnetic field. This is because of the efficient cosmic ray (CR) production in the quasi-parallel polar cap regions, which softens the equation of state and increases the compressibility of the post-shock gas. We find that the solution remains self-similar because the ellipticity of the propagating blast wave stays constant in time. This enables us to derive an effective ratio of specific heats for a composite of thermal gas and CRs as a function of the maximum acceleration efficiency. We finally discuss the behavior of supernova remnants expanding into a turbulent magnetic field with varying coherence lengths. For a maximum CR acceleration efficiency of about 15 per cent at quasi-parallel shocks (as suggested by kinetic plasma simulations), we find an average efficiency of about 5 per cent, independent of the assumed magnetic coherence length.
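For orientation, a common pressure-weighted form for the effective adiabatic index of a thermal-gas plus cosmic-ray composite is shown below; the paper derives its own expression as a function of the maximum acceleration efficiency, so this standard form should be read as background rather than the authors' result.

```latex
% Pressure-weighted effective adiabatic index of a gas + CR composite
% (standard background form, not the paper's derived expression):
\gamma_{\mathrm{eff}}
  = \frac{\gamma_{\mathrm{th}} P_{\mathrm{th}} + \gamma_{\mathrm{cr}} P_{\mathrm{cr}}}
         {P_{\mathrm{th}} + P_{\mathrm{cr}}},
\qquad \gamma_{\mathrm{th}} = \tfrac{5}{3}, \quad \gamma_{\mathrm{cr}} = \tfrac{4}{3},
```

with the cosmic-ray pressure fraction set by the acceleration efficiency, so that larger efficiencies soften the composite equation of state and increase the post-shock compressibility, as described in the abstract.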
NASA Astrophysics Data System (ADS)
Guillemoteau, Julien; Tronicke, Jens
2015-07-01
For near-surface geophysical surveys, small fixed-offset loop-loop electromagnetic induction (EMI) sensors are usually placed parallel to the ground surface (i.e., both loops are at the same height above ground). In this study, we evaluate the potential of making measurements with a system that is not parallel to the ground, i.e., by positioning the system at different inclinations with respect to the ground surface. First, we present the Maxwell theory for inclined magnetic dipoles over a homogeneous half-space. By analyzing the sensitivities of such configurations, we show that varying the angle of the system results in improved imaging capabilities. For example, we show that acquiring data with a vertical system allows detection of a conductive body with better lateral resolution than data acquired using standard horizontal configurations. The synthetic responses are presented for a heterogeneous medium and compared to field data acquired in the historical Park Sanssouci in Potsdam, Germany. After presenting a detailed sensitivity analysis and synthetic examples of such ground conductivity measurements, we suggest a new acquisition strategy that allows a better estimate of the true distribution of electrical conductivity using instruments with a fixed, small offset between the loops. This strategy is evaluated using field data collected at a well-constrained test site in Horstwalde (Germany). Here, the target buried utility pipes are best imaged using vertical system configurations, demonstrating the potential of our approach for typical applications.
Point interactions, metamaterials, and PT-symmetry
NASA Astrophysics Data System (ADS)
Mostafazadeh, Ali
2016-05-01
We express the boundary conditions for TE and TM waves at the interfaces of an infinite planar slab of homogeneous metamaterial as certain point interactions and use them to compute the transfer matrix of the system. This allows us to demonstrate the omnidirectional reflectionlessness of Veselago's slab for waves of arbitrary wavelength, reveal the translational and reflection symmetry of this slab, explore the laser threshold condition and coherent perfect absorption for active negative-index metamaterials, introduce a point interaction modeling phase conjugation, determine the corresponding antilinear transfer matrix, and offer a simple proof of the equivalence of Veselago's slab with a pair of parallel phase-conjugating plates. We also study the connection between certain optical setups involving metamaterials and a class of PT-symmetric quantum systems defined on wedge-shaped contours in the complex plane. This provides a physical interpretation for the latter.
Modern gyrokinetic particle-in-cell simulation of fusion plasmas on top supercomputers
Wang, Bei; Ethier, Stephane; Tang, William; ...
2017-06-29
The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon Phi (MIC) co-processors and performance comparisons with state-of-the-art homogeneous HPC systems such as Blue Gene/Q. New discovery science capabilities in the magnetic fusion energy application domain are enabled, including investigations of Ion-Temperature-Gradient (ITG) driven turbulence simulations with unprecedented spatial resolution and long temporal duration. Performance studies with realistic fusion experimental parameters are carried out on multiple supercomputing systems spanning a wide range of cache capacities, cache-sharing configurations, memory bandwidth, interconnects and network topologies. These performance comparisons using a realistic discovery-science-capable domain application code provide valuable insights on optimization techniques across one of the broadest sets of current high-end computing platforms worldwide.
Evaluation of the power consumption of a high-speed parallel robot
NASA Astrophysics Data System (ADS)
Han, Gang; Xie, Fugui; Liu, Xin-Jun
2018-06-01
An inverse dynamic model of a high-speed parallel robot is established based on the virtual work principle. With this dynamic model, a new evaluation method is proposed to measure the power consumption of the robot during pick-and-place tasks. The power vector is extended in this method and used to represent the collinear velocity and acceleration of the moving platform. Afterward, several dynamic performance indices, which are homogeneous and possess obvious physical meanings, are proposed. These indices can evaluate the power input and output transmissibility of the robot in a workspace. The distributions of the power input and output transmissibility of the high-speed parallel robot are derived with these indices and clearly illustrated in atlases. Furthermore, a low-power-consumption workspace is selected for the robot.
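As a minimal illustration of the virtual-work bookkeeping behind such indices, the sketch below maps a platform wrench to joint torques through the transposed Jacobian and evaluates instantaneous power on both sides; the Jacobian and load values are illustrative assumptions, not the robot's model.

```python
# Toy power bookkeeping via the virtual work principle: tau = J^T f, P = tau . qdot.
import numpy as np

J = np.array([[1.0, 0.2, 0.0],
              [0.1, 1.0, 0.3],
              [0.0, 0.2, 1.0]])     # assumed 3x3 Jacobian: v = J @ qdot
f = np.array([5.0, -2.0, 9.8])      # wrench on the moving platform (N)
v = np.array([0.6, 0.0, 0.3])       # commanded platform velocity (m/s)

qdot = np.linalg.solve(J, v)        # joint rates realizing the platform motion
tau = J.T @ f                       # virtual work: joint torques balancing f
print("power input at the joints:", tau @ qdot, "W")
print("power output at the platform:", f @ v, "W")
```

For a lossless rigid-body model the two numbers coincide (tau . qdot = f . J qdot = f . v), which is the baseline against which power transmissibility indices measure how favorably a given pose converts actuator power into platform motion.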
Optimization of Sparse Matrix-Vector Multiplication on Emerging Multicore Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel; Oliker, Leonid; Vuduc, Richard
2008-10-16
We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore-specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD quad-core, AMD dual-core, and Intel quad-core designs, the heterogeneous STI Cell, as well as one of the first scientific studies of the highly multithreaded Sun Victoria Falls (a Niagara2 SMP). We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural trade-offs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.
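For reference, the sketch below spells out the baseline CSR (compressed sparse row) SpMV kernel that such optimizations start from; it is a plain Python rendering of the data structure and access pattern, not any of the paper's tuned implementations.

```python
# y = A @ x with A in compressed sparse row (CSR) format.
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """Multiply a CSR matrix by a dense vector, one row at a time."""
    n = len(row_ptr) - 1
    y = np.zeros(n)
    for i in range(n):                      # rows are contiguous in CSR
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]   # indirect access into x
    return y

# 3x3 example: [[4,0,1],[0,3,0],[2,0,5]]
values  = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
print(spmv_csr(values, col_idx, row_ptr, np.array([1.0, 2.0, 3.0])))  # [7. 6. 17.]
```

The indirect, irregular access into x is what makes the kernel memory-bound and what the paper's blocking and prefetching strategies target.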
Inhomogeneous cosmology and backreaction: Current status and future prospects
NASA Astrophysics Data System (ADS)
Bolejko, Krzysztof; Korzyński, Mikołaj
Astronomical observations reveal hierarchical structures in the universe, from galaxies, groups of galaxies, clusters and superclusters, to filaments and voids. On the largest scales, it seems that some kind of statistical homogeneity can be observed. As a result, modern cosmological models are based on spatially homogeneous and isotropic solutions of the Einstein equations, and the evolution of the universe is approximated by the Friedmann equations. In parallel to standard homogeneous cosmology, the field of inhomogeneous cosmology and backreaction is being developed. This field investigates whether small-scale inhomogeneities can, via nonlinear effects, backreact and alter the properties of the universe on its largest scales, leading to a non-Friedmannian evolution. This paper presents the current status of inhomogeneous cosmology and backreaction. It also discusses future prospects of the field, based on a survey of 50 academics working in inhomogeneous cosmology.
Magneto-capillary dynamics of amphiphilic Janus particles at curved liquid interfaces.
Fei, Wenjie; Driscoll, Michelle M; Chaikin, Paul M; Bishop, Kyle J M
2018-05-11
A homogeneous magnetic field can exert no net force on a colloidal particle. However, by coupling the particle's orientation to its position on a curved interface, even static homogeneous fields can be used to drive rapid particle motions. Here, we demonstrate this effect using magnetic Janus particles with amphiphilic surface chemistry adsorbed at the spherical interface of a water drop in decane. Application of a static homogeneous field drives particle motion to the drop equator where the particle's magnetic moment can align parallel to the field. As explained quantitatively by a simple model, the effective magnetic force on the particle scales linearly with the curvature of the interface. For particles adsorbed on small droplets such as those found in emulsions, these magneto-capillary forces can far exceed those due to magnetic field gradients in both magnitude and range. This mechanism may be useful in creating highly responsive emulsions and foams stabilized by magnetic particles.
Acceleration of discrete stochastic biochemical simulation using GPGPU.
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU. With hybrid parallelization, each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also improved the memory access and reduced the memory footprint in order to optimize the computations on the GPU, and implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various model sizes, we performed SSA simulations on different model sizes and compared the computation times to those of sequential simulations on a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
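For orientation, the sketch below is a plain serial reference version of the direct-method SSA, run for many independent realizations just as the GPU version executes them in parallel; the birth-death reaction network and its rate constants are illustrative assumptions.

```python
# Direct-method Gillespie SSA for a birth-death process, repeated over
# independent realizations (the unit of parallelism in the GPU version).
import numpy as np

def ssa_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=50.0, rng=None):
    rng = rng or np.random.default_rng()
    t, x = 0.0, x0
    while True:
        a = np.array([k_birth, k_death * x])   # reaction propensities
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)         # time to next reaction
        if t > t_end:
            return x
        if rng.uniform(0, a0) < a[0]:          # choose which reaction fires
            x += 1
        else:
            x -= 1

rng = np.random.default_rng(1)
finals = [ssa_birth_death(rng=rng) for _ in range(1000)]   # independent runs
print("mean copy number:", np.mean(finals))   # ~ k_birth/k_death = 100
```

Each realization is fully independent, which is exactly what makes the method map well onto one GPU thread (or block) per realization.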
Homogeneous dielectric barrier discharges in atmospheric air and its influencing factor
NASA Astrophysics Data System (ADS)
Ran, Junxia; Li, Caixia; Ma, Dong; Luo, Haiyun; Li, Xiaowei
2018-03-01
The stable homogeneous dielectric barrier discharge (DBD) is obtained in a 2-3 mm air gap at atmospheric pressure. It is generated using a 1 kHz center-frequency high-voltage power supply between two plane-parallel electrodes with specific alumina ceramic plates as the dielectric barriers. The discharge characteristics are studied by measuring its electrical discharge parameters and observing its light emission. The results show that a single large current pulse of about 200 μs duration appears in each voltage pulse, and that the light emission is radially homogeneous and covers the entire surface of the two electrodes. The homogeneous discharge generated is a Townsend discharge. The influences of the applied barrier, its thickness, and its surface roughness on the transition between discharge modes are studied. The results show that it is difficult to produce a homogeneous discharge using smooth plates or alumina plates with surface roughness Ra < 100 nm, even at a 1 mm air gap. If the alumina plate is too thin, the discharge transits to a filamentary discharge; if it is too thick, the discharge is too weak to observe. With increasing air gap distance and applied voltage, the discharge can also transit from a homogeneous mode to a filamentary mode. In order to generate a stable and homogeneous DBD at a larger air gap, proper dielectric material, thickness, and surface roughness should be used, along with proper applied voltage amplitude and frequency.
Permanent magnet system to guide superparamagnetic particles
NASA Astrophysics Data System (ADS)
Baun, Olga; Blümler, Peter
2017-10-01
A new concept of using permanent magnet systems for guiding superparamagnetic nano-particles on arbitrary trajectories over a large volume is proposed. The basic idea is to use one magnet system which provides a strong, homogeneous, dipolar magnetic field to magnetize and orient the particles, and a second, constantly graded, quadrupolar field, superimposed on the first, to generate a force on the oriented particles. In this configuration the motion of the particles is driven predominantly by the component of the gradient field which is parallel to the direction of the homogeneous field. As a result, particles are guided with constant force and in a single direction over the entire volume. The direction is simply adjusted by varying the angle between quadrupole and dipole. Since a single gradient component is impossible due to Gauß' law, the other gradient component of the quadrupole determines the angular deviation of the force. However, the latter can be neglected if the homogeneous field is stronger than the local contribution of the quadrupole field. A possible realization of this idea is a coaxial arrangement of two Halbach cylinders: a dipole to evenly magnetize and orient the particles, and a quadrupole to generate the force. The local force was calculated analytically for this particular geometry, and the directional limits were analyzed and discussed. A simple prototype was constructed to demonstrate the principle in two dimensions on several nano-particles of different sizes, which were moved along a rough square by manual adjustment of the force angle. The observed velocities of superparamagnetic particles in this prototype were always several orders of magnitude higher than the theoretically expected value. This discrepancy is attributed to the observed formation of long particle chains as a result of their polarization by the homogeneous field. The magnetic moment of such a chain is the combination of those of its constituents, while its hydrodynamic radius stays low. A complete system will consist of another quadrupole (a third cylinder) to additionally enable scaling of the gradient/force strength by another rotation. In this configuration the device could then also be used as a simple MRI machine to image the particles between movement intervals. Finally, a concept is proposed by which superparamagnetic particles can be guided in three-dimensional space.
Stress analysis in oral obturator prostheses, part II: photoelastic imaging
NASA Astrophysics Data System (ADS)
Pesqueira, Aldiéris Alves; Goiato, Marcelo Coelho; da Silva, Emily Vivianne Freitas; Haddad, Marcela Filié; Moreno, Amália; Zahoui, Abbas; dos Santos, Daniela Micheline
2014-06-01
In part I of the study, two attachment systems [O-ring; bar-clip (BC)] were used, and the system with three individualized O-rings provided the lowest stress on the implants and the support tissues. Therefore, the aim of this study was to assess the stress distribution, through the photoelastic method, on implant-retained palatal obturator prostheses associated with different attachment systems: BOC, splinted implants with a bar connected to two centrally placed O-rings; and BOD, splinted implants with a BC connected to two distally placed O-rings (cantilever). One photoelastic model of the maxilla with oral-sinus-nasal communication and three parallel implants was fabricated. Afterward, two implant-retained palatal obturator prostheses with the two attachment systems described above were constructed. Each assembly was positioned in a circular polariscope, and a 100-N axial load was applied in three different regions with implants by using a universal testing machine. The results were obtained through analysis of photographic records of stress. The BOD system exhibited the highest stress concentration, followed by the BOC system. The O-ring, centrally placed on the bar, allows higher mobility of the prostheses and homogeneously distributes the stress to the region of the alveolar ridge and implants. It can be concluded that the use of implants with O-rings, isolated or connected with a bar, to rehabilitate maxillectomized patients allows higher prosthesis mobility and homogeneously distributes the stress to the alveolar ridge region, which may result in greater chewing stress distribution to implants and bone tissue. The clinical implication of the augmented bone support loss after maxillectomy is the increase of stress in the attachment systems and, consequently, a higher tendency for displacement of the prosthesis.
Xiao, Rong; Anderson, Stephen; Aramini, James; Belote, Rachel; Buchwald, William A.; Ciccosanti, Colleen; Conover, Ken; Everett, John K.; Hamilton, Keith; Huang, Yuanpeng Janet; Janjua, Haleema; Jiang, Mei; Kornhaber, Gregory J.; Lee, Dong Yup; Locke, Jessica Y.; Ma, Li-Chung; Maglaqui, Melissa; Mao, Lei; Mitra, Saheli; Patel, Dayaban; Rossi, Paolo; Sahdev, Seema; Sharma, Seema; Shastry, Ritu; Swapna, G.V.T.; Tong, Saichu N.; Wang, Dongyan; Wang, Huang; Zhao, Li; Montelione, Gaetano T.; Acton, Thomas B.
2014-01-01
We describe the core Protein Production Platform of the Northeast Structural Genomics Consortium (NESG) and outline the strategies used for producing high-quality protein samples. The platform is centered on the cloning, expression and purification of 6X-His-tagged proteins using T7-based Escherichia coli systems. The 6X-His tag allows for similar purification procedures for most targets and implementation of high-throughput (HTP) parallel methods. In most cases, the 6X-His-tagged proteins are sufficiently purified (> 97% homogeneity) using a HTP two-step purification protocol for most structural studies. Using this platform, the open reading frames of over 16,000 different targeted proteins (or domains) have been cloned as > 26,000 constructs. Over the past nine years, more than 16,000 of these constructs have expressed protein, and more than 4,400 proteins (or domains) have been purified to homogeneity in tens-of-milligram quantities (see Summary Statistics, http://nesg.org/statistics.html). Using these samples, the NESG has deposited more than 900 new protein structures to the Protein Data Bank (PDB). The methods described here are effective in producing eukaryotic and prokaryotic protein samples in E. coli. This paper summarizes some of the updates made to the protein production pipeline in the last five years, corresponding to phase 2 of the NIGMS Protein Structure Initiative (PSI-2) project. The NESG Protein Production Platform is suitable for implementation in a large individual laboratory or by a small group of collaborating investigators. These advanced automated and/or parallel cloning, expression, purification, and biophysical screening technologies are of broad value to the structural biology, functional proteomics, and structural genomics communities. PMID:20688167
Lin, Hong; Magrane, Jordi; Clark, Elisia M; Halawani, Sarah M; Warren, Nathan; Rattelle, Amy; Lynch, David R
2017-12-19
Friedreich ataxia (FRDA) is an autosomal recessive neurodegenerative disorder with progressive ataxia that affects both the peripheral and central nervous system (CNS). While later CNS neuropathology involves loss of large principal neurons and glutamatergic and GABAergic synaptic terminals in the cerebellar dentate nucleus, early pathological changes in the FRDA cerebellum remain largely uncharacterized. Here, we report early cerebellar VGLUT1 (SLC17A7)-specific parallel fiber (PF) synaptic deficits and a dysregulated cerebellar circuit in the frataxin knock-in/knockout (KIKO) FRDA mouse model. At asymptomatic ages, VGLUT1 levels in cerebellar homogenates are significantly decreased, whereas VGLUT2 (SLC17A6) levels are significantly increased, in KIKO mice compared with age-matched controls. Additionally, GAD65 (GAD2) levels are significantly increased, while GAD67 (GAD1) levels remain unaltered. This suggests early VGLUT1-specific synaptic input deficits, and dysregulation of VGLUT2 and GAD65 synaptic inputs, in the cerebellum of asymptomatic KIKO mice. Immunohistochemistry and electron microscopy further show specific reductions of VGLUT1-containing PF presynaptic terminals in the cerebellar molecular layer, demonstrating PF synaptic input deficiency in asymptomatic and symptomatic KIKO mice. Moreover, parvalbumin levels in cerebellar homogenates and Purkinje neurons are significantly reduced, but preserved in other interneurons of the cerebellar molecular layer, suggesting specific parvalbumin dysregulation in Purkinje neurons of these mice. Furthermore, a moderate loss of large principal neurons is observed in the dentate nucleus of asymptomatic KIKO mice, mimicking that of FRDA patients. Our findings thus identify early VGLUT1-specific PF synaptic input deficits and a dysregulated cerebellar circuit as potential mediators of cerebellar dysfunction in KIKO mice, reflecting developmental features of FRDA in this mouse model. © 2017. Published by The Company of Biologists Ltd.
Wu, Xiaoping; Adriany, Gregor; Ugurbil, Kamil; Van de Moortele, Pierre-Francois
2013-01-01
Successful implementation of homogeneous slice-selective RF excitation in the human brain at 9.4T using 16-channel parallel transmission (pTX) is demonstrated. A novel three-step pulse design method incorporating fast real-time measurement of eddy current induced B0 variations as well as correction of resulting phase errors during excitation is described. To demonstrate the utility of the proposed method, phantom and in-vivo experiments targeting a uniform excitation in an axial slice were conducted using two-spoke pTX pulses. Even with the pre-emphasis activated, eddy current induced B0 variations with peak-to-peak values greater than 4 kHz were observed on our system during the rapid switches of slice selective gradients. This large B0 variation, when not corrected, resulted in drastically degraded excitation fidelity, with the coefficient of variation (CV) of the flip angle calculated for the region of interest being large (∼12% in the phantom and ∼35% in the brain). By comparison, excitation fidelity was effectively restored, and satisfactory flip angle uniformity was achieved when using the proposed method, with the CV value reduced to ∼3% in the phantom and ∼8% in the brain. Additionally, experimental results were in good agreement with the numerical predictions obtained from Bloch simulations. Slice-selective flip angle homogenization in the human brain at 9.4T using 16-channel 3D spoke pTX pulses is achievable despite large eddy current induced excitation phase errors; correcting for the latter was critical to this success. PMID:24205098
A mixed parallel strategy for the solution of coupled multi-scale problems at finite strains
NASA Astrophysics Data System (ADS)
Lopes, I. A. Rodrigues; Pires, F. M. Andrade; Reis, F. J. P.
2018-02-01
A mixed parallel strategy for the solution of homogenization-based multi-scale constitutive problems undergoing finite strains is proposed. The approach aims to reduce the computational time and memory requirements of non-linear coupled simulations that use finite element discretization at both scales (FE^2). In the first level of the algorithm, a non-conforming domain decomposition technique, based on the FETI method combined with a mortar discretization at the interface of macroscopic subdomains, is employed. A master-slave scheme, which distributes tasks by macroscopic element and adopts dynamic scheduling, is then used for each macroscopic subdomain composing the second level of the algorithm. This strategy allows the parallelization of FE^2 simulations in computers with either shared memory or distributed memory architectures. The proposed strategy preserves the quadratic rates of asymptotic convergence that characterize the Newton-Raphson scheme. Several examples are presented to demonstrate the robustness and efficiency of the proposed parallel strategy.
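The second-level master-slave scheme can be pictured with a toy dynamic scheduler: macroscopic elements are farmed out to worker processes one at a time, so an expensive micro-problem does not stall the others. The stub below stands in for an RVE finite element solve; all names, the worker count, and the synthetic workloads are illustrative assumptions.

```python
# Master-slave dynamic scheduling of per-element micro-problems.
from multiprocessing import Pool
import math, random

def solve_micro_problem(elem_id):
    random.seed(elem_id)
    work = random.randint(10_000, 200_000)       # uneven cost per element
    s = sum(math.sin(i) for i in range(work))    # stand-in for an RVE solve
    return elem_id, s                            # e.g. homogenized stress/tangent

if __name__ == "__main__":
    elements = range(64)                         # macroscopic elements
    with Pool(processes=4) as pool:
        # chunksize=1 gives dynamic (first-free-worker) task assignment
        for elem_id, result in pool.imap_unordered(solve_micro_problem,
                                                   elements, chunksize=1):
            pass                                 # master gathers tangents here
    print("all micro-problems solved")
```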
Li, Ye; Pang, Yong; Vigneron, Daniel; Glenn, Orit; Xu, Duan; Zhang, Xiaoliang
2011-01-01
Fetal MRI on 1.5T clinical scanners has increasingly become a powerful imaging tool for studying fetal brain abnormalities in vivo. Due to the limited availability of dedicated fetal phased arrays, commercial torso or cardiac phased arrays are routinely used for fetal scans; these are unable to provide optimized SNR and parallel imaging performance with a small number of coil elements, and give insufficient coverage and filling factor. This poses a demand for the investigation and development of dedicated and efficient radiofrequency (RF) hardware to improve fetal imaging. In this work, an investigational approach to simulating the performance of multichannel flexible phased arrays is proposed to find a better solution for fetal MR imaging. A 32-channel fetal array is presented to increase coil sensitivity, coverage and parallel imaging performance. The electromagnetic field distribution of each element of the fetal array is numerically simulated by using the finite-difference time-domain (FDTD) method. The array performance, including B1 coverage, parallel reconstructed images and artifact power, is then theoretically calculated and compared with the torso array. Study results show that the proposed array is capable of increasing B1 field strength as well as sensitivity homogeneity in the entire area of the uterus. This would ensure high quality imaging regardless of the location of the fetus in the uterus. In addition, the parallel imaging performance of the proposed fetal array is validated by comparing artifact power with the torso array. These results demonstrate the feasibility of the 32-channel flexible array for fetal MR imaging at 1.5T. PMID:22408747
Vortex phase diagram of the layered superconductor Cu0.03TaS2 for H ∥ c
NASA Astrophysics Data System (ADS)
Zhu, X. D.; Lu, J. C.; Sun, Y. P.; Pi, L.; Qu, Z.; Ling, L. S.; Yang, Z. R.; Zhang, Y. H.
2010-12-01
The magnetization and anisotropic electrical transport properties have been measured in high quality Cu0.03TaS2 single crystals. A pronounced peak effect has been observed, indicating that high quality and homogeneity are vital to the peak effect. A kink has been observed in the magnetic field (H) dependence of the in-plane resistivity ρ_ab for H ∥ c, which corresponds to a transition from activated to diffusive behavior of the vortex liquid phase. In the diffusive regime of the vortex liquid phase, the in-plane resistivity ρ_ab is proportional to H^0.3, which does not follow the Bardeen-Stephen law for free flux flow. Finally, a simplified vortex phase diagram of Cu0.03TaS2 for H ∥ c is given.
Radiative transfer in spherical shell atmospheres. II - Asymmetric phase functions
NASA Technical Reports Server (NTRS)
Kattawar, G. W.; Adams, C. N.
1978-01-01
This paper investigates the effects of sphericity on the radiation reflected from a planet with a homogeneous conservative-scattering atmosphere of optical thicknesses of 0.25 and 1.0. A Henyey-Greenstein phase function with asymmetry factors of 0.5 and 0.7 was considered. Significant differences were found when these results were compared with the plane-parallel calculations. Also, large violations of the reciprocity theorem, which is only true for plane-parallel calculations, were noted. Results are presented for the radiance versus height distributions as a function of planetary phase angle. These results will be useful to researchers in the field of remote sensing and planetary spectroscopy.
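In Monte Carlo treatments of such problems, the Henyey-Greenstein phase function is typically sampled by inverting its cumulative distribution. A minimal sketch follows; the Monte Carlo check of the mean cosine (which should equal the asymmetry factor g) is purely illustrative.

```python
# Inverse-CDF sampling of the Henyey-Greenstein phase function.
import numpy as np

def sample_hg_cos_theta(g, u):
    """Sample cos(theta) from Henyey-Greenstein with asymmetry factor g."""
    if abs(g) < 1e-6:
        return 2.0 * u - 1.0                      # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

rng = np.random.default_rng(2)
u = rng.uniform(size=1_000_000)
mu = sample_hg_cos_theta(0.5, u)
print("sample mean of cos(theta):", mu.mean())    # should be close to g = 0.5
```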
Ergül, Özgür
2011-11-01
Fast and accurate solutions of large-scale electromagnetics problems involving homogeneous dielectric objects are considered. Problems are formulated with the electric and magnetic current combined-field integral equation and discretized with the Rao-Wilton-Glisson functions. Solutions are performed iteratively by using the multilevel fast multipole algorithm (MLFMA). For the solution of large-scale problems discretized with millions of unknowns, MLFMA is parallelized on distributed-memory architectures using a rigorous technique, namely, the hierarchical partitioning strategy. Efficiency and accuracy of the developed implementation are demonstrated on very large problems involving as many as 100 million unknowns.
Dynamic and Thermal Turbulent Time Scale Modelling for Homogeneous Shear Flows
NASA Technical Reports Server (NTRS)
Schwab, John R.; Lakshminarayana, Budugur
1994-01-01
A new turbulence model, based upon dynamic and thermal turbulent time scale transport equations, is developed and applied to homogeneous shear flows with constant velocity and temperature gradients. The new model comprises transport equations for k, the turbulent kinetic energy; τ, the dynamic time scale; k_θ, the fluctuating temperature variance; and τ_θ, the thermal time scale. It offers conceptually parallel modeling of the dynamic and thermal turbulence at the two-equation level, and eliminates the customary prescription of an empirical turbulent Prandtl number, Pr_t, thus permitting a more generalized prediction capability for turbulent heat transfer in complex flows and geometries. The new model also incorporates constitutive relations, based upon invariant theory, that allow the effects of nonequilibrium to modify the primary coefficients for the turbulent shear stress and heat flux. Predictions of the new model, along with those from two other similar models, are compared with experimental data for decaying homogeneous dynamic and thermal turbulence, homogeneous turbulence with a constant temperature gradient, and homogeneous turbulence with constant temperature and velocity gradients. The new model offers improved agreement with the data for most cases considered in this work, although it was no better than the other models for several cases where all the models performed poorly.
Normal tissue complication probability modelling of tissue fibrosis following breast radiotherapy
NASA Astrophysics Data System (ADS)
Alexander, M. A. R.; Brooks, W. A.; Blake, S. W.
2007-04-01
Cosmetic late effects of radiotherapy such as tissue fibrosis are increasingly regarded as being of importance. It is generally considered that the complication probability of a radiotherapy plan depends on the dose uniformity, and can be reduced by using better compensation to remove dose hotspots. This work aimed to model the effects of improved dose homogeneity on complication probability. The Lyman and relative seriality NTCP models were fitted to clinical fibrosis data for the breast collated from the literature. Breast outlines were obtained from a commercially available Rando phantom using the Osiris system. Multislice breast treatment plans were produced using a variety of compensation methods. Dose-volume histograms (DVHs) obtained for each treatment plan were reduced to simple numerical parameters using the equivalent uniform dose and effective volume DVH reduction methods. These parameters were input into the models to obtain complication probability predictions. The fitted model parameters were consistent with a parallel tissue architecture. Conventional clinical plans generally showed reducing complication probabilities with increasing compensation sophistication. Extremely homogeneous plans representing idealized IMRT treatments showed increased complication probabilities compared to conventional planning methods, as a result of increased dose to areas receiving sub-prescription doses under conventional techniques.
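A hedged sketch of the Lyman NTCP evaluation chain described above (DVH, then equivalent uniform dose, then complication probability) is given below; the DVH bins and the parameter values (TD50, m, n) are illustrative placeholders, not the values fitted in the paper.

```python
# Lyman NTCP from a differential DVH via generalized equivalent uniform dose.
import numpy as np
from math import erf, sqrt

def eud(doses, volumes, n):
    """Generalized equivalent uniform dose from a differential DVH."""
    v = np.asarray(volumes) / np.sum(volumes)
    return float(np.sum(v * np.asarray(doses) ** (1.0 / n)) ** n)

def lyman_ntcp(d_eff, td50, m):
    t = (d_eff - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))       # cumulative normal

doses   = [20.0, 40.0, 50.0, 52.0]                # Gy, per DVH bin (assumed)
volumes = [0.10, 0.20, 0.50, 0.20]                # fractional volumes (assumed)
d_eff = eud(doses, volumes, n=1.0)                # n ~ 1: parallel-like organ
print("EUD  =", round(d_eff, 2), "Gy")
print("NTCP =", round(lyman_ntcp(d_eff, td50=48.0, m=0.35), 3))
```

With n near 1 the EUD reduces to the mean dose, which is the behavior expected for a parallel tissue architecture such as the one the fitted parameters suggested.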
Harrison, Thomas C; Sigler, Albrecht; Murphy, Timothy H
2009-09-15
We describe a simple and low-cost system for intrinsic optical signal (IOS) imaging using stable LED light sources, basic microscopes, and commonly available CCD cameras. IOS imaging measures activity-dependent changes in the light reflectance of brain tissue, and can be performed with a minimum of specialized equipment. Our system uses LED ring lights that can be mounted on standard microscope objectives or video lenses to provide a homogeneous and stable light source, with less than 0.003% fluctuation across images averaged from 40 trials. We describe the equipment and surgical techniques necessary for both acute and chronic mouse preparations, and provide software that can create maps of sensory representations from images captured by inexpensive 8-bit cameras or by 12-bit cameras. The IOS imaging system can be adapted to commercial upright microscopes or custom macroscopes, eliminating the need for dedicated equipment or complex optical paths. This method can be combined with parallel high resolution imaging techniques such as two-photon microscopy.
Wu, Xiaoping; Zhang, Xiaotong; Tian, Jinfeng; Schmitter, Sebastian; Hanna, Brian; Strupp, John; Pfeuffer, Josef; Hamm, Michael; Wang, Dingxin; Nistler, Juergen; He, Bin; Vaughan, J. Thomas; Ugurbil, Kamil; Van de Moortele, Pierre-Francois
2015-01-01
The performance of multichannel transmit coil layouts and parallel transmission (pTx) radiofrequency (RF) pulse design was evaluated with respect to transmit B1 (B1+) homogeneity and Specific Absorption Rate (SAR) at 3 Tesla for a whole body coil. Five specific coils were modeled and compared: a 32-rung birdcage body coil (driven either in a fixed quadrature mode or a two-channel transmit mode), two single-ring stripline arrays (with either 8 or 16 elements), and two multi-ring stripline arrays (with 2 or 3 identical rings, stacked in the z-axis and each comprising eight azimuthally distributed elements). Three anatomical targets were considered, each defined by a 3D volume representative of a meaningful region of interest (ROI) in routine clinical applications. For a given anatomical target, global or local SAR controlled pTx pulses were designed to homogenize RF excitation within the ROI. At the B1+ homogeneity achieved by the quadrature driven birdcage design, pTx pulses with multichannel transmit coils achieved up to ~8 fold reduction in local and global SAR. When used for imaging head and cervical spine or imaging thoracic spine, the double-ring array outperformed all coils including the single-ring arrays. While the advantage of the double-ring array became much less pronounced for pelvic imaging with a substantially larger ROI, the pTx approach still provided significant gains over the quadrature birdcage coil. For all design scenarios, using the 3-ring array did not necessarily improve the RF performance. Our results suggest that pTx pulses with multichannel transmit coils can reduce local and global SAR substantially for body coils while attaining improved B1+ homogeneity, particularly for a “z-stacked” double-ring design with coil elements arranged on two transaxial rings. PMID:26332290
Distributed computing for membrane-based modeling of action potential propagation.
Porras, D; Rogers, J M; Smith, W M; Pollard, A E
2000-08-01
Action potential propagation simulations with physiologic membrane currents and macroscopic tissue dimensions are computationally expensive. We, therefore, analyzed distributed computing schemes to reduce execution time in workstation clusters by parallelizing solutions with message passing. Four schemes were considered in two-dimensional monodomain simulations with the Beeler-Reuter membrane equations. Parallel speedups measured with each scheme were compared to theoretical speedups, recognizing the relationship between speedup and code portions that executed serially. A data decomposition scheme based on total ionic current provided the best performance. Analysis of communication latencies in that scheme led to a load-balancing algorithm in which measured speedups at 89 +/- 2% and 75 +/- 8% of theoretical speedups were achieved in homogeneous and heterogeneous clusters of workstations. Speedups in this scheme with the Luo-Rudy dynamic membrane equations exceeded 3.0 with eight distributed workstations. Cluster speedups were comparable to those measured during parallel execution on a shared memory machine.
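The theoretical speedups referred to above follow Amdahl's law for a code with a serially executing portion; the snippet below computes them for an assumed serial fraction and scales them by the measured 89% efficiency.

```python
# Amdahl's law: speedup bounded by the serially executing code portion.
def amdahl_speedup(n_workers, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

s = 0.05                       # assumed serial fraction, for illustration
for n in (2, 4, 8):
    ideal = amdahl_speedup(n, s)
    print(f"{n} workstations: theoretical speedup {ideal:.2f}, "
          f"89% of it = {0.89 * ideal:.2f}")
```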
A gyrofluid description of Alfvenic turbulence and its parallel electric field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bian, N. H.; Kontar, E. P.
2010-06-15
Anisotropic Alfvenic fluctuations with k_∥/k_⊥ ≪ 1 remain at frequencies much smaller than the ion cyclotron frequency in the presence of a strong background magnetic field. Based on the simplest truncation of the electromagnetic gyrofluid equations in a homogeneous plasma, a model for the energy cascade produced by Alfvenic turbulence is constructed, which smoothly connects the large magnetohydrodynamic scales and the small 'kinetic' scales. Scaling relations are obtained for the electromagnetic fluctuations as a function of k_⊥ and k_∥. Moreover, particular attention is paid to the spectral structure of the parallel electric field produced by Alfvenic turbulence. The reason is the potential implication of this parallel electric field in turbulent acceleration and transport of particles. For electromagnetic turbulence, this issue was raised some time ago in Hasegawa and Mima [J. Geophys. Res. 83, 1117 (1978)].
Fox, Don T.; Guo, Luanjing; Fujita, Yoshiko; ...
2015-12-17
Formation of mineral precipitates in the mixing interface between two reactant solutions flowing in parallel in porous media is governed by reactant mixing by diffusion and dispersion and is coupled to changes in porosity/permeability due to precipitation. The spatial and temporal distribution of mixing-dependent precipitation of barium sulfate in porous media was investigated with side-by-side injection of barium chloride and sodium sulfate solutions in thin rectangular flow cells packed with quartz sand. The results for homogeneous sand beds were compared to beds with higher or lower permeability inclusions positioned in the path of the mixing zone. In the homogeneous and high permeability inclusion experiments, BaSO4 precipitate (barite) formed in a narrow deposit along the length and in the center of the solution-solution mixing zone even though dispersion was enhanced within, and downstream of, the high permeability inclusion. In the low permeability inclusion experiment, the deflected BaSO4 precipitation zone broadened around one side and downstream of the inclusion and was observed to migrate laterally toward the sulfate solution. A continuum-scale fully coupled reactive transport model that simultaneously solves the nonlinear governing equations for fluid flow, transport of reactants and geochemical reactions was used to simulate the experiments and provide insight into mechanisms underlying the experimental observations. Lastly, migration of the precipitation zone in the low permeability inclusion experiment could be explained by the coupling effects among fluid flow, reactant transport and localized mineral precipitation reactions.
Improving homogeneity by dynamic speed limit systems.
van Nes, Nicole; Brandenburg, Stefan; Twisk, Divera
2010-05-01
Homogeneity of driving speeds is an important variable in determining road safety; more homogeneous driving speeds increase road safety. This study investigates the effect of introducing dynamic speed limit systems on homogeneity of driving speeds. A total of 46 subjects twice drove a route along 12 road sections in a driving simulator. The speed limit system (static-dynamic), the sophistication of the dynamic speed limit system (basic roadside, advanced roadside, and advanced in-car) and the situational condition (dangerous-non-dangerous) were varied. The homogeneity of driving speed, the rated credibility of the posted speed limit and the acceptance of the different dynamic speed limit systems were assessed. The results show that the homogeneity of individual speeds, defined as the variation in driving speed for an individual subject along a particular road section, was higher with the dynamic speed limit system than with the static speed limit system. The more sophisticated dynamic speed limit system tested within this study led to higher homogeneity than the less sophisticated systems. The acceptance of the dynamic speed limit systems used in this study was positive, they were perceived as quite useful and rather satisfactory. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
NMR, MRI, and spectroscopic MRI in inhomogeneous fields
Demas, Vasiliki; Pines, Alexander; Martin, Rachel W; Franck, John; Reimer, Jeffrey A
2013-12-24
A method for locally creating effectively homogeneous or "clean" magnetic field gradients (of high uniformity) for imaging (with NMR, MRI, or spectroscopic MRI) in both in-situ and ex-situ systems with high degrees of field inhomogeneity. The method of imaging comprises: a) providing a functional approximation of an inhomogeneous static magnetic field strength B₀(r⃗) at a spatial position r⃗; b) providing a temporal functional approximation of G⃗_shim(t) with i basis functions and j variables for each basis function, resulting in v_ij variables; c) providing a measured value Ω, which is the temporally accumulated dephasing due to the inhomogeneities of B₀(r⃗); and d) minimizing the difference in the local dephasing angle, φ(r⃗,t) = γ ∫₀ᵗ √( |B⃗₁(r⃗,t′)|² + ( r⃗·G⃗_shim(t′) + ‖B⃗₀(r⃗)‖ Δω(r⃗,t′)/γ )² ) dt′ − Ω, by varying the v_ij variables to form a set of minimized v_ij variables. The method requires calibration of the static fields prior to minimization, but may thereafter be implemented without such calibration, may be used in open or closed systems, and potentially in portable systems.
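A numeric sketch of step d) follows: the shim waveform G⃗_shim(t) is parametrized by a small temporal basis, the accumulated dephasing is evaluated by quadrature over a 1D cut through the sample, and the v_ij weights are fit to a target Ω. The B₁ term is dropped for brevity, and the field map, basis choice, and target value are all illustrative assumptions rather than the patent's calibration data.

```python
# Fit shim-waveform weights v_ij to hit a target accumulated dephasing.
import numpy as np
from scipy.optimize import minimize

gamma = 2 * np.pi * 42.58e6            # gyromagnetic ratio of 1H (rad/s/T)
ts = np.linspace(0.0, 1e-3, 200)       # 1 ms shim interval, quadrature grid
basis = np.stack([np.ones_like(ts),    # i = 2 temporal basis functions
                  np.sin(np.pi * ts / ts[-1])])
rs = np.linspace(-0.01, 0.01, 21)      # 1D cut through the sample (m)
dB0 = 2e-6 * (rs / rs.max()) ** 2      # assumed inhomogeneity map (T)
omega_target = 0.0                     # desired accumulated dephasing (rad)

def accumulated_phase(v):
    g = v @ basis                      # G_shim(t) from the v_ij weights (T/m)
    # local dephasing: integrate gamma * (r*G_shim(t) + dB0(r)) dt at each r
    return gamma * np.trapz(rs[:, None] * g[None, :] + dB0[:, None], ts, axis=1)

def cost(v):
    return np.sum((accumulated_phase(v) - omega_target) ** 2)

res = minimize(cost, x0=np.zeros(2), method="Nelder-Mead")
print("optimal weights:", res.x, " residual:", res.fun)
```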
NASA Astrophysics Data System (ADS)
Arteaga, Santiago Egido
1998-12-01
The steady-state Navier-Stokes equations are of considerable interest because they are used to model numerous common physical phenomena. The applications encountered in practice often involve small viscosities and complicated domain geometries, and they result in challenging problems in spite of the vast attention that has been dedicated to them. In this thesis we examine methods for computing the numerical solution of the primitive variable formulation of the incompressible equations on distributed memory parallel computers. We use the Galerkin method to discretize the differential equations, although most results are stated so that they apply also to stabilized methods. We also reformulate some classical results in a single framework and discuss some issues frequently dismissed in the literature, such as the implementation of pressure-space bases and non-homogeneous boundary values. We consider three nonlinear methods: Newton's method, Oseen's (or Picard) iteration, and sequences of Stokes problems. All these iterative nonlinear methods require solving a linear system at every step. Newton's method has quadratic convergence while that of the others is only linear; however, we obtain theoretical bounds showing that Oseen's iteration is more robust, and we confirm it experimentally. In addition, although Oseen's iteration usually requires more iterations than Newton's method, the linear systems it generates tend to be simpler and its overall costs (in CPU time) are lower. The Stokes problems result in linear systems which are easier to solve, but their convergence is much slower, so this approach is competitive only for large viscosities. Inexact versions of these methods are studied, and we explain why the best timings are obtained using relatively modest error tolerances in solving the corresponding linear systems. We also present a new damping optimization strategy based on the quadratic nature of the Navier-Stokes equations, which improves the robustness of all the linearization strategies considered and whose computational cost is negligible. The algebraic properties of these systems depend on both the discretization and the nonlinear method used. We study in detail the positive definiteness and skew-symmetry of the advection submatrices (essentially, convection-diffusion problems). We propose a discretization based on a new trilinear form for Newton's method. We solve the linear systems using three Krylov subspace methods, GMRES, QMR and TFQMR, and compare the advantages of each. Our emphasis is on parallel algorithms, and so we consider preconditioners suitable for parallel computers such as line variants of the Jacobi and Gauss-Seidel methods, alternating direction implicit methods, and Chebyshev and least squares polynomial preconditioners. These work well for moderate viscosities (moderate Reynolds number). For small viscosities we show that effective parallel solution of the advection subproblem is a critical factor in improving performance. Implementation details on a CM-5 are presented.
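The contrast between the nonlinear iterations can be seen on a scalar toy problem carrying the same quadratic nonlinearity as the convective term: the Oseen/Picard step freezes one factor and solves a linear problem (robust, linear rate), while Newton linearizes fully (quadratic rate near the solution). The equation and parameter values below are illustrative, not drawn from the thesis.

```python
# Scalar toy model: solve u^2 + nu*u = 1 by Picard (Oseen-type) and Newton.
import numpy as np

nu = 1.0                                         # stand-in for viscosity
f = lambda u: u * u + nu * u - 1.0
root = (-nu + np.sqrt(nu * nu + 4.0)) / 2.0      # exact positive solution

u = 2.0
print("Picard (Oseen-type) iteration:")
for k in range(6):
    u = 1.0 / (u + nu)                           # solve (u_old + nu) * u = 1
    print(f"  it {k}: error = {abs(u - root):.2e}")   # linear rate

u = 2.0
print("Newton iteration:")
for k in range(6):
    u -= f(u) / (2.0 * u + nu)                   # full linearization
    print(f"  it {k}: error = {abs(u - root):.2e}")   # quadratic rate
```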
NASA Technical Reports Server (NTRS)
Hermance, J. F. (Principal Investigator)
1981-01-01
An algorithm was developed to address the problem of electromagnetic coupling of ionospheric current systems to both a homogeneous Earth having finite conductivity, and to an Earth having gross lateral variations in its conductivity structure, e.g., the ocean-land interface. Typical results from the model simulation for ionospheric currents flowing parallel to a representative geologic discontinuity are shown. Although the total magnetic field component at satellite altitude is an order of magnitude smaller than at the Earth's surface (because of cancellation effects from the source current), the anomalous behavior of the satellite observations as the vehicle passes over the geologic contact is relatively more pronounced. The results discriminate among gross lithospheric structures because of differences in electrical conductivity.
Horsch, Martin; Vrabec, Jadran; Bernreuther, Martin; Grottel, Sebastian; Reina, Guido; Wix, Andrea; Schaber, Karlheinz; Hasse, Hans
2008-04-28
Molecular dynamics (MD) simulation is applied to the condensation process of supersaturated vapors of methane, ethane, and carbon dioxide. Simulations of systems with up to 10^6 particles were conducted with a massively parallel MD program. This leads to reliable statistics and makes nucleation rates down to the order of 10^30 m^-3 s^-1 accessible to the direct simulation approach. Simulation results are compared to the classical nucleation theory (CNT) as well as the modification of Laaksonen, Ford, and Kulmala (LFK) which introduces a size dependence of the specific surface energy. CNT describes the nucleation of ethane and carbon dioxide excellently over the entire studied temperature range, whereas LFK provides a better approach to methane at low temperatures.
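For scale, the snippet below evaluates the standard CNT barrier and rate, ΔG* = 16πσ³v²/(3(k_B T ln S)²) and J = J₀ exp(−ΔG*/k_B T); all fluid parameters and the kinetic prefactor are rough assumed values chosen only to land in the regime quoted above, not the paper's data.

```python
# Classical nucleation theory: barrier height and nucleation rate.
import numpy as np

kB = 1.380649e-23          # J/K
T = 100.0                  # K, assumed
sigma = 0.01               # J/m^2, assumed planar surface tension
v_mol = 6.0e-29            # m^3, assumed molecular volume in the liquid
S = 3.0                    # assumed supersaturation ratio
J0 = 1.0e38                # m^-3 s^-1, assumed kinetic prefactor

dG_star = 16.0 * np.pi * sigma**3 * v_mol**2 / (3.0 * (kB * T * np.log(S))**2)
J = J0 * np.exp(-dG_star / (kB * T))
print(f"barrier = {dG_star / (kB * T):.1f} kT, rate J = {J:.3e} m^-3 s^-1")
```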
Homogeneous alignment of nematic liquid crystals by ion beam etched surfaces
NASA Technical Reports Server (NTRS)
Wintucky, E. G.; Mahmood, R.; Johnson, D. L.
1979-01-01
A wide range of ion beam etch parameters capable of producing uniform homogeneous alignment of nematic liquid crystals on SiO2 films are discussed. The alignment surfaces were generated by obliquely incident (angles of 5 to 25 deg) argon ions with energies in the range of 0.5 to 2.0 keV, ion current densities of 0.1 to 0.6 mA/sq cm, and etch times of 1 to 9 min. A smaller range of ion beam parameters (2.0 keV, 0.2 mA/sq cm, 5 to 10 deg and 1 to 5 min) were also investigated with ZrO2 films and found suitable for homogeneous alignment. Extinction ratios were very high (1000), twist angles were small (< or = 3 deg) and tilt-bias angles very small (< or = 1 deg). Preliminary scanning electron microscopy results indicate a parallel oriented surface structure on the ion beam etched surfaces which may determine alignment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, L.; Bie, B. X.; Li, Q. H.
2017-06-01
In situ synchrotron x-ray imaging and diffraction are used to investigate deformation of a rolled magnesium alloy under uniaxial compression at room and elevated temperatures along two different directions. The loading axis (LA) is either perpendicular or parallel to the normal direction, and these two cases are referred to as LA⊥ and LA∥ loading, respectively. Multiscale measurements including stress-strain curves (macroscale), strain fields (mesoscale), and diffraction patterns (microscale) are obtained simultaneously. Due to initial texture, {10-12} extension twinning is predominant in the LA⊥ loading, while dislocation motion prevails in the LA∥ loading. With increasing temperature, fewer {10-12} extension twins are activated in the LA⊥ samples, giving rise to reduced strain homogenization, while pyramidal slip becomes readily activated, leading to more homogeneous deformation for the LA∥ loading. The difference in the strain hardening rates is attributed to that in strain field homogenization for these two loading directions.
Characteristics of manipulator for industrial robot with three rotational pairs having parallel axes
NASA Astrophysics Data System (ADS)
Poteyev, M. I.
1986-01-01
The dynamics of a manipulator with three rotational kinematic pairs having parallel axes are analyzed, for application in an industrial robot. The system of Lagrange equations of the second kind, describing the motion of such a mechanism in terms of kinetic energy in generalized coordinates, is reduced to equations of motion in terms of Newton's laws. These are useful not only for determining the moments of force couples which will produce a prescribed motion or, conversely, determining the motion which given force couples will produce, but also for solving optimization problems under constraints in both cases and for estimating dynamic errors. As a specific example, a manipulator with all three axes of rotation vertical is considered. The performance of this manipulator, namely the parameters of its motion as functions of time, is compared with that of a manipulator having one rotational and two translational kinematic pairs. Computer-aided simulation of their motion on the basis of ideal models, with all three links represented by identical homogeneous bars, has yielded velocity-time diagrams which indicate that the manipulator with three rotational pairs is 4.5 times faster.
Models@Home: distributed computing in bioinformatics using a screensaver based approach.
Krieger, Elmar; Vriend, Gert
2002-02-01
Due to the steadily growing computational demands in bioinformatics and related scientific disciplines, one is forced to make optimal use of the available resources. A straightforward solution is to build a network of idle computers and let each of them work on a small piece of a scientific challenge, as done by Seti@Home (http://setiathome.berkeley.edu), the world's largest distributed computing project. We developed a generally applicable distributed computing solution that uses a screensaver system similar to Seti@Home. The software exploits the coarse-grained nature of typical bioinformatics projects. Three major considerations for the design were: (1) often, many different programs are needed, while the time is lacking to parallelize them. Models@Home can run any program in parallel without modifications to the source code; (2) in contrast to the Seti project, bioinformatics applications are normally more sensitive to lost jobs. Models@Home therefore includes stringent control over job scheduling; (3) to allow use in heterogeneous environments, Linux and Windows based workstations can be combined with dedicated PCs to build a homogeneous cluster. We present three practical applications of Models@Home, running the modeling programs WHAT IF and YASARA on 30 PCs: force field parameterization, molecular dynamics docking, and database maintenance.
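The "stringent control over job scheduling" can be pictured with a toy master loop that re-queues any job whose result misses a deadline, which is how results lost to switched-off screensaver clients are recovered; the client behavior is simulated in-process and all names and timeouts are illustrative assumptions.

```python
# Toy master loop: jobs that miss their deadline are reclaimed and re-queued.
import time, random
from collections import deque

jobs = deque(range(10))            # work units, e.g. one modeling run each
in_flight = {}                     # job id -> deadline for the result
done = set()
DEADLINE = 0.05                    # seconds; illustrative timeout

random.seed(0)
while len(done) < 10:
    now = time.monotonic()
    for job, deadline in list(in_flight.items()):
        if now > deadline:         # client vanished: reclaim and re-queue
            del in_flight[job]
            jobs.append(job)
    if jobs:
        job = jobs.popleft()
        in_flight[job] = now + DEADLINE
        if random.random() < 0.7:  # simulated client: 70% report back
            del in_flight[job]
            done.add(job)
    time.sleep(0.005)              # avoid busy-waiting between checks
print("all jobs completed, including re-queued ones")
```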
Large eddy simulation of hydrodynamic cavitation
NASA Astrophysics Data System (ADS)
Bhatt, Mrugank; Mahesh, Krishnan
2017-11-01
Large eddy simulation is used to study sheet-to-cloud cavitation over a wedge. The mixture of water and water vapor is represented using a homogeneous mixture model. Compressible Navier-Stokes equations for the mixture quantities, along with a transport equation for the vapor mass fraction employing finite-rate mass transfer between the two phases, are solved using the numerical method of Gnanaskandan and Mahesh. The method is implemented on unstructured grids with parallel MPI capabilities. Flow over a wedge is simulated at Re = 200,000 and the performance of the homogeneous mixture model is analyzed in predicting the different regimes of sheet-to-cloud cavitation, namely incipient, transitory and periodic, as observed in the experimental investigation of Harish et al. This work is supported by the Office of Naval Research.
NASA Astrophysics Data System (ADS)
Zingerle, Philipp; Fecher, Thomas; Pail, Roland; Gruber, Thomas
2016-04-01
One of the major obstacles in modern global gravity field modelling is the seamless combination of lower-degree inhomogeneous gravity field observations (e.g. data from satellite missions) with (very) high-degree homogeneous information (e.g. gridded and reduced gravity anomalies, beyond d/o 1000). Current approaches mostly combine such data only on the basis of the coefficients, meaning that a spherical harmonic analysis is first done independently for both observation classes (resp. models), solving dense normal equations (NEQs) for the inhomogeneous model and block-diagonal NEQs for the homogeneous one. Obviously, such methods are unable to identify or eliminate effects such as spectral leakage due to band limitations of the models and the non-orthogonality of the spherical harmonic base functions. To counter such problems, a combination of both models on the NEQ level is desirable. Theoretically this can be achieved using NEQ stacking. Because of the higher maximum degree of the homogeneous model, a reordering of the coefficients is needed, which inevitably destroys the block-diagonal structure of the corresponding NEQ matrix and therefore also simple sparsity. Hence, a special coefficient ordering is needed to create a new favorable sparsity pattern that allows an efficient solution method. Such a pattern can be found in the so-called kite structure (Bosch, 1993), obtained when applying the kite ordering to the stacked NEQ matrix. In a first step it is shown what is needed to attain the kite (NEQ) system, how to solve it efficiently, and also how to calculate the appropriate variance information from it. Further, because of the massive computational workload when operating on large kite systems (theoretically possible up to about max. d/o 100,000), the main emphasis is put on the presentation of special distributed algorithms which can solve those systems in parallel on an arbitrary number of processes and are therefore suitable for application on supercomputers (such as SuperMUC). Finally, some specific problems are shown that occur when dealing with high-degree spherical harmonic base functions (mostly due to instabilities of Legendre polynomials), along with an appropriate solution for each.
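The NEQ-stacking idea itself is compact, as the toy below shows: a dense system and a block-structured system over the same coefficient vector are added at the normal-equation level and solved jointly. Sizes and the random design matrices are illustrative stand-ins; the kite reordering and the distributed solver are not modeled.

```python
# Toy normal-equation stacking: N = A1^T A1 + A2^T A2, solved jointly.
import numpy as np

rng = np.random.default_rng(3)
n_params = 12                                  # shared coefficient vector
x_true = rng.standard_normal(n_params)

A1 = rng.standard_normal((40, n_params))       # inhomogeneous data: dense NEQ
A2 = np.kron(np.eye(4), rng.standard_normal((10, 3)))  # gridded data with a
                                               # block-diagonal NEQ structure
y1 = A1 @ x_true + 0.01 * rng.standard_normal(40)
y2 = A2 @ x_true + 0.01 * rng.standard_normal(40)

N = A1.T @ A1 + A2.T @ A2                      # stacked normal equations
b = A1.T @ y1 + A2.T @ y2
x_hat = np.linalg.solve(N, b)
print("max coefficient error:", np.max(np.abs(x_hat - x_true)))
```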
ERIC Educational Resources Information Center
Gill, Saran Kaur
2007-01-01
Malaysia experienced a major shift in language policy in 2003 for the subjects of science and maths. This meant a change in the language of education for both national and national-type schools. For national schools, this resulted in a shift from Bahasa Malaysia, the national language to English. Parallel with this, to ensure homogeneity of impact…
Anomalous refraction of a low divergence monochromatic light beam in a transparent slab.
Lequime, Michel; Amra, Claude
2018-04-01
An exact formulation for the propagation of a monochromatic wave packet impinging on a transparent, homogeneous, isotropic, plane-parallel slab at oblique incidence is given. Approximate formulas are derived for low divergence light beams. These formulas show the presence of anomalous refraction phenomena at any slab thickness, including negative refraction and flat lensing effects, induced by reflection at the rear face.
A Generic analytical solution for modelling pumping tests in wells intersecting fractures
NASA Astrophysics Data System (ADS)
Dewandel, Benoît; Lanini, Sandra; Lachassagne, Patrick; Maréchal, Jean-Christophe
2018-04-01
The behaviour of transient flow due to pumping in fractured rocks has been studied for at least the past 80 years. Analytical solutions were proposed for solving the issue of a well intersecting and pumping from one vertical, horizontal or inclined fracture in homogeneous aquifers, but their domain of application, even if covering various fracture geometries, was restricted to isotropic or anisotropic aquifers whose potential boundaries had to be parallel or orthogonal to the fracture direction. The issue thus remains unsolved for many field cases: for example, a well intersecting and pumping a fracture in a multilayer or a dual-porosity aquifer, where intersected fractures are not necessarily parallel or orthogonal to aquifer boundaries; where several fractures with various orientations intersect the well; or where pumping occurs not only from the fractures, but also from the aquifer through the screened interval of the well. Using a mathematical demonstration, we show that integrating the well-known Theis analytical solution (Theis, 1935) along the fracture axis is identical to the equally well-known analytical solution of Gringarten et al. (1974) for a uniform-flux fracture fully penetrating a homogeneous aquifer. This result implies that any existing line- or point-source solution can be used for implementing one or more discrete fractures that are intersected by the well. Several theoretical examples are presented and discussed: a single vertical fracture in a dual-porosity aquifer or in a multi-layer system (with a partially intersecting fracture); and one or two inclined fractures in a leaky-aquifer system with pumping either only from the fracture(s), or also from the aquifer between fracture(s) in the screened interval of the well. For the cases with several pumping sources, analytical solutions for the flow-rate contribution of each individual source (fractures and well) are presented, and the drawdown behaviour according to the length of the pumped screened interval of the well is discussed. Other advantages of the proposed generic analytical solution are also given. The application of this solution to field data should provide additional information on fracture geometry, as well as identifying the connectivity between the pumped fractures and other aquifers.
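The stated equivalence can be checked numerically: integrating Theis line sources along the fracture axis yields the uniform-flux fracture drawdown. In the sketch below, the aquifer parameters, pumping rate, and geometry are illustrative assumptions.

```python
# Uniform-flux fracture drawdown by integrating Theis sources along the fracture.
import numpy as np
from scipy.special import exp1
from scipy.integrate import quad

Q = 1.0e-3      # pumping rate, m^3/s (assumed)
T = 1.0e-4      # transmissivity, m^2/s (assumed)
S = 1.0e-4      # storativity (assumed)
xf = 10.0       # fracture half-length, m (assumed)

def theis(r, t):
    """Drawdown of a single fully penetrating line source (Theis, 1935)."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

def fracture_drawdown(x, y, t):
    """Uniform-flux fracture: Theis sources integrated along the fracture."""
    integrand = lambda xp: theis(np.hypot(x - xp, y), t) / (2.0 * xf)
    val, _ = quad(integrand, -xf, xf)
    return val

t = 3600.0                                       # after one hour
print("drawdown 5 m from fracture center:", fracture_drawdown(0.0, 5.0, t), "m")
```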
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyamoto, N; Takao, S; Matsuura, T
2015-06-15
Purpose: To realize real-time-image gated proton beam therapy (RGPT) for treating mobile tumors. Methods: The rotating gantry of spot scanning proton beam therapy has been designed to carry two x-ray fluoroscopy devices that enable real-time imaging of internal fiducial markers during respiration. The three-dimensional position of a fiducial marker located near the tumor can be calculated from the fluoroscopic images obtained from orthogonal directions, and the therapeutic beam is gated on only when the fiducial marker is within the predefined gating window. The image acquisition rate can be selected from discrete values ranging from 0.1 Hz to 30 Hz. In order to confirm the effectiveness of RGPT and apply it clinically, clinical commissioning was conducted. Commissioning tests were categorized into three main parts: geometric accuracy, temporal accuracy and dosimetric evaluation. Results: The developed real-time imaging function has been installed and its basic performance has been confirmed. In the evaluation of geometric accuracy, the deviation between the three-dimensional treatment room coordinate system and the imaging coordinate system was confirmed to be less than 1 mm. Fiducial markers (gold sphere and coil) could be tracked in a simulated clinical condition using an anthropomorphic chest phantom. In the evaluation of temporal accuracy, the latency from image acquisition to the gate on/off signal was about 60 msec in a typical case. In the dosimetric evaluation, treatment beam characteristics, including beam irradiation position and dose output, were stable during gated irradiation. Homogeneity indices for the mobile target were 0.99 (static), 0.89 (without gating, motion parallel to the scan direction), 0.75 (without gating, perpendicular), 0.98 (with gating, parallel) and 0.93 (with gating, perpendicular). Dose homogeneity to the mobile target can thus be maintained in RGPT. Conclusion: A real-time imaging function utilizing x-ray fluoroscopy has been developed and commissioned successfully in order to realize RGPT. Funding Support: This research was partially supported by the Japan Society for the Promotion of Science (JSPS) through the FIRST Program. Conflict of Interest: Prof. Shirato has research funding from Hitachi Ltd, Mitsubishi Heavy Industries Ltd and Shimadzu Corporation.
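The gating logic described above reduces to a simple geometric test: the beam is held on only while the reconstructed 3-D marker position lies inside the predefined window. A toy sketch of that test (all positions and window sizes are hypothetical; the clinical implementation is of course far more involved):

```python
import numpy as np

# Hypothetical planned marker position and gating-window half-widths (mm).
center = np.array([0.0, 0.0, 0.0])
window = np.array([2.0, 2.0, 2.0])

def beam_gate_on(marker_xyz):
    """True while the fiducial marker lies inside the gating window."""
    return bool(np.all(np.abs(np.asarray(marker_xyz) - center) <= window))

print(beam_gate_on([0.5, -1.2, 0.3]))   # True  -> beam on
print(beam_gate_on([0.5, -2.5, 0.3]))   # False -> beam held
```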
NASA Astrophysics Data System (ADS)
Davis, Jeffrey Michael
The recent focus on microfluidic devices has generated substantial interest in small-scale transport phenomena. Because the surface to volume ratio scales inversely with the characteristic length scale, surface forces dominate in microscale systems. In particular, these forces can be manipulated to regulate the motion of thin liquid films. The dynamics and stability of thermocapillary spreading films are theoretically investigated in this dissertation for flow on homogeneous and chemically or topographically patterned substrates. Because the governing equations for spreading films driven by other forces are analogous, the approach and results are valid for general lubrication flows. Experiments have shown that films spreading on homogeneous substrates can undergo a flow transition from a uniform front at the advancing solid-liquid-vapor contact line to an array of parallel rivulets. This instability is investigated via a non-modal, transient analysis because the relevant linearized disturbance operators for spatially inhomogeneous thin films are nonnormal. Stability results for three different contact line models are compared. This investigation of thermocapillary driven spreading is also pursued in the context of characterizing a novel, open-architecture microfluidic device based on flow confinement to completely wetting microstripes through chemical micropatterning of the substrate. The resulting lateral curvature of the fluid significantly influences the dynamics of the liquid. Applied to the dip coating of these patterned substrates, hydrodynamic scaling arguments are used to derive a replacement for the classical Landau-Levich result for homogeneous substrates. Thermocapillary flow along wetting microstripes is then characterized. The lateral curvature modifies the expected spreading velocity and film profile and also suppresses the capillary ridge and instability observed at the advancing contact line on homogeneous surfaces. In addition, a lubrication-based model is derived to quantify the significant effects of lateral film curvature and fluid confinement on the transverse diffusive broadening in two microstreams merging at a Y-junction. Finally, the analysis is extended to lubrication flow over chemically uniform but topographically patterned substrates. A transient analysis is employed to determine the evolution of disturbances to the capillary ridges induced by the substrate topography.
Wagler, Patrick F; Tangen, Uwe; Maeke, Thomas; McCaskill, John S
2012-07-01
The topic addressed is that of combining self-constructing chemical systems with electronic computation to form unconventional embedded computation systems performing complex nano-scale chemical tasks autonomously. The hybrid route to complex programmable chemistry, and ultimately to artificial cells based on novel chemistry, requires a solution of the two-way massively parallel coupling problem between digital electronics and chemical systems. We present a chemical microprocessor technology and show how it can provide a generic programmable platform for complex molecular processing tasks in Field Programmable Chemistry, including steps towards the grand challenge of constructing the first electronic chemical cells. Field programmable chemistry employs a massively parallel field of electrodes, under the control of latched voltages, which are used to modulate chemical activity. We implement such a field programmable chemistry which links to chemistry in rather generic, two-phase microfluidic channel networks that are separated into weakly coupled domains. Electric fields, produced by the high-density array of electrodes embedded in the channel floors, are used to control the transport of chemicals across the hydrodynamic barriers separating domains. In the absence of electric fields, separate microfluidic domains are essentially independent with only slow diffusional interchange of chemicals. Electronic chemical cells, based on chemical microprocessors, exploit a spatially resolved sandwich structure in which the electronic and chemical systems are locally coupled through homogeneous fine-grained actuation and sensor networks and play symmetric and complementary roles. We describe how these systems are fabricated, experimentally test their basic functionality, simulate their potential (e.g. for feed forward digital electrophoretic (FFDE) separation) and outline the application to building electronic chemical cells. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Heterogeneous nucleation from a supercooled ionic liquid on a carbon surface
NASA Astrophysics Data System (ADS)
He, Xiaoxia; Shen, Yan; Hung, Francisco R.; Santiso, Erik E.
2016-12-01
Classical molecular dynamics simulations were used to study the nucleation of the crystal phase of the ionic liquid [dmim+][Cl-] from its supercooled liquid phase, both in the bulk and in contact with a graphitic surface of D = 3 nm. By combining the string method in collective variables [Maragliano et al., J. Chem. Phys. 125, 024106 (2006)], with Markovian milestoning with Voronoi tessellations [Maragliano et al., J. Chem. Theory Comput. 5, 2589-2594 (2009)] and order parameters for molecular crystals [Santiso and Trout, J. Chem. Phys. 134, 064109 (2011)], we computed minimum free energy paths, the approximate size of the critical nucleus, the free energy barrier, and the rates involved in these nucleation processes. For homogeneous nucleation, the subcooled liquid phase has to overcome a free energy barrier of ˜85 kcal/mol to form a critical nucleus of size ˜3.6 nm, which then grows into the monoclinic crystal phase. This free energy barrier becomes about 42% smaller (˜49 kcal/mol) when the subcooled liquid phase is in contact with a graphitic disk, and the critical nucleus formed is about 17% smaller (˜3.0 nm) than the one observed for homogeneous nucleation. The crystal formed in the heterogeneous nucleation scenario has a structure similar to that of the bulk crystal, with the exception of the layers of ions next to the graphene surface, which have a larger local density and in which the cations lie with their imidazolium rings parallel to the graphitic surface. The critical nucleus forms near the graphene surface, separated only by these layers of ions. The heterogeneous nucleation rate (˜4.8 × 1011 cm-3 s-1) is about one order of magnitude faster than the homogeneous rate (˜6.6 × 1010 cm-3 s-1). The computed free energy barriers and nucleation rates are in reasonable agreement with experimental and simulation values obtained for the homogeneous and heterogeneous nucleation of other systems (ice, urea, Lennard-Jones spheres, and oxide glasses).
Pegis, Michael L.; McKeown, Bradley A.; Kumar, Neeraj; ...
2016-10-28
Improvement of electrocatalysts for the oxygen reduction reaction (ORR) is critical for the advancement of fuel cell technologies. Herein, we report a series of eleven soluble iron porphyrin ORR electrocatalysts that possess turnover frequencies (TOFs) from 3 s⁻¹ to an unprecedented 2.2 × 10⁶ s⁻¹. These TOFs correlate with the ORR overpotential, which can be changed by modulating the ancillary ligand, by varying the reaction conditions, or by changing the catalyst's protonation state. This is the first such correlation for homogeneous ORR electrocatalysis, and it demonstrates that the remarkably fast TOFs are a consequence of the high overpotential. Computational studies indicate that the correlation is analogous to the volcano plot analysis developed for heterogeneous ORR materials. This unique parallel between homo- and heterogeneous ORR electrocatalysts allows a fundamental understanding of the intrinsic barriers associated with the ORR, which can aid the design of new catalytic systems that operate at low overpotential. This research was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences. Additional data are given in the Electronic Supporting Information.
Quality Designed Twin Wire Arc Spraying of Aluminum Bores
NASA Astrophysics Data System (ADS)
König, Johannes; Lahres, Michael; Methner, Oliver
2015-01-01
After 125 years of development, combustion engine powerplants still attract a great deal of attention. The efficiency of engines has been increased continuously through numerous innovations in recent years. Especially in the field of engine manufacturing, consistent friction optimization leads to cost-effective fuel consumption advantages and a CO2 reduction. This is the motivation for, and the adjusting lever of, NANOSLIDE® from Mercedes-Benz. The twin wire arc-spraying process applied to the aluminum bore creates a thin, iron-carbon-alloyed coating, which is surface-finished by honing. Owing to the continuous development of engines, the coating strategies must be adapted in parallel to achieve a quality-conforming coating result. The most important factors to this end are the controlled assurance of a minimum coating thickness and a homogeneous coating deposition over the complete bore. A dedicated system enables measuring and adjusting the part and centrally plunging the coating torch into the bore to achieve a homogeneous coating thickness. Measuring the bore diameter before and after coating enables conclusions about the coating thickness. A software tool specifically developed for coating deposition can transfer this information to a model that predicts the coating deposition as a function of the coating strategy.
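The before/after diameter measurement described above determines the radial coating thickness directly. A trivial sketch of that relation (the diameters are hypothetical; the production system measures many points per bore to verify homogeneity):

```python
def coating_thickness(d_before_mm, d_after_mm):
    """Radial coating thickness per wall from bore diameters measured
    before and after twin wire arc spraying."""
    return (d_before_mm - d_after_mm) / 2.0

print(coating_thickness(83.00, 82.40))  # -> 0.3 mm per wall
```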
Introduction of Parallel GPGPU Acceleration Algorithms for the Solution of Radiative Transfer
NASA Technical Reports Server (NTRS)
Godoy, William F.; Liu, Xu
2011-01-01
General-purpose computing on graphics processing units (GPGPU) is a recent technique that allows the parallel graphics processing unit (GPU) to accelerate calculations performed sequentially by the central processing unit (CPU). To introduce GPGPU to radiative transfer, the Gauss-Seidel solution of the well-known expressions for 1-D and 3-D homogeneous, isotropic media is selected as a test case. Different algorithms are introduced to balance memory and GPU-CPU communication, critical aspects of GPGPU. Results show that speed-ups of one to two orders of magnitude are obtained when compared to sequential solutions. The underlying value of GPGPU is its potential extension in radiative solvers (e.g., Monte Carlo, discrete ordinates) at a minimal learning curve.
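A minimal CPU-side sketch of the Gauss-Seidel iteration used as the test case above, applied to a generic diagonally dominant system standing in for a discretized radiative transfer operator (the paper's GPU kernels and memory layouts are not reproduced here):

```python
import numpy as np

def gauss_seidel(A, b, iters=200, tol=1e-10):
    """Sequential Gauss-Seidel sweeps for A x = b; this is the baseline
    computation that the GPGPU kernels accelerate (illustration only)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated entries x[:i] and old entries x[i+1:].
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Diagonally dominant tridiagonal test system (a stand-in for a
# discretized 1-D radiative transfer operator).
n = 50
A = 4 * np.eye(n) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
b = np.ones(n)
print(gauss_seidel(A, b)[:5])
```

The inner loop's sequential dependence is exactly what makes the GPU mapping nontrivial and motivates the memory/communication-balancing algorithms the abstract mentions.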
CA1 pyramidal cell diversity enabling parallel information processing in the hippocampus
Soltesz, Ivan; Losonczy, Attila
2018-01-01
Hippocampal network operations supporting spatial navigation and declarative memory are traditionally interpreted in a framework where each hippocampal area, such as the dentate gyrus, CA3, and CA1, consists of homogeneous populations of functionally equivalent principal neurons. However, heterogeneity within hippocampal principal cell populations, in particular within pyramidal cells at the main CA1 output node, is increasingly recognized and includes developmental, molecular, anatomical, and functional differences. Here we review recent progress in the delineation of hippocampal principal cell subpopulations by focusing on radially defined subpopulations of CA1 pyramidal cells, and we consider how functional segregation of information streams, in parallel channels with nonuniform properties, could represent a general organizational principle of the hippocampus supporting diverse behaviors. PMID:29593317
An abstract model for radiative transfer in an atmosphere with reflection by the planetary surface
NASA Astrophysics Data System (ADS)
Greenberg, W.; van der Mee, C. V. M.
1985-07-01
A Hilbert-space model is developed that applies to radiative transfer in a homogeneous, plane-parallel planetary atmosphere. Reflection and absorption by the planetary surface are taken into account by imposing a reflective boundary condition. The existence and uniqueness of the solution of this boundary value problem are established by proving the invertibility of a scattering operator using the Fredholm alternative.
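For context, a standard form of the equation such Hilbert-space models treat is the monoenergetic transport equation for a homogeneous plane-parallel atmosphere with isotropic scattering; the reflective boundary condition shown is the Lambertian special case (a sketch of the assumed setting, not the paper's exact operator formulation):

```latex
% Transport equation on the slab 0 < \tau < \tau_0 with single-scattering
% albedo \omega and direction cosine \mu:
\mu \,\frac{\partial I(\tau,\mu)}{\partial \tau} + I(\tau,\mu)
  = \frac{\omega}{2}\int_{-1}^{1} I(\tau,\mu')\,\mathrm{d}\mu' ,
\qquad -1 \le \mu \le 1 .
% Boundary conditions: prescribed incident radiation at the top,
% Lambertian reflection with surface albedo \rho at the bottom:
I(0,\mu) = I_{\mathrm{inc}}(\mu) \quad (\mu > 0), \qquad
I(\tau_0,-\mu) = 2\rho \int_0^1 I(\tau_0,\mu')\,\mu'\,\mathrm{d}\mu' \quad (\mu > 0).
```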
Influence of vein fabric on strain distribution and fold kinematics
NASA Astrophysics Data System (ADS)
Torremans, Koen; Muchez, Philippe; Sintubin, Manuel
2014-05-01
Abundant pre-folding, bedding-parallel fibrous dolomite veins in shale are found associated with the Nkana-Mindola stratiform Cu-Co deposit in the Central African Copperbelt, Zambia. These monomineralic veins extend for several meters along strike, with a fibrous infill orthogonal to low-tortuosity vein walls. Growth morphologies vary from antitaxial with a pronounced median surface to asymmetric syntaxial, always with small but quantifiable growth competition. Subsequently, these veins were folded. In this study, we aim to constrain the kinematic fold mechanism by which strain is accommodated in these veins, estimate paleorheology at the time of deformation and investigate the influence of vein fabric on deformation during folding. Finally, the influence of the deformation on known metallogenetic stages is assessed. Various deformation styles are observed, ultimately related to vein attitude across tight to close lower-order, hectometre-scale folds. In fold hinges, at low to average dips, veins are (poly-)harmonically to disharmonically folded as parasitic folds in single or multilayer systems. With increasing distance from the fold hinge, parasitic fold amplitude decreases and asymmetry increases. At high dips in the limbs, low-displacement duplication thrusts of veins at low angles to bedding are abundant. Slickenfibres and slickenlines are sub-perpendicular to fold hinges, and shallow-dipping slickenfibre-step lineations are parallel to local fold hinge lines. A dip isogon analysis of reconstructed fold geometries prior to homogeneous shortening reveals type 1B parallel folds for the veins and type 1C folds for the matrix. Two main deformation mechanisms are identified in folded veins. Firstly, undulatory extinction, subgrains and fluid-inclusion planes parallel to the fibre long axis, with deformation intensity increasing away from the fold hinges, indicate intracrystalline strain accumulation. Secondly, intergranular deformation through bookshelf rotation of fibres, via collective parallel rotation of fibres and shearing along fibre grain boundaries, is clearly observed under cathodoluminescence. We analysed the internal strain distribution by quantifying the simple shear strain caused by deflection of the initially orthogonal fibres relative to layer inclination at a given position across the fold. The shear angle, and thus shear strain, steadily increases towards the limbs away from the fold hinge. Comparison of the observed shear strain with the theoretical distributions for kinematic mechanisms, amongst other lines of evidence, clearly points to pure flexural flow followed by homogeneous shortening. As flexural flow is not the expected kinematic folding mechanism for competent layers in an incompetent shale matrix, our analysis shows that the internal vein fabric in these dolomite veins can exert a first-order influence on folding mechanisms. In addition, quantitative analysis shows that these veins acted as rigid objects with a high viscosity contrast relative to the incompetent carbonaceous shale, rather than as semi-passive markers. Later folding-related syn-orogenic veins, intensely mineralised with Cu-Co sulphides, are strongly related to deformation of these pre-folding veins. The high viscosity contrast created by the pre-folding fibrous dolomite veins was therefore essential in creating transient permeability for subsequent mineralising stages in the veining history.
Altered thymidylate synthetase in 5-fluorodeoxyuridine-resistant Ehrlich ascites carcinoma cells.
Jastreboff, M M; Kedzierska, B; Rode, W
1983-07-15
Thymidylate synthetase from 5-fluorodeoxyuridine-resistant Ehrlich ascites carcinoma cells was purified to a state close to electrophoretic homogeneity (sp. act. = 1.3 μmol/min/mg protein) and studied in parallel with the homogeneous preparation of the enzyme from the parental Ehrlich ascites carcinoma cells. The enzyme from the resistant cells, compared to that from the parental cells, showed: (i) a higher turnover number (at least 91 against 31 min⁻¹), (ii) a higher inhibition constant (19 against 1.9 nM) for FdUMP (a tight-binding inhibitor of both enzymes), (iii) a lower activation energy at temperatures above 36° (1.37 against 2.59 kcal/mole), and (iv) a lower inhibition constant (26 against 108 μM) for dTMP, which inhibits both enzymes competitively versus dUMP.
A network approach to decentralized coordination of energy production-consumption grids.
Omodei, Elisa; Arenas, Alex
2018-01-01
Energy grids are facing a relatively new paradigm consisting in the formation of local distributed energy sources and loads that can operate in parallel, independently from the main power grid (usually called microgrids). One of the main challenges in the management of microgrid-like networks is that of self-adapting to production and demand in a decentralized, coordinated way. Here, we propose a stylized model that allows analytical prediction of the coordination of the elements in the network, depending on the network topology. Surprisingly, almost global coordination is attained when users interact locally, within a small neighborhood, instead of through the obvious but more costly all-to-all coordination. We compute analytically the optimal number of coordinated users in random homogeneous networks. The proposed methodology opens a new way of approaching the analysis of energy demand-side management in networked systems.
An artificial intelligence approach to lithostratigraphic correlation using geophysical well logs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olea, R.A.; Davis, J.C.
1986-01-01
Computer programs for lithostratigraphic correlation of well logs have achieved limited success. Their algorithms are based on an oversimplified view of the manual process used by analysts to establish geologically correct correlations. The programs experience difficulties if the correlated rocks deviate from an ideal geometry of perfectly homogeneous, parallel layers of infinite extent. Artificial intelligence provides a conceptual basis for formulating the task of lithostratigraphic correlation, leading to more realistic procedures. A prototype system using the "production rule" approach of expert systems successfully correlates well logs in areas of stratigraphic complexity. Two digitized logs are used per well, one for curve matching and the other for lithologic comparison. The software has been successfully used to correlate more than 100,000 ft (30,480 m) of section, through clastic sequences in Louisiana and through carbonate sequences in Kansas. Correlations have been achieved even in the presence of faults, unconformities, facies changes, and lateral variations in bed thickness.
Pang, Yong; Yu, Baiying; Vigneron, Daniel B; Zhang, Xiaoliang
2014-02-01
Quadrature coils are often desired in MR applications because they can improve MR sensitivity and also reduce excitation power. In this work, we propose, for the first time, a quadrature array design strategy for parallel transmission at 298 MHz using a single-feed circularly polarized (CP) patch antenna technique. Each array element is a nearly square ring microstrip antenna fed at a point on the diagonal of the antenna to generate quadrature magnetic fields. Compared with conventional quadrature coils, the single-feed structure is much simpler and more compact, making the quadrature coil array design practical. Numerical simulations demonstrate that the decoupling between elements is better than -35 dB for all elements and that the RF fields are homogeneous, with deep penetration and quadrature behavior in the area of interest. A Bloch equation simulation of the excitation procedure using an 8-element quadrature planar patch array demonstrates its feasibility for parallel transmission at the ultrahigh field of 7 Tesla.
Kameda, Hiroyuki; Kudo, Kohsuke; Matsuda, Tsuyoshi; Harada, Taisuke; Iwadate, Yuji; Uwano, Ikuko; Yamashita, Fumio; Yoshioka, Kunihiro; Sasaki, Makoto; Shirato, Hiroki
2017-12-04
Respiration-induced phase shift affects B0/B1+ mapping repeatability in parallel transmission (pTx) calibration for 7T brain MRI, but is improved by breath-holding (BH). However, BH cannot be applied during long scans. To examine whether interleaved acquisition during calibration scanning could improve pTx repeatability and image homogeneity. Prospective. Nine healthy subjects. 7T MRI with a two-channel RF transmission system was used. Calibration scanning for B0/B1+ mapping was performed under sequential acquisition/free-breathing (Seq-FB), Seq-BH, and interleaved acquisition/FB (Int-FB) conditions. The B0 map was calculated with two echo times, and the B1+ map was obtained using the Bloch-Siegert method. Actual flip-angle imaging (AFI) and gradient echo (GRE) imaging were performed using pTx and quadrature-Tx (qTx). All scans were acquired in five sessions. Repeatability was evaluated using the intersession standard deviation (SD) or coefficient of variance (CV), and in-plane homogeneity was evaluated using the in-plane CV. A paired t-test with Bonferroni correction for multiple comparisons was used. The intersession CV/SDs for the B0/B1+ maps were significantly smaller in Int-FB than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The intersession CVs for the AFI and GRE images were also significantly smaller in Int-FB, Seq-BH, and qTx than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The in-plane CVs for the AFI and GRE images in Seq-FB, Int-FB, and Seq-BH were significantly smaller than in qTx (Bonferroni-corrected P < 0.01 for all). Using interleaved acquisition during calibration scans of pTx for 7T brain MRI improved the repeatability of B0/B1+ mapping, AFI, and GRE images, without BH. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017. © 2017 International Society for Magnetic Resonance in Medicine.
Evaluating the Poroelastic Effect on Anisotropic, Organic-Rich, Mudstone Systems
NASA Astrophysics Data System (ADS)
Suarez-Rivera, Roberto; Fjær, Erling
2013-05-01
Understanding the poroelastic effect in anisotropic organic-rich mudstones is of high interest and value for evaluating the coupled effects of rock deformation and pore pressure during drilling, completion and production operations in the oilfield. These applications include modeling and prevention of time-dependent wellbore failure, improved predictions of fracture initiation during hydraulic fracturing operations (Suarez-Rivera et al. Presented at the Canadian Unconventional Resources Conference held in Calgary, Alberta, Canada, 15-17 November 2011. CSUG/SPE 146998 2011), improved understanding of the evolution of pore pressure during basin development, including subsidence and uplift, and the equilibrated effective in situ stress (Charlez, Rock mechanics, vol 2 1997; Katahara and Corrigan, Pressure regimes in sedimentary basins and their prediction: AAPG Memoir, vol 76, pp 73-78 2002; Fjær et al. Petroleum related rock mechanics. 2nd edn 2008). In isotropic rocks, the coupled poroelastic deformations of the solid framework and the pore fluids are controlled by the Biot and Skempton coefficients. These are the two fundamental properties that relate the rock framework and fluid compressibility and define the magnitude of the poroelastic effect. In transversely isotropic rocks, one needs to understand the variability of these coefficients along the directions parallel and perpendicular to the principal directions of material symmetry (usually the direction of bedding). These types of measurements are complex and uncommon in low-porosity rocks, and particularly problematic and scarce in tight shales. In this paper, we discuss a methodology for evaluating the Biot coefficient and its variability along the directions parallel and perpendicular to bedding as a function of stress, and the homogenized Skempton coefficient, also as a function of stress. We also predict the pore pressure change that results during undrained compression. Most importantly, we provide values of the transverse and longitudinal Biot coefficients and the homogenized Skempton coefficient for two important North American, gas-producing, organic-rich mudstones. These results could be used for petroleum-related applications.
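The standard definitions behind the coefficients discussed above, written for the transversely isotropic case (a sketch of the usual conventions, not values or derivations from the paper):

```latex
% Anisotropic effective stress with direction-dependent Biot coefficients
% (\alpha_h in the bedding plane, \alpha_v normal to bedding):
\sigma'_{ij} = \sigma_{ij} - \alpha_{ij}\, p ,
\qquad
\alpha_{ij} = \mathrm{diag}(\alpha_h,\ \alpha_h,\ \alpha_v) .
% Skempton coefficient: undrained pore-pressure response to a change
% in mean stress \sigma_m:
B = \left.\frac{\Delta p}{\Delta \sigma_m}\right|_{\text{undrained}} .
```

Measuring α parallel and perpendicular to bedding as a function of stress, and the homogenized B, is exactly the program the abstract describes for tight, organic-rich mudstones.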
Li, Mingyan; Zuo, Zhentao; Jin, Jin; Xue, Rong; Trakic, Adnan; Weber, Ewald; Liu, Feng; Crozier, Stuart
2014-03-01
Parallel imaging (PI) is widely used for imaging acceleration by means of the coil spatial sensitivities associated with phased array coils (PACs). By employing a time-division multiplexing technique, a single-channel rotating radiofrequency coil (RRFC) provides an alternative method to reduce scan time. Strategically combining these two concepts could provide enhanced acceleration and efficiency. In this work, the imaging acceleration ability and homogeneous image reconstruction strategy of a 4-element rotating radiofrequency coil array (RRFCA) were numerically investigated and experimentally validated at 7T with a homogeneous phantom. Each coil of the RRFCA was capable of acquiring a large number of sensitivity profiles, leading to better acceleration performance, illustrated by improved geometry-factor maps with lower maximum values and more uniform distributions compared to 4- and 8-element stationary arrays. A reconstruction algorithm, rotating SENSitivity Encoding (rotating SENSE), was proposed to provide image reconstruction. Additionally, by optimally choosing the angular sampling positions and transmit profiles under the rotating scheme, phantom images could be faithfully reconstructed. The results indicate that the proposed technique is able to provide homogeneous reconstructions with overall higher and more uniform signal-to-noise ratio (SNR) distributions at high reduction factors. It is hoped that, by exploiting the high imaging acceleration and homogeneous image reconstruction ability of the RRFCA, the proposed method will facilitate human imaging at ultra-high field MRI. Copyright © 2013 Elsevier Inc. All rights reserved.
Cholinesterase activity during embryonic development in the blood-feeding bug Triatoma patagonica.
Visciarelli, E C; Chopa, C Sánchez; Picollo, M I; Ferrero, A A
2011-09-01
Triatoma patagonica Del Ponte (Hemiptera: Reduviidae), a vector of Chagas' disease, is widely distributed in Argentina and is found in sylvatic and peridomiciliary ecotopes, as well as occasionally in human dwellings after the chemical control of Triatoma infestans. Anti-cholinesterase products can be applied in peridomiciliary areas, and thus knowledge of cholinesterase activity during embryonic development in this species might contribute further information relevant to effective chemical control. Cholinesterase activity was characterized by reactions to eserine 10⁻⁵ M, to increasing concentrations of substrate and to varying centrifugal speeds. Acetylcholinesterase activity was detected on day 4 and was significant from day 5. A reduction in cholinesterase activity towards acetylthiocholine (ATC) was observed on days 9 and 10 of development. Cholinesterase activity towards ATC and butyrylthiocholine (BTC) in homogenates of eggs was inhibited by eserine 10⁻⁵ M. The shape of the curve indicating levels of inhibition at different concentrations of ATC was typical of acetylcholinesterase. Activity towards BTC did not appear to be inhibited by excess substrate, which parallels the behaviour of butyrylcholinesterases. Cholinesterase activity towards ATC was reduced in supernatant centrifuged at 15,000 g compared with supernatant centrifuged at 1100 g. The cholinesterase system that hydrolyzes mainly ATC seems to belong to the nervous system, as indicated by its behaviour towards the substrates assayed, its greater insolubility, and the fact that it evolves in parallel with the development of the nervous system. Knowledge of biochemical changes associated with the development and maturation of the nervous system during embryonic development would contribute to a better understanding of anti-cholinesterase compounds with ovicidal action that might be used in control campaigns against vectors of Chagas' disease. © 2011 The Authors. Medical and Veterinary Entomology © 2011 The Royal Entomological Society.
Simulation of the UV-radiation at the Martian surface
NASA Astrophysics Data System (ADS)
Kolb, C.; Stimpfl, P.; Krenn, H.; Lammer, H.; Kargl, G.; Abart, R.; Patel, M. R.
The UV radiation at the Martian surface is important for several reasons. UV radiation can cause specific damage in DNA-containing living systems and is involved in the formation of catalytically produced oxidants such as superoxide ions and peroxides, which are capable of oxidizing and subsequently destroying organic matter. Lab simulations are necessary to investigate and understand the removal of organic matter at the Martian surface. We designed a radiation apparatus which simulates the solar spectrum at the Martian surface between 200 and 700 nm. The system consists of a UV-enhanced xenon arc lamp and special exchangeable filter sets and mirrors for simulating the effects of the Martian atmospheric column and dust loading. A special collimating system bundles the final parallel beam so that the intensity at the target spot is independent of the distance between the ray source and the sample. The system was calibrated by means of an optical photo-spectrometer to align the ray output with the theoretical target spectrum and to ensure spectral homogeneity. We present preliminary data on the calibration and performance of our system, which is integrated in the Austrian Mars simulation facility.
An Investigation of the Impact of Guessing on Coefficient α and Reliability
2014-01-01
Guessing is known to influence the test reliability of multiple-choice tests. Although there are many studies that have examined the impact of guessing, they used rather restrictive assumptions (e.g., parallel test assumptions, homogeneous inter-item correlations, homogeneous item difficulty, and homogeneous guessing levels across items) to evaluate the relation between guessing and test reliability. Based on the item response theory (IRT) framework, this study investigated the extent of the impact of guessing on reliability under more realistic conditions where item difficulty, item discrimination, and guessing levels actually vary across items with three different test lengths (TL). By accommodating multiple item characteristics simultaneously, this study also focused on examining interaction effects between guessing and other variables entered in the simulation to be more realistic. The simulation of the more realistic conditions and calculations of reliability and classical test theory (CTT) item statistics were facilitated by expressing CTT item statistics, coefficient α, and reliability in terms of IRT model parameters. In addition to the general negative impact of guessing on reliability, results showed interaction effects between TL and guessing and between guessing and test difficulty.
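A minimal simulation in the spirit of the study design above: dichotomous responses generated under a 3PL IRT model with varying guessing levels, with coefficient α computed from the simulated item scores (all item parameters are hypothetical illustrations; this is not the study's actual simulation code):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_3pl(n_persons=2000, n_items=30, c=0.2):
    """Simulate responses under a 3PL model, where the guessing
    parameter c raises the floor of the item response function."""
    theta = rng.standard_normal(n_persons)[:, None]   # person abilities
    a = rng.uniform(0.8, 2.0, n_items)                # discriminations vary
    b = rng.uniform(-1.5, 1.5, n_items)               # difficulties vary
    p = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
    return (rng.uniform(size=p.shape) < p).astype(int)

def cronbach_alpha(X):
    """Coefficient alpha from an examinee-by-item score matrix."""
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

# Reliability tends to drop as the guessing level rises, consistent
# with the general negative impact reported above.
for c in (0.0, 0.2, 0.35):
    print(c, round(cronbach_alpha(simulate_3pl(c=c)), 3))
```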
Spatiotemporal dynamics of a digital phase-locked loop based coupled map lattice system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banerjee, Tanmoy, E-mail: tbanerjee@phys.buruniv.ac.in; Paul, Bishwajit; Sarkar, B. C.
2014-03-15
We explore the spatiotemporal dynamics of a coupled map lattice (CML) system, which is realized with a one-dimensional array of locally coupled digital phase-locked loops (DPLLs). The DPLL is a nonlinear feedback-controlled system widely used as an important building block of electronic communication systems. We derive the phase-error equation of the spatially extended system of coupled DPLLs, which resembles a form of the equation of a CML system. We carry out stability analysis for the synchronized homogeneous solutions using the circulant matrix formalism. It is shown through extensive numerical simulations that with the variation of the nonlinearity parameter and coupling strength the system shows transitions among several generic features of spatiotemporal dynamics, viz., synchronized fixed point solution, frozen random pattern, pattern selection, spatiotemporal intermittency, and fully developed spatiotemporal chaos. We quantify the spatiotemporal dynamics using quantitative measures like the average quadratic deviation and the spatial correlation function. We emphasize that instead of using an idealized model of a CML, which is usually employed to observe spatiotemporal behaviors, we consider a real-world physical system and establish the existence of spatiotemporal chaos and other patterns in this system. We also discuss the importance of the present study in engineering applications like the removal of clock skew in parallel processors.
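The generic CML update underlying such studies is a local map with diffusive nearest-neighbor coupling. A minimal sketch (the logistic map stands in for the DPLL phase-error map derived in the paper, whose exact form is not reproduced here; all parameter values are illustrative):

```python
import numpy as np

def cml_step(x, eps, f):
    """One update of a 1-D diffusively coupled map lattice with periodic
    boundaries: x_i <- (1-eps) f(x_i) + (eps/2) [f(x_{i-1}) + f(x_{i+1})]."""
    fx = f(x)
    return (1 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))

# Logistic map as a stand-in local nonlinearity.
f = lambda x, r=3.9: r * x * (1 - x)

x = np.random.default_rng(2).uniform(size=64)
for _ in range(1000):
    x = cml_step(x, eps=0.3, f=f)

# Average quadratic deviation from the spatial mean, one of the
# quantitative measures mentioned above (zero for a synchronized state).
print(np.mean((x - x.mean()) ** 2))
```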
PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan
PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, the ability to perform multiple-realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.
Ardekani, Siamak; Selva, Luis; Sayre, James; Sinha, Usha
2006-11-01
Single-shot echo-planar based diffusion tensor imaging is prone to geometric and intensity distortions. Parallel imaging is a means of reducing these distortions while preserving spatial resolution. A quantitative comparison at 3 T of parallel imaging for diffusion tensor images (DTI) using k-space (generalized auto-calibrating partially parallel acquisitions; GRAPPA) and image-domain (sensitivity encoding; SENSE) reconstructions at different acceleration factors, R, is reported here. Images were evaluated using 8 human subjects, with repeated scans for 2 subjects to estimate reproducibility. Mutual information (MI) was used to assess the global changes in geometric distortions. The effects of parallel imaging techniques on random noise and reconstruction artifacts were evaluated by placing 26 regions of interest and computing the standard deviation of the apparent diffusion coefficient and fractional anisotropy along with the error of fitting the data to the diffusion model (residual error). The larger positive values of the mutual information index with increasing R confirmed the anticipated decrease in distortions. Further, the MI index of GRAPPA sequences for a given R factor was larger than that of the corresponding mSENSE images. The residual error was lowest in the images acquired without parallel imaging and, among the parallel reconstruction methods, the R = 2 acquisitions had the least error. The standard deviation, accuracy, and reproducibility of the apparent diffusion coefficient and fractional anisotropy in homogeneous tissue regions showed that GRAPPA acquired with R = 2 had the least amount of systematic and random noise; of these, significant differences with mSENSE, R = 2, were found only for the fractional anisotropy index. Evaluation of the current implementation of parallel reconstruction algorithms identified GRAPPA acquired with R = 2 as optimal for diffusion tensor imaging.
Johansson, Johannes; Wårdell, Karin; Hemm, Simone
2018-01-01
The success of deep brain stimulation (DBS) relies primarily on the localization of the implanted electrode. Its final position can be chosen based on the results of intraoperative microelectrode recording (MER) and stimulation tests. The optimal position often differs from the final one selected for chronic stimulation with the DBS electrode. The aim of the study was to investigate, using finite element method (FEM) modeling and simulations, whether lead design, electrical setup, and operating modes induce differences in electric field (EF) distribution and in consequence, the clinical outcome. Finite element models of a MER system and a chronic DBS lead were developed. Simulations of the EF were performed for homogenous and patient-specific brain models to evaluate the influence of grounding (guide tube vs. stimulator case), parallel MER leads, and non-active DBS contacts. Results showed that the EF is deformed depending on the distance between the guide tube and stimulating contact. Several parallel MER leads and the presence of the non-active DBS contacts influence the EF distribution. The DBS EF volume can cover the intraoperatively produced EF, but can also extend to other anatomical areas. In conclusion, EF deformations between stimulation tests and DBS should be taken into consideration as they can alter the clinical outcome. PMID:29415442
He, Guanglin; Wang, Zheng; Wang, Mengge; Luo, Tao; Liu, Jing; Zhou, You; Gao, Bo; Hou, Yiping
2018-06-04
Ancestry inference based on single nucleotide polymorphisms (SNPs) with marked allele frequency differences among diverse populations (called ancestry-informative SNPs, AISNPs) has developed rapidly with the technological advancement of massively parallel sequencing (MPS). Despite a decade of exploration and broad public interest in the peopling of East Asia, the genetic landscape of Chinese Silk Road populations based on AISNPs is still little known. In this work, 206 unrelated individuals from the Chinese Uyghur and Hui populations were genotyped with 165 AISNPs (the Precision ID Ancestry Panel) using the Ion Torrent PGM system. The ethnic origins, population structures and genetic relationships of the two investigated populations were subsequently investigated. The 165-AISNP panel not only can differentiate the Uyghur and Hui populations but also has potential applications in individual identification. Comprehensive population comparisons and admixture estimates demonstrated a markedly higher European-related ancestry in Uyghurs (36.30%) than in Huis (3.66%). Overall, the Precision ID Ancestry Panel can provide good resolution at the intercontinental level, but has limitations for genetically homogeneous populations, such as the Hui and Han. Additional population-specific AISNPs remain necessary to achieve better resolution within geographically proximate populations in East Asia. This article is protected by copyright. All rights reserved.
Fantz, U; Franzen, P; Kraus, W; Falter, H D; Berger, M; Christ-Koch, S; Fröschle, M; Gutser, R; Heinemann, B; Martens, C; McNeely, P; Riedl, R; Speth, E; Wünderlich, D
2008-02-01
The international fusion experiment ITER requires for plasma heating and current drive a neutral beam injection system based on negative hydrogen ion sources operating at 0.3 Pa. The ion source must deliver a D⁻ current of 40 A for up to 1 h with an accelerated current density of 200 A/m² and a ratio of coextracted electrons to ions below 1. The extraction area is 0.2 m², from an aperture array with an envelope of 1.5 × 0.6 m². A high-power rf-driven negative ion source has been successfully developed at the Max-Planck Institute for Plasma Physics (IPP) at three test facilities in parallel. Current densities of 330 and 230 A/m² have been achieved for hydrogen and deuterium, respectively, at a pressure of 0.3 Pa and an electron/ion ratio below 1 for a small extraction area (0.007 m²) and short pulses (<4 s). In the long-pulse experiment, equipped with an extraction area of 0.02 m², the pulse length has been extended to 3600 s. A large rf source, with the width and half the height of the ITER source but without an extraction system, is intended to demonstrate the size scaling and plasma homogeneity of rf ion sources. The source now operates routinely. First results on plasma homogeneity obtained from optical emission spectroscopy and Langmuir probes are very promising. Based on the success of the IPP development program, the high-power rf-driven negative ion source was recently chosen for the ITER beam systems in the ITER design review process.
Distributed multitasking ITS with PVM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, W.C.; Halbleib, J.A. Sr.
1995-12-31
Advances in computer hardware and communication software have made it possible to perform parallel-processing computing on a collection of desktop workstations. For many applications, multitasking on a cluster of high-performance workstations has achieved performance comparable to or better than that on a traditional supercomputer. From the point of view of cost-effectiveness, it also allows users to exploit available but unused computational resources and thus achieve a higher performance-to-cost ratio. Monte Carlo calculations are inherently parallelizable because the individual particle trajectories can be generated independently, with minimal need for interprocessor communication. Furthermore, the number of particle histories that can be generated in a given amount of wall-clock time is nearly proportional to the number of processors in the cluster. This is an important fact because the inherent statistical uncertainty in any Monte Carlo result decreases as the number of histories increases. For these reasons, researchers have expended considerable effort to take advantage of different parallel architectures for a variety of Monte Carlo radiation transport codes, often with excellent results. The initial interest in this work was sparked by the multitasking capability of the MCNP code on a cluster of workstations using the Parallel Virtual Machine (PVM) software. On a 16-machine IBM RS/6000 cluster, it has been demonstrated that MCNP runs ten times as fast as on a single-processor CRAY YMP. In this paper, we summarize the implementation of a similar multitasking capability for the coupled electron-photon transport code system, the Integrated TIGER Series (ITS), and the evaluation of two load-balancing schemes for homogeneous and heterogeneous networks.
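The parallelization idea described above, independent histories farmed out to workers and scores summed at the end, can be sketched in a few lines (Python's multiprocessing stands in for PVM message passing; the slab-transmission score is a toy model, not ITS physics):

```python
import numpy as np
from multiprocessing import Pool

def run_histories(args):
    """Score a batch of independent particle histories. A toy slab
    transmission estimate stands in for the real transport physics."""
    n, seed = args
    rng = np.random.default_rng(seed)
    # Toy model: exponential free paths through a slab of thickness 1.
    return np.count_nonzero(rng.exponential(scale=0.5, size=n) > 1.0)

if __name__ == "__main__":
    n_proc, n_total = 4, 1_000_000
    # Equal batches correspond to a static load-balancing scheme for a
    # homogeneous cluster; heterogeneous networks would weight batches
    # by machine speed.
    batches = [(n_total // n_proc, seed) for seed in range(n_proc)]
    with Pool(n_proc) as pool:
        transmitted = sum(pool.map(run_histories, batches))
    print("transmission fraction:", transmitted / n_total)
```

Because the batches are statistically independent, doubling the number of workers nearly doubles the histories per unit wall-clock time, which is the 1/sqrt(N) error argument made in the abstract.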
Impact of automatization in temperature series in Spain and comparison with the POST-AWS dataset
NASA Astrophysics Data System (ADS)
Aguilar, Enric; López-Díaz, José Antonio; Prohom Duran, Marc; Gilabert, Alba; Luna Rico, Yolanda; Venema, Victor; Auchmann, Renate; Stepanek, Petr; Brandsma, Theo
2016-04-01
Climate data records are more often than not affected by inhomogeneities. In particular, inhomogeneities introducing network-wide biases are sometimes related to changes happening almost simultaneously in an entire network. Relative homogenization is difficult in these cases, especially at the daily scale. A good example of this is the substitution of manual observations (MAN) by automatic weather stations (AWS). Parallel measurements (i.e. records taken at the same time with the old (MAN) and new (AWS) sensors) can provide an idea of the bias introduced and help to evaluate the suitability of different correction approaches. We present here a quality-controlled dataset compiled under the DAAMEC Project, comprising 46 stations across Spain and over 85,000 parallel measurements (AWS-MAN) of daily maximum and minimum temperature. We study the differences between both sensors and compare them with the available metadata to account for internal inhomogeneities. The differences between both systems vary considerably across stations, with patterns more related to their particular settings than to climatic or geographical reasons. The typical median biases (AWS-MAN) by station (within the interquartile range) lie between -0.2°C and 0.4°C for daily maximum temperature and between -0.4°C and 0.2°C for daily minimum temperature. These and other results are compared with a larger network, the dataset of the Parallel Observations Science Team of the International Surface Temperature Initiative (ISTI-POST), which comprises our stations as well as others from different countries in America, Asia and Europe.
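The per-station statistics quoted above come down to simple robust summaries of the daily AWS-MAN differences. A minimal sketch (the one-week record below is hypothetical; the real dataset holds years of daily values per station):

```python
import numpy as np

def station_bias(aws, man):
    """Median and interquartile range of daily AWS - MAN differences
    for one station's parallel record (equal-length arrays, deg C)."""
    d = np.asarray(aws) - np.asarray(man)
    q1, med, q3 = np.percentile(d, [25, 50, 75])
    return med, (q1, q3)

# Hypothetical one-week parallel record of daily maximum temperature.
aws = [21.4, 19.8, 22.1, 20.5, 18.9, 23.0, 21.7]
man = [21.2, 19.9, 21.8, 20.4, 19.1, 22.7, 21.5]
print(station_bias(aws, man))
```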
Parallel processing and expert systems
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Lau, Sonie
1991-01-01
Whether it be monitoring the thermal subsystem of Space Station Freedom or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not guarantee that real-time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.
Effect of regional slope on drainage networks
NASA Astrophysics Data System (ADS)
Phillips, Loren F.; Schumm, S. A.
1987-09-01
Drainage networks that develop under conditions of no structural control and homogeneous lithology are generally dendritic, depending upon the shape and inclination of the surface on which they form. An experimental study was designed to investigate the effect of an increase of slope on the evolution and development of dendritic drainage patterns. As slope steepens, the pattern changes from dendritic at 1% slope, to subdendritic at 2%, to subparallel at 3%, to parallel at 5% and higher. The change from a dendritic-type pattern to a parallel-type pattern occurs at a low slope, between 2% and 3%, and primary channel junction angles decrease abruptly from about 60° to 43°.
Machinability of some dentin simulating materials.
Möllersten, L
1985-01-01
Machinability in low speed drilling was investigated for pure aluminium, Frasaco teeth, ivory, plexiglass and human dentin. The investigation was performed in order to find a suitable test material for drilling experiments using paralleling instruments: a material simulating human dentin in terms of cuttability at low drilling speeds was sought. Tests were performed using a specially designed apparatus. Holes to a depth of 2 mm were drilled with a twist drill using a constant feeding force, and the time required was recorded. The machinability of the materials tested was determined by direct comparison of the drilling times. In terms of cuttability, aluminium, followed by ivory, was found to resemble human dentin most closely. The homogeneity of the materials tested was estimated by comparing drilling-time variances; aluminium, Frasaco teeth and plexiglass demonstrated better homogeneity than ivory and human dentin.
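The homogeneity comparison amounts to a test on drilling-time variances; a minimal sketch using a two-sided F-test, with made-up drilling times (the paper's raw data are not reproduced here):

```python
import numpy as np
from scipy import stats

# Hypothetical drilling times (s) for two materials; illustrative only.
aluminium = np.array([4.1, 3.9, 4.3, 4.0, 4.2])
ivory = np.array([5.0, 6.2, 4.4, 5.9, 4.8])

# Two-sided F-test on the variance ratio as a homogeneity comparison.
f = np.var(ivory, ddof=1) / np.var(aluminium, ddof=1)
dfn, dfd = ivory.size - 1, aluminium.size - 1
p = 2 * min(stats.f.cdf(f, dfn, dfd), stats.f.sf(f, dfn, dfd))
print(f"F = {f:.2f}, p = {p:.3f}")
```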
Partitioning in parallel processing of production systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oflazer, K.
1987-01-01
This thesis presents research on certain issues related to parallel processing of production systems. It first presents a parallel production system interpreter that has been implemented on a four-processor multiprocessor. This parallel interpreter is based on Forgy's OPS5 interpreter and exploits production-level parallelism in production systems. Runs on the multiprocessor system indicate that it is possible to obtain speed-up of around 1.7 in the match computation for certain production systems when productions are split into three sets that are processed in parallel. The next issue addressed is that of partitioning a set of rules to processors in a parallel interpreter with production-level parallelism, and the extent of additional improvement in performance. The partitioning problem is formulated and an algorithm for approximate solutions is presented. The thesis next presents a parallel processing scheme for OPS5 production systems that allows some redundancy in the match computation. This redundancy enables the processing of a production to be divided into units of medium granularity each of which can be processed in parallel. Subsequently, a parallel processor architecture for implementing the parallel processing algorithm is presented.
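The abstract does not reproduce the thesis's partitioning algorithm; as a generic illustration of the problem it formulates, here is a greedy longest-processing-time sketch that balances estimated per-production match costs across processors (the cost values are hypothetical):

```python
import heapq

def partition_productions(costs, n_procs):
    """Greedy LPT partition: assign each production (by estimated match
    cost) to the currently least-loaded processor. Returns a list of
    production-index sets, one per processor."""
    heap = [(0.0, p) for p in range(n_procs)]  # (load, processor id)
    heapq.heapify(heap)
    parts = [set() for _ in range(n_procs)]
    # Place the heaviest productions first.
    for idx in sorted(range(len(costs)), key=lambda i: -costs[i]):
        load, p = heapq.heappop(heap)
        parts[p].add(idx)
        heapq.heappush(heap, (load + costs[idx], p))
    return parts

# Hypothetical match-cost estimates for 8 productions on 3 processors.
print(partition_productions([5.0, 3.0, 8.0, 2.0, 7.0, 1.0, 4.0, 6.0], 3))
```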
The Galley Parallel File System
NASA Technical Reports Server (NTRS)
Nieuwejaar, Nils; Kotz, David
1996-01-01
As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.
Jinno, Naoya; Hashimoto, Masahiko; Tsukagoshi, Kazuhiko
2011-01-01
A capillary chromatography system has been developed based on the tube radial distribution of the carrier solvents using an open capillary tube and a water-acetonitrile-ethyl acetate mixture carrier solution. This tube radial distribution chromatography (TRDC) system works under laminar flow conditions. In this study, a phase diagram for the ternary mixture carrier solvents of water, acetonitrile, and ethyl acetate was constructed. The phase diagram that included a boundary curve between homogeneous and heterogeneous solutions was considered together with the component ratios of the solvents in the homogeneous carrier solutions required for the TRDC system. It was found that the TRDC system performed well with homogeneous solutions having component ratios of the solvents that were positioned near the homogeneous-heterogeneous solution boundary of the phase diagram. For preparing the carrier solutions of water-hydrophilic/hydrophobic organic solvents for the TRDC system, we used for the first time methanol, ethanol, 1,4-dioxane, and 1-propanol, instead of acetonitrile (hydrophilic organic solvent), as well as chloroform and 1-butanol, instead of ethyl acetate (hydrophobic organic solvent). The homogeneous ternary mixture carrier solutions were prepared near the homogeneous-heterogeneous solution boundary. Analyte mixtures of 2,6-naphthalenedisulfonic acid and 1-naphthol were separated with the TRDC system using these homogeneous ternary mixture carrier solutions. The pressure change in the capillary tube under laminar flow conditions might alter the carrier solution from homogeneous in the batch vessel to heterogeneous, thus affecting the tube radial distribution of the solvents in the capillary tube.
NASA Astrophysics Data System (ADS)
Guo, L.; Huang, H.; Gaston, D.; Redden, G. D.; Fox, D. T.; Fujita, Y.
2010-12-01
Inducing mineral precipitation in the subsurface is one potential strategy for immobilizing trace metal and radionuclide contaminants. Generating mineral precipitates in situ can be achieved by manipulating chemical conditions, typically through injection or in situ generation of reactants. How these reactants transport, mix and react within the medium controls the spatial distribution and composition of the resulting mineral phases. Multiple processes, including fluid flow, dispersive/diffusive transport of reactants, biogeochemical reactions and changes in porosity-permeability, are tightly coupled over a number of scales. Numerical modeling can be used to investigate the nonlinear coupling effects of these processes, which are quite challenging to explore experimentally. Many subsurface reactive transport simulators employ a de-coupled or operator-splitting approach where transport equations and batch chemistry reactions are solved sequentially. However, such an approach has limited applicability for biogeochemical systems with fast kinetics and strong coupling between chemical reactions and medium properties. A massively parallel, fully coupled, fully implicit Reactive Transport simulator (referred to as "RAT") based on a parallel multi-physics object-oriented simulation framework (MOOSE) has been developed at the Idaho National Laboratory. Within this simulator, systems of transport and reaction equations can be solved simultaneously in a fully coupled, fully implicit manner using the Jacobian Free Newton-Krylov (JFNK) method, with additional advanced computing capabilities such as (1) physics-based preconditioning for solution convergence acceleration, (2) massively parallel computing and scalability, and (3) adaptive mesh refinement for 2D and 3D structured and unstructured meshes. The simulator was first tested against analytical solutions, then applied to simulating induced calcium carbonate mineral precipitation in 1D columns and 2D flow cells as analogs to homogeneous and heterogeneous porous media, respectively. In 1D columns, calcium carbonate mineral precipitation was driven by urea hydrolysis catalyzed by urease enzyme, and in 2D flow cells, calcium carbonate mineral forming reactants were injected sequentially, forming migrating reaction fronts that are typically highly nonuniform. The RAT simulation results for the spatial and temporal distributions of precipitates, reaction rates and major species in the system, and also for changes in porosity and permeability, were compared to both laboratory experimental data and computational results obtained using other reactive transport simulators. The comparisons demonstrate the ability of RAT to simulate complex nonlinear systems and the advantages of fully coupled approaches over de-coupled methods for accurate simulation of complex, dynamic processes such as engineered mineral precipitation in subsurface environments.
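RAT itself is not shown in this abstract, but the solver family it names is standard; a toy fully implicit reaction-diffusion step solved with SciPy's Jacobian-free Newton-Krylov method illustrates the approach (the grid, coefficients and equation below are assumptions, not the RAT model):

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy fully implicit Euler step for du/dt = D*u_xx - k*u**2 on a 1D grid,
# solved matrix-free with Newton-Krylov (the same solver family as RAT).
n, dt, D, k = 50, 1e-3, 1.0, 10.0
dx = 1.0 / (n - 1)
u_old = np.exp(-100 * (np.linspace(0, 1, n) - 0.5) ** 2)

def residual(u):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    r = (u - u_old) / dt - D * lap + k * u**2
    r[0], r[-1] = u[0], u[-1]          # Dirichlet boundaries u = 0
    return r

u_new = newton_krylov(residual, u_old, method="lgmres")
print(u_new.max())
```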
A network approach to decentralized coordination of energy production-consumption grids
Arenas, Alex
2018-01-01
Energy grids are facing a relatively new paradigm consisting in the formation of local distributed energy sources and loads that can operate in parallel, independently from the main power grid (usually called microgrids). One of the main challenges in the management of microgrid-like networks is that of self-adapting to production and demand in a decentralized, coordinated way. Here, we propose a stylized model that makes it possible to predict analytically the coordination of the elements in the network, depending on the network topology. Surprisingly, almost global coordination is attained when users interact locally, with a small neighborhood, instead of the obvious but more costly all-to-all coordination. We compute analytically the optimal number of coordinated users in random homogeneous networks. The proposed methodology opens a new way of confronting the analysis of energy demand-side management in networked systems. PMID:29364962
2016-05-31
In addition to being used for off‐road mobility studies, Chrono is being used by UC‐San Diego for the motion of molecules as well as by NASA ... gov . labs. This effort has continued as a series of twice a year meetings with a continually increasing number of participants. We are well... NASA , Japan Aerospace Exploration Agency, Caterpillar, P&H Mining, MSC.Software, Simertis Gmbh, BAE Systems, Eaton Corporation, Rescale
NASA Astrophysics Data System (ADS)
Okita, Shin; Verestek, Wolfgang; Sakane, Shinji; Takaki, Tomohiro; Ohno, Munekazu; Shibuta, Yasushi
2017-09-01
Continuous processes of homogeneous nucleation, solidification and grain growth are spontaneously achieved from an undercooled iron melt without any phenomenological parameter in the molecular dynamics (MD) simulation with 12 million atoms. The nucleation rate at the critical temperature is directly estimated from the atomistic configuration by cluster analysis to be of the order of 10^34 m^-3 s^-1. Moreover, the time evolution of the grain size distribution during grain growth is obtained by the combination of Voronoi and cluster analyses. The grain growth exponent is estimated to be around 0.3 from the geometric average of the grain size distribution. Comprehensive understanding of kinetic properties during continuous processes is achieved in the large-scale MD simulation by utilizing the high parallel efficiency of a graphics processing unit (GPU), shedding light on the fundamental aspects of production processes of materials from the atomistic viewpoint.
Electromagnetic waves in a model with Chern-Simons potential.
Pis'mak, D Yu; Pis'mak, Yu M; Wegner, F J
2015-07-01
We investigated the appearance of Chern-Simons terms in electrodynamics at the surface or interface of materials. The requirements of locality, gauge invariance, and renormalizability are imposed in this model. Scattering and reflection of electromagnetic waves in three different homogeneous layers of media are determined. Snell's law is preserved; however, the transmission and reflection coefficients depend on the strength of the Chern-Simons interaction (connected with the Hall conductance), and parallel and perpendicular components are mixed.
Selecting for Function: Solution Synthesis of Magnetic Nanopropellers
2013-01-01
We show that we can select magnetically steerable nanopropellers from a set of carbon coated aggregates of magnetic nanoparticles using weak homogeneous rotating magnetic fields. The carbon coating can be functionalized, enabling a wide range of applications. Despite their arbitrary shape, all nanostructures propel parallel to the vector of rotation of the magnetic field. We use a simple theoretical model to find experimental conditions to select nanopropellers which are predominantly smaller than previously published ones. PMID:24127909
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarkar, M.; Mookerjea, S.
1986-05-01
Incorporation of (14C)-mannose into dolichol phosphate mannose, dolichol pyrophosphate oligosaccharide and N-linked glycoproteins in cultured hepatocytes was increased by dexamethasone. Nucleotide pyrophosphatases were measured to investigate possible control of glycosylation by the nucleotide sugar pools. Dexamethasone caused an approximately 2-fold increase of UDP-GlcNAc and GDP-Man pyrophosphatase activity, which is evident as early as 4 hr and increases up to 12 hr of incubation. The Km values for UDP-GlcNAc and GDP-Man were 0.43 mM and 0.47 mM, respectively, in homogenate membrane, and the values remained unchanged by dexamethasone treatment. However, the Vmax of the enzymes was increased with both UDP-GlcNAc and GDP-Man. The broad pH optima of the enzymes (pH 8 to 10) indicated their alkaline nature. Mixing experiments of the cell homogenates from control and dexamethasone-treated cells showed that UDP-GlcNAc and GDP-Man pyrophosphatase activities were additive, which ruled out the presence of any activator or removal of any inhibitor due to dexamethasone. The parallel increase of nucleotide pyrophosphatase and the dolichol-linked pathway by dexamethasone does not support the possibility that stimulation of glycoprotein synthesis by dexamethasone is mediated by transfer of nucleotide sugars towards dolichol saccharides.
Mapping trace element distribution in fossil teeth and bone with LA-ICP-MS
NASA Astrophysics Data System (ADS)
Hinz, E. A.; Kohn, M. J.
2009-12-01
Trace element profiles were measured in fossil bones and teeth from the late Pleistocene (c. 25 ka) Merrell locality, Montana, USA, by using laser-ablation ICP-MS. Laser-ablation ICP-MS can collect element counts along predefined tracks on a sample’s surface using a constant ablation speed allowing for rapid spatial sampling of element distribution. Key elements analyzed included common divalent cations (e.g. Sr, Zn, Ba), a suite of REE (La, Ce, Nd, Sm, Eu, Yb), and U, in addition to Ca for composition normalization and standardization. In teeth, characteristic diffusion penetration distances for all trace elements are at least a factor of 4 greater in traverses parallel to the dentine-enamel interface (parallel to the growth axis of the tooth) than perpendicular to the interface. Multiple parallel traverses in sections parallel and perpendicular to the tooth growth axis were transformed into trace element maps, and illustrate greater uptake of all trace elements along the central axis of dentine compared to areas closer to enamel, or within the enamel itself. Traverses in bone extending from the external surface, through the thickness of cortical bone and several mm into trabecular bone show major differences in trace element uptake compared to teeth: U and Sr are homogeneous, whereas all REE show a kinked profile with high concentrations on outer surfaces that decrease by several orders of magnitude within a few mm inward. The Eu anomaly increases uniformly from the outer edge of bone inward, whereas the Ce anomaly decreases slightly. These observations point to major structural anisotropies in trace element transport and uptake during fossilization, yet transport and uptake of U and REE are not resolvably different. In contrast, transport and uptake of U in bone must proceed orders of magnitude faster than REE as U is homogeneous whereas REE exhibit strong gradients. The kinked REE profiles in bone unequivocally indicate differential transport rates, consistent with a double-medium diffusion model in which microdomains with slow diffusivities are bounded by fast-diffusing pathways.
Stan, Claudiu A; Tang, Sindy K Y; Bishop, Kyle J M; Whitesides, George M
2011-02-10
The freezing of water can initiate at electrically conducting electrodes kept at a high electric potential or at charged electrically insulating surfaces. The microscopic mechanisms of these phenomena are unknown, but they must involve interactions between water molecules and electric fields. This paper investigates the effect of uniform electric fields on the homogeneous nucleation of ice in supercooled water. Electric fields were applied across drops of water immersed in a perfluorinated liquid using a parallel-plate capacitor; the drops traveled in a microchannel and were supercooled until they froze due to the homogeneous nucleation of ice. The distribution of freezing temperatures of drops depended on the rate of nucleation of ice, and the sensitivity of measurements allowed detection of changes by a factor of 1.5 in the rate of nucleation. Sinusoidal alternation of the electric field at frequencies from 3 to 100 kHz prevented free ions present in water from screening the electric field in the bulk of drops. Uniform electric fields in water with amplitudes up to (1.6 ± 0.4) × 10^5 V/m neither enhanced nor suppressed the homogeneous nucleation of ice. Estimations based on thermodynamic models suggest that fields in the range of 10^7-10^8 V/m might cause an observable increase in the rate of nucleation.
Finite-time consensus for controlled dynamical systems in network
NASA Astrophysics Data System (ADS)
Zoghlami, Naim; Mlayeh, Rhouma; Beji, Lotfi; Abichou, Azgal
2018-04-01
The key challenges in networked dynamical systems are component heterogeneities, nonlinearities, and the high dimension of the formulated state vector. In this paper, the emphasis is on two classes of networked systems covering most controlled driftless systems as well as systems with drift. For each model structure, defining homogeneous or heterogeneous multi-system behaviour, we derive protocols with sufficient conditions that lead to finite-time consensus. For the networking topology, we make use of fixed directed and undirected graphs. To prove our approaches, finite-time stability theory and Lyapunov methods are used. As illustrative examples, homogeneous multi-unicycle kinematics and homogeneous/heterogeneous multi-second-order dynamics in networks are studied.
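A minimal sketch of one standard finite-time consensus protocol for single integrators (the sign-power form with exponent between 0 and 1); the paper's own protocols are not reproduced in the abstract, so the graph and gains below are illustrative assumptions:

```python
import numpy as np

# Standard finite-time consensus protocol on an undirected graph:
#   xdot_i = sum_j a_ij * sign(x_j - x_i) * |x_j - x_i|**alpha,  0 < alpha < 1.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # ring of 4 agents
x = np.array([1.0, -2.0, 0.5, 3.0])
alpha, dt = 0.5, 1e-3

for _ in range(20000):                       # explicit Euler integration
    d = x[None, :] - x[:, None]              # d[i, j] = x_j - x_i
    x = x + dt * (A * np.sign(d) * np.abs(d) ** alpha).sum(axis=1)

print(x)  # all agents reach the average state in finite time
```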
Cooperative storage of shared files in a parallel computing system with dynamic block size
Bent, John M.; Faibish, Sorin; Grider, Gary
2015-11-10
Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
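The example rule quoted in the text (block size = total amount of data divided by the number of parallel processes) can be sketched directly; the function name and the remainder-handling convention are assumptions:

```python
def dynamic_blocks(total_bytes, n_procs):
    """Example rule from the text: block size = total data / number of
    processes (any remainder is assumed to go to the last writer).
    Returns the (offset, length) each process writes to the shared object."""
    block = total_bytes // n_procs
    layout = []
    for rank in range(n_procs):
        length = block if rank < n_procs - 1 else total_bytes - block * (n_procs - 1)
        layout.append((rank * block, length))
    return layout

# A 10 MB shared object written by 4 parallel processes.
for rank, (off, ln) in enumerate(dynamic_blocks(10 * 2**20, 4)):
    print(f"rank {rank}: offset={off}, length={ln}")
```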
Accelerating DNA analysis applications on GPU clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Villa, Oreste
DNA analysis is an emerging application of high performance bioinformatics. Modern sequencing machinery is able to provide, in a few hours, large input streams of data which need to be matched against exponentially growing databases of known fragments. The ability to recognize these patterns effectively and quickly may allow extending the scale and the reach of the investigations performed by biology scientists. Aho-Corasick is an exact, multiple pattern matching algorithm often at the base of this application. High performance systems are a promising platform to accelerate this algorithm, which is computationally intensive but also inherently parallel. Nowadays, high performance systems also include heterogeneous processing elements, such as Graphic Processing Units (GPUs), to further accelerate parallel algorithms. Unfortunately, the Aho-Corasick algorithm exhibits large performance variability, depending on the size of the input streams, on the number of patterns to search for and on the number of matches, and poses significant challenges for current high performance software and hardware implementations. An adequate mapping of the algorithm onto the target architecture, coping with the limits of the underlying hardware, is required to reach the desired high throughputs. Load balancing also plays a crucial role when considering the limited bandwidth among the nodes of these systems. In this paper we present an efficient implementation of the Aho-Corasick algorithm for high performance clusters accelerated with GPUs. We discuss how we partitioned and adapted the algorithm to fit the Tesla C1060 GPU and then present an MPI based implementation for a heterogeneous high performance cluster. We compare this implementation to MPI and MPI with pthreads based implementations for a homogeneous cluster of x86 processors, discussing the stability vs. the performance and the scaling of the solutions, taking into consideration aspects such as the bandwidth among the different nodes.
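For reference, a compact CPU version of the Aho-Corasick automaton the paper accelerates (trie construction, BFS failure links, streaming search); this is a generic textbook implementation, not the GPU or MPI code:

```python
from collections import deque

def build_automaton(patterns):
    """Build Aho-Corasick goto/failure/output tables from the patterns."""
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:                      # 1) trie of all patterns
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto[s][ch] = len(goto)
                goto.append({}); fail.append(0); out.append(set())
            s = goto[s][ch]
        out[s].add(pat)
    queue = deque(goto[0].values())           # 2) BFS for failure links
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]            # inherit matches via failure
    return goto, fail, out

def search(text, tables):
    """Stream the text through the automaton; report (position, pattern)."""
    goto, fail, out = tables
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        hits.extend((i - len(p) + 1, p) for p in out[s])
    return hits

tables = build_automaton(["he", "she", "his", "hers"])
print(search("ushers", tables))  # e.g. [(1, 'she'), (2, 'he'), (2, 'hers')]
```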
Automatic Management of Parallel and Distributed System Resources
NASA Technical Reports Server (NTRS)
Yan, Jerry; Ngai, Tin Fook; Lundstrom, Stephen F.
1990-01-01
Viewgraphs on automatic management of parallel and distributed system resources are presented. Topics covered include: parallel applications; intelligent management of multiprocessing systems; performance evaluation of parallel architecture; dynamic concurrent programs; compiler-directed system approach; lattice gaseous cellular automata; and sparse matrix Cholesky factorization.
Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array
NASA Astrophysics Data System (ADS)
Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul
2008-04-01
This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
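A minimal sketch of the work farm pattern described above, using Python's multiprocessing in place of the MPPA toolchain: a parallel set of workers with one ordered input stream and one ordered output stream:

```python
from multiprocessing import Pool

def worker(item):
    """One 'worker object': transform a unit of the input stream.
    Stand-in computation; on the MPPA each worker runs on a RISC processor."""
    return item * item

if __name__ == "__main__":
    input_stream = range(16)
    with Pool(processes=4) as farm:           # parallel set of workers
        # imap preserves stream order: one input and one output stream.
        output_stream = list(farm.imap(worker, input_stream))
    print(output_stream)
```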
Pohlert, Thorsten; Hillebrand, Gudrun; Breitung, Vera
2011-06-01
This study focusses on the effect of sampling techniques for suspended matter in stream water on subsequent particle-size distribution and concentrations of total organic carbon and selected persistent organic pollutants. The key questions are whether differences between the sampling techniques are due to the separation principle of the devices or due to the difference between time-proportional versus integral sampling. Several multivariate homogeneity tests were conducted on an extensive set of field-data that covers the period from 2002 to 2007, when up to three different sampling techniques were deployed in parallel at four monitoring stations of the River Rhine. The results indicate homogeneity for polychlorinated biphenyls, but significant effects due to the sampling techniques on particle-size, organic carbon and hexachlorobenzene. The effects can be amplified depending on the site characteristics of the monitoring stations.
Peng, Jie; Dong, Wu-Jun; Li, Ling; Xu, Jia-Ming; Jin, Du-Jia; Xia, Xue-Jun; Liu, Yu-Ling
2015-12-01
The effects of different high-pressure homogenization energy input parameters on the mean diameter droplet size (MDS) and on droplets larger than 5 μm in lipid injectable emulsions were evaluated. All emulsions were prepared at different water bath temperatures or at different rotation speeds and rotor-stator system times, and using different homogenization pressures and numbers of high-pressure system recirculations. The MDS and polydispersity index (PI) value of the emulsions were determined using the dynamic light scattering (DLS) method, and large-diameter tail assessments were performed using the light-obscuration/single particle optical sensing (LO/SPOS) method. Using 1000 bar homogenization pressure and seven recirculations, the energy input parameters related to the rotor-stator system had no effect on the final particle size results. When rotor-stator system energy input parameters are fixed, homogenization pressure and recirculation affect mean particle size and large-diameter droplets. Particle size decreases with increasing homogenization pressure from 400 bar to 1300 bar when homogenization recirculation is fixed; when the homogenization pressure is fixed at 1000 bar, both the MDS and the percent of fat droplets exceeding 5 μm (PFAT5) decrease with increasing homogenization recirculations: the MDS dropped to 173 nm after five cycles and remained at that level, while the volume-weighted PFAT5 dropped to 0.038% after three cycles, so the "plateau" of the MDS appears later than that of PFAT5, and the optimal particle size is produced when both remain at a plateau. Excess homogenization recirculation, such as nine cycles at 1000 bar, may increase PFAT5 to 0.060% rather than decrease it; therefore, the high-pressure homogenization procedure is the key factor affecting the particle size distribution of emulsions. Varying storage conditions (4-25°C) also influenced particle size, especially PFAT5. Copyright © 2015. Published by Elsevier B.V.
Efficient characterisation of large deviations using population dynamics
NASA Astrophysics Data System (ADS)
Brewer, Tobias; Clark, Stephen R.; Bradford, Russell; Jack, Robert L.
2018-05-01
We consider population dynamics as implemented by the cloning algorithm for analysis of large deviations of time-averaged quantities. We use the simple symmetric exclusion process with periodic boundary conditions as a prototypical example and investigate the convergence of the results with respect to the algorithmic parameters, focussing on the dynamical phase transition between homogeneous and inhomogeneous states, where convergence is relatively difficult to achieve. We discuss how the performance of the algorithm can be optimised, and how it can be efficiently exploited on parallel computing platforms.
Leung, K.N.
1996-10-08
An ion implantation device for creating a large diameter, homogeneous, ion beam is described, as well as a method for creating same, wherein the device is characterized by extraction of a diverging ion beam and its conversion by ion beam optics to an essentially parallel ion beam. The device comprises a plasma or ion source, an anode and exit aperture, an extraction electrode, a divergence-limiting electrode and an acceleration electrode, as well as the means for connecting a voltage supply to the electrodes. 6 figs.
Leung, Ka-Ngo
1996-01-01
An ion implantation device for creating a large diameter, homogeneous, ion beam is described, as well as a method for creating same, wherein the device is characterized by extraction of a diverging ion beam and its conversion by ion beam optics to an essentially parallel ion beam. The device comprises a plasma or ion source, an anode and exit aperture, an extraction electrode, a divergence-limiting electrode and an acceleration electrode, as well as the means for connecting a voltage supply to the electrodes.
Simple and flexible SAS and SPSS programs for analyzing lag-sequential categorical data.
O'Connor, B P
1999-11-01
This paper describes simple and flexible programs for analyzing lag-sequential categorical data, using SAS and SPSS. The programs read a stream of codes and produce a variety of lag-sequential statistics, including transitional frequencies, expected transitional frequencies, transitional probabilities, adjusted residuals, z values, Yule's Q values, likelihood ratio tests of stationarity across time and homogeneity across groups or segments, transformed kappas for unidirectional dependence, bidirectional dependence, parallel and nonparallel dominance, and significance levels based on both parametric and randomization tests.
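A Python analogue of the first outputs listed (lag-1 transitional frequencies, transitional probabilities, and expected frequencies under independence); this is not the SAS/SPSS code itself, and the code stream is made up:

```python
import numpy as np

def transitions(codes, lag=1):
    """Lag-sequential tallies: observed transitional frequencies,
    transitional probabilities, and expected frequencies under
    independence. Assumes every state occurs at least once as an
    antecedent (otherwise a row of probabilities is undefined)."""
    states = sorted(set(codes))
    idx = {s: i for i, s in enumerate(states)}
    freq = np.zeros((len(states), len(states)))
    for a, b in zip(codes[:-lag], codes[lag:]):
        freq[idx[a], idx[b]] += 1
    prob = freq / freq.sum(axis=1, keepdims=True)
    expected = np.outer(freq.sum(axis=1), freq.sum(axis=0)) / freq.sum()
    return states, freq, prob, expected

stream = list("ABABBACABBA")                  # made-up behavioural codes
states, freq, prob, expected = transitions(stream)
print(states); print(freq); print(prob.round(2))
```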
All-fiber Devices Based on Photonic Crystal Fibers with Integrated Electrodes
NASA Astrophysics Data System (ADS)
Chesini, Giancarlo; Cordeiro, Cristiano M. B.; de Matos, Christiano J. S.; Fokine, Michael; Carvalho, Isabel C. S.; Knight, Jonathan C.
2008-10-01
A special kind of microstructured optical fiber was proposed and manufactured in which, in addition to the holey region (solid core and silica-air cladding), the fiber also has two large holes for electrode insertion. Bi-Sn and Au-Sn alloys were selectively inserted into those holes, forming two parallel, continuous and homogeneous internal electrodes. We demonstrated the production of a monolithic device and its use to externally control some of the guidance properties (e.g. polarization) of the fiber.
NASA Astrophysics Data System (ADS)
Ispulov, Nurlybek A.; Qadir, Abdul; Shah, M. A.; Seythanova, Ainur K.; Kissikov, Tanat G.; Arinov, Erkin
2016-03-01
The thermoelastic wave propagation in a tetragonal syngony anisotropic medium of classes 4 and 4/m, with heterogeneity along the z axis, has been investigated by employing the matrizant method. This medium has an axis of second-order symmetry parallel to the z axis. In the case of fourth-order matrix coefficients, the problems of wave refraction and reflection at the interface of homogeneous anisotropic thermoelastic media are solved analytically.
Pawar, Shashikant S; Arakeri, Jaywant H
2016-08-01
Frequency spectra obtained from the measurements of light intensity and angle of arrival (AOA) of parallel laser light propagating through the axially homogeneous, axisymmetric buoyancy-driven turbulent flow at high Rayleigh numbers in a long (length-to-diameter ratio of about 10) vertical tube are reported. The flow is driven by an unstable density difference created across the tube ends using brine and fresh water. The highest Rayleigh number is about 8×10^9. The aim of the present work is to find whether the conventional Obukhov-Corrsin scaling or Bolgiano-Obukhov (BO) scaling is obtained for the intensity and AOA spectra in the case of light propagation in a buoyancy-driven turbulent medium. Theoretical relations for the frequency spectra of log amplitude and AOA fluctuations developed for homogeneous isotropic turbulent media are modified for the buoyancy-driven flow in the present case to obtain the asymptotic scalings for the high and low frequency ranges. For low frequencies, the spectra of intensity and vertical AOA fluctuations obtained from measurements follow BO scaling, while scaling for the spectra of horizontal AOA fluctuations shows a small departure from BO scaling.
Crowded visual search in children with normal vision and children with visual impairment.
Huurneman, Bianca; Cox, Ralf F A; Vlaskamp, Björn N S; Boonstra, F Nienke
2014-03-01
This study investigates the influence of oculomotor control, crowding, and attentional factors on visual search in children with normal vision ([NV], n=11), children with visual impairment without nystagmus ([VI-nys], n=11), and children with VI with accompanying nystagmus ([VI+nys], n=26). Exclusion criteria for children with VI were: multiple impairments and visual acuity poorer than 20/400 or better than 20/50. Three search conditions were presented: a row with homogeneous distractors, a matrix with homogeneous distractors, and a matrix with heterogeneous distractors. Element spacing was manipulated in 5 steps from 2 to 32 minutes of arc. Symbols were sized 2 times the threshold acuity to guarantee visibility for the VI groups. During simple row and matrix search with homogeneous distractors children in the VI+nys group were less accurate than children with NV at smaller spacings. Group differences were even more pronounced during matrix search with heterogeneous distractors. Search times were longer in children with VI compared to children with NV. The more extended impairments during serial search reveal greater dependence on oculomotor control during serial compared to parallel search. Copyright © 2014 Elsevier B.V. All rights reserved.
Performance of external and internal coil configurations for prostate investigations at 7 Tesla
Metzger, Gregory J.; van de Moortele, Pierre-Francois; Akgun, Can; Snyder, Carl J.; Moeller, Steen; Strupp, John; Andersen, Peter; Shrivastava, Devashish; Vaughan, Tommy; Ugurbil, Kamil; Adriany, Gregor
2010-01-01
Three different coil configurations were evaluated through simulation and experimentally to determine safe operating limits and evaluate subject size dependent performance for prostate imaging at 7 Tesla. The coils included a transceiver endorectal coil (trERC), a 16 channel transceiver external surface array (trESA) and a trESA combined with a receive-only ERC (trESA+roERC). While the transmit B1 (B1+) homogeneity was far superior for the trESA, the maximum achievable B1+ is subject size dependent and limited by transmit chain losses and amplifier performance. For the trERC, limitations in transmit homogeneity greatly compromised image quality and limited coverage of the prostate. Despite these challenges, the high peak B1+ close to the trERC and subject size independent performance provides potential advantages especially for spectroscopic localization where high bandwidth RF pulses are required. On the receive side, the combined trESA+roERC provided the highest SNR and improved homogeneity over the trERC resulting in better visualization of the prostate and surrounding anatomy. In addition, the parallel imaging performance of the trESA+roERC holds strong promise for diffusion weighted imaging and dynamic contrast enhanced MRI. PMID:20740657
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shemon, Emily R.; Smith, Micheal A.; Lee, Changho
2016-02-16
PROTEUS-SN is a three-dimensional, highly scalable, high-fidelity neutron transport code developed at Argonne National Laboratory. The code is applicable to all spectrum reactor transport calculations, particularly those in which a high degree of fidelity is needed either to represent spatial detail or to resolve solution gradients. PROTEUS-SN solves the second order formulation of the transport equation using the continuous Galerkin finite element method in space, the discrete ordinates approximation in angle, and the multigroup approximation in energy. PROTEUS-SN's parallel methodology permits the efficient decomposition of the problem by both space and angle, allowing large problems to run efficiently on hundreds of thousands of cores. PROTEUS-SN can also be used in serial or on smaller compute clusters (10's to 100's of cores) for smaller homogenized problems, although it is generally more computationally expensive than traditional homogenized methodology codes. PROTEUS-SN has been used to model partially homogenized systems, where regions of interest are represented explicitly and other regions are homogenized to reduce the problem size and required computational resources. PROTEUS-SN solves forward and adjoint eigenvalue problems and permits both neutron upscattering and downscattering. An adiabatic kinetics option has recently been included for performing simple time-dependent calculations in addition to standard steady state calculations. PROTEUS-SN handles void and reflective boundary conditions. Multigroup cross sections can be generated externally using the MC2-3 fast reactor multigroup cross section generation code or internally using the cross section application programming interface (API), which can treat the subgroup or resonance table libraries. PROTEUS-SN is written in Fortran 90 and also includes C preprocessor definitions. The code links against the PETSc, METIS, HDF5, and MPICH libraries. It optionally links against the MOAB library and is part of the SHARP multi-physics suite for coupled multi-physics analysis of nuclear reactors. This user manual describes how to set up a neutron transport simulation with the PROTEUS-SN code. A companion methodology manual describes the theory and algorithms within PROTEUS-SN.
File concepts for parallel I/O
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1989-01-01
The subject of input/output (I/O) was often neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes, and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations using multiple storage devices is proposed. Problem areas are also identified and discussed.
NASA Astrophysics Data System (ADS)
Baumgartner, D. J.; Pötzi, W.; Freislich, H.; Strutzmann, H.; Veronig, A. M.; Foelsche, U.; Rieder, H. E.
2017-06-01
In recent decades, automated sensors for sunshine duration (SD) measurements have been introduced in meteorological networks, thereby replacing traditional instruments, most prominently the Campbell-Stokes (CS) sunshine recorder. Parallel records of automated and traditional SD recording systems are rare. Nevertheless, such records are important to understand the differences/similarities in SD totals obtained with different instruments and how changes in monitoring device type affect the homogeneity of SD records. This study investigates the differences/similarities in parallel SD records obtained with a CS and two automated SD sensors between 2007 and 2016 at the Kanzelhöhe Observatory, Austria. Comparing individual records of daily SD totals, we find differences of both positive and negative sign, with smallest differences between the automated sensors. The larger differences between CS-derived SD totals and those from automated sensors can be attributed (largely) to the higher sensitivity threshold of the CS instrument. Correspondingly, the closest agreement among all sensors is found during summer, the time of year when sensitivity thresholds are least critical. Furthermore, we investigate the performance of various models to create the so-called sensor-type-equivalent (STE) SD records. Our analysis shows that regression models including all available data on daily (or monthly) time scale perform better than simple three- (or four-) point regression models. Despite general good performance, none of the considered regression models (of linear or quadratic form) emerges as the "optimal" model. Although STEs prove useful for relating SD records of individual sensors on daily/monthly time scales, this does not ensure that STE (or joint) records can be used for trend analysis.
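A minimal sketch of a sensor-type-equivalent record built by linear regression of Campbell-Stokes daily totals on automated-sensor totals, one of the model forms the study compares; the values are synthetic:

```python
import numpy as np

# Daily SD totals (hours) from the automated sensor and the
# Campbell-Stokes recorder (synthetic values for illustration).
sd_auto = np.array([2.1, 5.4, 8.9, 11.2, 0.7, 6.6, 9.8, 3.3])
sd_cs = np.array([1.8, 5.0, 8.6, 11.0, 0.3, 6.1, 9.5, 2.9])

# Linear sensor-type-equivalent model CS ~ a * AUTO + b; the quadratic
# form the study also considers is analogous (deg=2).
a, b = np.polyfit(sd_auto, sd_cs, deg=1)
ste = a * sd_auto + b                     # CS-equivalent record
rmse = np.sqrt(np.mean((ste - sd_cs) ** 2))
print(f"slope={a:.3f}, intercept={b:.3f}, RMSE={rmse:.3f} h")
```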
Fabricating fiber-reinforced composite posts.
Manhart, Jürgen
2011-03-01
Endodontic posts do not increase the strength of the remaining tooth structure in endodontically treated teeth. On the contrary, depending on the post design employed (tapered versus parallel-sided), the root can be weakened relative to the amount of tooth removed during preparation. In many cases, if there has been a high degree of damage to the clinical crown, conservative preparation for an anatomic tapered (biomimetic) post with the incorporation of a ferrule on solid tooth structure is necessary to protect the remaining root structure as well as for the long-term retention of the composite resin core and the definitive restoration. Adhesively luted endodontic posts reinforced with glass or quartz fiber lead to a more homogeneous stress distribution under load than rigid metal or zirconium oxide ceramic posts. Fiber-reinforced posts also possess advantageous optical properties over metal or metal oxide post systems. The clinician should realize that there are substantial differences in the mechanical loading capacity of the different fiber-reinforced endodontic posts and should be aware of such differences in order to research and select a suitable post system for use.
Collignon, Bertrand; Séguret, Axel; Halloy, José
2016-01-01
Collective motion is one of the most ubiquitous behaviours displayed by social organisms and has led to the development of numerous models. Recent advances in the understanding of sensory system and information processing by animals impels one to revise classical assumptions made in decisional algorithms. In this context, we present a model describing the three-dimensional visual sensory system of fish that adjust their trajectory according to their perception field. Furthermore, we introduce a stochastic process based on a probability distribution function to move in targeted directions rather than on a summation of influential vectors as is classically assumed by most models. In parallel, we present experimental results of zebrafish (alone or in group of 10) swimming in both homogeneous and heterogeneous environments. We use these experimental data to set the parameter values of our model and show that this perception-based approach can simulate the collective motion of species showing cohesive behaviour in heterogeneous environments. Finally, we discuss the advances of this multilayer model and its possible outcomes in biological, physical and robotic sciences. PMID:26909173
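The key modelling choice, drawing the new heading from a probability distribution over candidate directions rather than summing influence vectors, can be sketched generically; the weighting function below is a hypothetical stand-in for the perception-field scores:

```python
import numpy as np

rng = np.random.default_rng(0)

def next_heading(weights, angles):
    """Draw the new swimming direction from a probability distribution
    over candidate directions (the stochastic decision rule described
    above), instead of summing influence vectors. 'weights' is any
    non-negative score per candidate angle, e.g. derived from the
    perception field."""
    p = np.asarray(weights, dtype=float)
    p /= p.sum()
    return rng.choice(angles, p=p)

angles = np.linspace(-np.pi, np.pi, 72, endpoint=False)
# Hypothetical perception-based scores peaking toward a neighbour at 0 rad.
weights = np.exp(-0.5 * (angles / 0.6) ** 2)
print(next_heading(weights, angles))
```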
Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low level, intermediate level, and high level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.
Orthogonality Measurement for Homogenous Projects-Bases
ERIC Educational Resources Information Center
Ivan, Ion; Sandu, Andrei; Popa, Marius
2009-01-01
The homogenous projects-base concept is defined. Next, the necessary steps to create a homogenous projects-base are presented. A metric system is built, which then will be used for analyzing projects. The indicators which are meaningful for analyzing a homogenous projects-base are selected. The given hypothesis is experimentally verified. The…
Chen, Ning; Yu, Dejie; Xia, Baizhan; Liu, Jian; Ma, Zhengdong
2017-04-01
This paper presents a homogenization-based interval analysis method for the prediction of coupled structural-acoustic systems involving periodical composites and multi-scale uncertain-but-bounded parameters. In the structural-acoustic system, the macro plate structure is assumed to be composed of a periodically uniform microstructure. The equivalent macro material properties of the microstructure are computed using the homogenization method. By integrating the first-order Taylor expansion interval analysis method with the homogenization-based finite element method, a homogenization-based interval finite element method (HIFEM) is developed to solve a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters. The corresponding formulations of the HIFEM are deduced. A subinterval technique is also introduced into the HIFEM for higher accuracy. Numerical examples of a hexahedral box and an automobile passenger compartment are given to demonstrate the efficiency of the presented method for a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters.
Superfluid transition of homogeneous and trapped two-dimensional Bose gases.
Holzmann, Markus; Baym, Gordon; Blaizot, Jean-Paul; Laloë, Franck
2007-01-30
Current experiments on atomic gases in highly anisotropic traps present the opportunity to study in detail the low temperature phases of two-dimensional inhomogeneous systems. Although, in an ideal gas, the trapping potential favors Bose-Einstein condensation at finite temperature, interactions tend to destabilize the condensate, leading to a superfluid Kosterlitz-Thouless-Berezinskii phase with a finite superfluid mass density but no long-range order, as in homogeneous fluids. The transition in homogeneous systems is conveniently described in terms of dissociation of topological defects (vortex-antivortex pairs). However, trapped two-dimensional gases are more directly approached by generalizing the microscopic theory of the homogeneous gas. In this paper, we first derive, via a diagrammatic expansion, the scaling structure near the phase transition in a homogeneous system, and then study the effects of a trapping potential in the local density approximation. We find that a weakly interacting trapped gas undergoes a Kosterlitz-Thouless-Berezinskii transition from the normal state at a temperature slightly below the Bose-Einstein transition temperature of the ideal gas. The characteristic finite superfluid mass density of a homogeneous system just below the transition becomes strongly suppressed in a trapped gas.
Apparatus and methods for cooling and sealing rotary helical screw compressors
Fresco, A.N.
1997-08-05
In a compression system which incorporates a rotary helical screw compressor, and for any type of gas or refrigerant, the working liquid oil is atomized through nozzles suspended in, and parallel to, the suction gas flow, or alternatively the nozzles are mounted on the suction piping. In either case, the aim is to create positively a homogeneous mixture of oil droplets to maximize the effectiveness of the working liquid oil in improving the isothermal and volumetric efficiencies. The oil stream to be atomized may first be degassed at compressor discharge pressure by heating within a pressure vessel and recovering the energy added by using the outgoing oil stream to heat the incoming oil stream. The stripped gas is typically returned to the compressor discharge flow. In the preferred case, the compressor rotors both contain a hollow cavity through which working liquid oil is injected into channels along the edges of the rotors, thereby forming a continuous and positive seal between the rotor edges and the compressor casing. In the alternative method, working liquid oil is injected either in the same direction as the rotor rotation or counter to rotor rotation through channels in the compressor casing which are tangential to the rotor edges and parallel to the rotor center lines or alternatively the channel paths coincide with the helical path of the rotor edges. 14 figs.
Apparatus and methods for cooling and sealing rotary helical screw compressors
Fresco, Anthony N.
1997-01-01
In a compression system which incorporates a rotary helical screw compressor, and for any type of gas or refrigerant, the working liquid oil is atomized through nozzles suspended in, and parallel to, the suction gas flow, or alternatively the nozzles are mounted on the suction piping. In either case, the aim is to create positively a homogeneous mixture of oil droplets to maximize the effectiveness of the working liquid oil in improving the isothermal and volumetric efficiencies. The oil stream to be atomized may first be degassed at compressor discharge pressure by heating within a pressure vessel and recovering the energy added by using the outgoing oil stream to heat the incoming oil stream. The stripped gas is typically returned to the compressor discharge flow. In the preferred case, the compressor rotors both contain a hollow cavity through which working liquid oil is injected into channels along the edges of the rotors, thereby forming a continuous and positive seal between the rotor edges and the compressor casing. In the alternative method, working liquid oil is injected either in the same direction as the rotor rotation or counter to rotor rotation through channels in the compressor casing which are tangential to the rotor edges and parallel to the rotor centerlines or alternatively the channel paths coincide with the helical path of the rotor edges.
NASA Astrophysics Data System (ADS)
Colas, Laurent; Lu, Ling-Feng; Křivská, Alena; Jacquot, Jonathan; Hillairet, Julien; Helou, Walid; Goniche, Marc; Heuraux, Stéphane; Faudot, Eric
2017-02-01
We investigate theoretically how sheath radio-frequency (RF) oscillations relate to the spatial structure of the near RF parallel electric field E∥ emitted by ion cyclotron (IC) wave launchers. We use a simple model of slow wave (SW) evanescence coupled with direct current (DC) plasma biasing via sheath boundary conditions in a 3D parallelepiped filled with homogeneous cold magnetized plasma. Within a 'wide-sheath' asymptotic regime, valid for large-amplitude near RF fields, the RF part of this simple RF + DC model becomes linear: the sheath oscillating voltage V_RF at open field line boundaries can be re-expressed as a linear combination of individual contributions by every emitting point in the input field map. SW evanescence makes individual contributions all the larger as the wave emission point is located closer to the sheath walls. The decay of |V_RF| with the emission point/sheath poloidal distance involves the transverse SW evanescence length and the radial protrusion depth of lateral boundaries. The decay of |V_RF| with the emitter/sheath parallel distance is quantified as a function of the parallel SW evanescence length and the parallel connection length of open magnetic field lines. For realistic geometries and target SOL plasmas, poloidal decay occurs over a few centimeters. Typical parallel decay lengths for |V_RF| are found to be smaller than the IC antenna parallel extension. Oscillating sheath voltages at IC antenna side limiters are therefore mainly sensitive to E∥ emission by active or passive conducting elements near these limiters, as suggested by recent experimental observations. Parallel proximity effects could also explain why sheath oscillations persist with antisymmetric strap toroidal phasing, despite the parallel antisymmetry of the radiated field map. They could finally justify current attempts at reducing the RF fields induced near antenna boxes to attenuate sheath oscillations in their vicinity.
Performance of the Galley Parallel File System
NASA Technical Reports Server (NTRS)
Nieuwejaar, Nils; Kotz, David
1996-01-01
As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.
Rombouts, Steffi J E; Nijkamp, Maarten W; van Dijck, Willemijn P M; Brosens, Lodewijk A A; Konings, Maurits; van Hillegersberg, R; Borel Rinkes, Inne H M; Hagendoorn, Jeroen; Wittkampf, Fred H; Molenaar, I Quintus
2017-01-01
Irreversible electroporation (IRE) with needle electrodes is being explored as a treatment option in locally advanced pancreatic cancer. Several studies have shown promising results with IRE needles positioned around the tumor to achieve tumor ablation. Disadvantages are the technical difficulties of needle placement, the time needed to achieve tumor ablation, the risk of needle track seeding and, most importantly, the possible occurrence of postoperative pancreatic fistula via the needle tracks. The aim of this experimental study was to evaluate the feasibility of a new IRE technique using two parallel plate electrodes in a porcine model. Twelve healthy pigs underwent laparotomy. The pancreas was mobilized to enable positioning of the paddles. A standard monophasic external cardiac defibrillator was used to perform an ablation in 3 separate parts of the pancreas: either a single application of 50 J or 100 J, or a serial application of 4×50 J. After 6 hours, pancreatectomy was performed for histology and the pigs were terminated. Histology showed necrosis of pancreatic parenchyma with neutrophil influx in 5/12, 11/12 and 12/12 of the ablated areas at 50 J, 100 J, and 4×50 J, respectively. The electric current density threshold to achieve necrosis was 4.3, 5.1 and 3.4 A/cm², respectively. The ablation threshold was significantly lower for the serial than for the single applications (p = 0.003). The content of the ablated areas differed between the applications: areas treated with a single application of 50 J often contained vital areas without obvious necrosis, whereas half of the sections treated with 100 J showed small islands of normal-looking cells surrounded by necrosis, while all sections receiving 4×50 J showed a homogeneous necrotic lesion. Pancreatic tissue can be successfully ablated using two parallel paddles around the tissue. A serial application of 4×50 J was most effective in creating a homogeneous necrotic lesion.
NASA Astrophysics Data System (ADS)
Ying, Jia-ju; Chen, Yu-dan; Liu, Jie; Wu, Dong-sheng; Lu, Jun
2016-10-01
Misalignment of the binocular optical axes of a photoelectric instrument directly degrades the observation. A digital calibration system for binocular optical axis parallelism is designed. Based on the principle of optical axis calibration for binocular photoelectric instruments, the system scheme is designed and realized; it includes four modules: a multiband parallel light tube, an optical axis translation, an image acquisition system and the software system. According to the different characteristics of thermal infrared imagers and low-light-level night viewers, different algorithms are used to locate the center of the cross reticle, so that binocular optical axis parallelism calibration is realized for both low-light-level night viewers and thermal infrared imagers.
Software Design for Real-Time Systems on Parallel Computers: Formal Specifications.
1996-04-01
This research investigated the important issues related to the analysis and design of real-time systems targeted to parallel architectures. In particular, the software specification models for real-time systems on parallel architectures were evaluated. A survey of current formal methods for uniprocessor real-time system specifications was conducted to determine their extensibility in specifying real-time systems on parallel architectures.
Mie, Masayasu; Thuy, Ngo Phan Bich; Kobatake, Eiry
2012-03-07
A homogeneous immunoassay system was developed using fragmented Renilla luciferase (Rluc). The B domain of protein A was fused to two Rluc fragments. When complexes between an antibody and fragmented Rluc fusion proteins bind to target molecules, the Rluc fragments come into close proximity and the luminescence activity of fragmented Rluc is restored by complementation. As proof-of-principle, this fragmented Rluc system was used to detect E. coli homogeneously using an anti-E. coli antibody.
Sannicolo, Thomas; Charvin, Nicolas; Flandin, Lionel; Kraus, Silas; Papanastasiou, Dorina T; Celle, Caroline; Simonato, Jean-Pierre; Muñoz-Rojas, David; Jiménez, Carmen; Bellet, Daniel
2018-05-22
Electrical stability and homogeneity of silver nanowire (AgNW) networks are critical assets for increasing their robustness and reliability when integrated as transparent electrodes in devices. Our ability to distinguish defects, inhomogeneities, or inactive areas at the scale of the entire network is therefore a critical issue. We propose one-probe electrical mapping (1P-mapping) as a specific simple tool to study the electrical distribution in these discrete structures. 1P-mapping has allowed us to show that the tortuosity of the voltage equipotential lines of AgNW networks under bias decreases with increasing network density, leading to a better electrical homogeneity. The impact of the network fabrication technique on the electrical homogeneity of the resulting electrode has also been investigated. Then, by combining 1P-mapping with electrical resistance measurements and IR thermography, we propose a comprehensive analysis of the evolution of the electrical distribution in AgNW networks when subjected to increasing voltage stresses. We show that AgNW networks experience three distinctive stages: optimization, degradation, and breakdown. We also demonstrate that the failure dynamics of AgNW networks at high voltages occur through a highly correlated and spatially localized mechanism. In particular, the in situ formation of cracks could be clearly visualized: it consists of creation of a crack followed by propagation nearly parallel to the equipotential lines. Finally, we show that current can dynamically redistribute during failure, by following partially damaged secondary pathways through the crack.
NASA Astrophysics Data System (ADS)
Agiotis, L.; Theodorakos, I.; Samothrakitis, S.; Papazoglou, S.; Zergioti, I.; Raptis, Y. S.
2016-03-01
Magnetic nanoparticles (MNPs), such as superparamagnetic iron oxide nanoparticles (SPIONs), have attracted major interest for drug delivery applications due to their small size and unique magnetic properties. In this context, iron oxide nanoparticles of magnetite (Fe3O4) (150 nm magnetic core diameter) were used as drug carriers, aiming to form a magnetically controlled nano-platform. The navigation capabilities of the iron oxide nanoparticles in a microfluidic channel were investigated by simulating the magnetic field and the magnetic force applied on the magnetic nanoparticles inside a microfluidic chip. The simulations were performed using the finite element method (ANSYS software). The optimum setup, intended to simulate the magnetic navigation of the nanoparticles by the use of MRI-type fields in the human circulatory system, consists of two parallel permanent magnets to produce a homogeneous magnetic field, in order to ensure the maximum magnetization of the magnetic nanoparticles, an electromagnet for the induction of the magnetic gradients and the creation of the magnetic force, and a microfluidic setup to simulate the blood flow inside the human blood vessels. The magnetization of the superparamagnetic nanoparticles and the consequent magnetic torque developed by the two permanent magnets, together with the mutual interactions between the magnetized nanoparticles, lead to the creation of rhabdoid aggregates in the direction of the homogeneous field. Additionally, the magnetic gradients introduced by the operation of the electromagnet are capable of directing the aggregates, as a whole, in the desired direction. By removing the magnetic fields, the aggregates are disrupted, due to the superparamagnetic nature of the nanoparticles, thus avoiding the formation of undesired thrombosis.
Self-leveling 2D DPN probe arrays
NASA Astrophysics Data System (ADS)
Haaheim, Jason R.; Val, Vadim; Solheim, Ed; Bussan, John; Fragala, J.; Nelson, Mike
2010-02-01
Dip Pen Nanolithography® (DPN®) is a direct write scanning probe-based technique which operates under ambient conditions, making it suitable to deposit a wide range of biological and inorganic materials. Precision nanoscale deposition is a fundamental requirement to advance nanoscale technology in commercial applications, and tailoring chemical composition and surface structure on the sub-100 nm scale benefits researchers in areas ranging from cell adhesion to cell-signaling and biomimetic membranes. These capabilities naturally suggest a "Desktop Nanofab" concept - a turnkey system that allows a non-expert user to rapidly create high resolution, scalable nanostructures drawing upon well-characterized ink and substrate pairings. In turn, this system is fundamentally supported by a portfolio of MEMS devices tailored for microfluidic ink delivery, directed placement of nanoscale materials, and cm² tip arrays for high-throughput nanofabrication. Massively parallel two-dimensional nanopatterning is now commercially available via NanoInk's 2D nano PrintArray™, making DPN a high-throughput (>3×10⁷ μm² per hour), flexible and versatile method for precision nanoscale pattern formation. However, cm² arrays of nanoscopic tips introduce the nontrivial problem of getting them all evenly touching the surface to ensure homogeneous deposition; this requires extremely precise leveling of the array. Herein, we describe how we have made the process simple by way of a self-leveling gimbal attachment, coupled with semi-automated software leveling routines which bring the cm² chip to within 0.002 degrees of co-planarity. This excellent co-planarity yields highly homogeneous features across a square centimeter, with <6% feature size standard deviation. We have engineered the devices to be easy to use, wire-free, and fully integrated with both of our patterning tools: the DPN 5000 and the NLP 2000.
Analysis of an integrated 8-channel Tx/Rx body array for use as a body coil in 7-Tesla MRI
NASA Astrophysics Data System (ADS)
Orzada, Stephan; Bitz, Andreas K.; Johst, Sören; Gratz, Marcel; Völker, Maximilian N.; Kraff, Oliver; Abuelhaija, Ashraf; Fiedler, Thomas M.; Solbach, Klaus; Quick, Harald H.; Ladd, Mark E.
2017-06-01
Object In this work an 8-channel array integrated into the gap between the gradient coil and bore liner of a 7-Tesla whole-body magnet is presented that would allow a workflow closer to that of systems at lower magnetic fields that have a built-in body coil; this integrated coil is compared to a local 8-channel array built from identical elements placed directly on the patient. Materials and Methods SAR efficiency and the homogeneity of the right-rotating B1 field component (B_1^+) are investigated numerically and compared to the local array. Power efficiency measurements are performed in the MRI system. First in vivo gradient echo images are acquired with the integrated array. Results While the remote array shows a slightly better performance in terms of B_1^+ homogeneity, the power efficiency and the SAR efficiency are inferior to those of the local array: the transmit voltage has to be increased by a factor of 3.15 to achieve equal flip angles in a central axial slice. The g-factor calculations show a better parallel imaging g-factor for the local array. The field of view of the integrated array is larger than that of the local array. First in vivo images with the integrated array look subjectively promising. Conclusion Although some RF performance parameters of the integrated array are inferior to a tight-fitting local array, these disadvantages might be compensated by the use of amplifiers with higher power and the use of local receive arrays. In addition, the distant placement provides the potential to include more elements in the array design.
Parallel/distributed direct method for solving linear systems
NASA Technical Reports Server (NTRS)
Lin, Avi
1990-01-01
A new family of parallel schemes for directly solving linear systems is presented and analyzed. It is shown that these schemes exhibit a near optimal performance and enjoy several important features: (1) For large enough linear systems, the design of the appropriate parallel algorithm is insensitive to the number of processors, as its performance grows monotonically with them; (2) It is especially good for large matrices, with dimensions large relative to the number of processors in the system; (3) It can be used in both distributed parallel computing environments and tightly coupled parallel computing systems; and (4) This set of algorithms can be mapped onto any parallel architecture without any major programming difficulties or algorithmic changes.
Integration experiences and performance studies of A COTS parallel archive systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hsing-bung; Scott, Cody; Grider, Gary
2010-01-01
Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interfaces, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same things but at one or more orders of magnitude faster performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, adopting features such as more caching and less robust semantics, especially for metadata searching speed. Currently the number of extremely scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products, including (a) parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high-volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner, and demonstrated its capability to address requirements of future archival storage systems.
Effect of illumination on the dielectrical properties of P3HT:PC70BM nanocomposites
NASA Astrophysics Data System (ADS)
Hamza, Saidi; Mhamdi, Asya; Aloui, Walid; Bouazizi, Abdelaziz; Khirouni, Kamel
2017-05-01
In this work, the effects of light-generated carriers on the dielectric properties of the structure ITO/PEDOT:PSS/P3HT:PC70BM/Al were investigated. Impedance spectroscopy was performed at an applied bias equal to the open-circuit voltage. From the real and imaginary parts of the impedance, a dipolar relaxation type was observed, which decreased in the presence of light due to an increase in the electron mobility. Fitting the Cole-Cole diagram with a parallel R-CPE equivalent circuit allows comparison of the parallel resistance (R_p) and capacitance (CPE) in the dark and under illumination. The decrease of R_p is related to the increase in the photo-generated charge carrier density. The increase in the capacitance is related to the enhancement of the P3HT/PCBM interface homogeneity.
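The abstract names the equivalent circuit but not the fitting procedure; below is a minimal sketch of fitting a parallel R-CPE model to impedance data, assuming the standard form Z(ω) = R_p / (1 + R_p·Q·(jω)^n) and using synthetic data in place of measurements (R_p, Q, n values are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def z_r_cpe(omega, R, Q, n):
    """Impedance of a resistor R in parallel with a CPE (Q, n)."""
    return R / (1.0 + R * Q * (1j * omega) ** n)

def z_stacked(omega, R, Q, n):
    """Stack real and imaginary parts so curve_fit can handle complex data."""
    z = z_r_cpe(omega, R, Q, n)
    return np.concatenate([z.real, z.imag])

omega = np.logspace(0, 6, 100)            # rad/s, hypothetical sweep
z_meas = z_r_cpe(omega, 1e4, 1e-8, 0.9)   # synthetic "measurement"
data = np.concatenate([z_meas.real, z_meas.imag])
popt, _ = curve_fit(z_stacked, omega, data, p0=[1e4, 1e-9, 1.0])
print("R_p = %.3g ohm, Q = %.3g, n = %.3g" % tuple(popt))
```

Comparing the fitted R_p (and CPE parameters) between dark and illuminated sweeps then quantifies the trends described above.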
NASA Astrophysics Data System (ADS)
Simioni, M.; Bedin, L. R.; Aparicio, A.; Piotto, G.; Milone, A. P.; Nardiello, D.; Anderson, J.; Bellini, A.; Brown, T. M.; Cassisi, S.; Cunial, A.; Granata, V.; Ortolani, S.; van der Marel, R. P.; Vesperini, E.
2018-05-01
As part of the Hubble Space Telescope UV Legacy Survey of Galactic globular clusters, 110 parallel fields were observed with the Wide Field Channel of the Advanced Camera for Surveys, in the outskirts of 48 globular clusters, plus the open cluster NGC 6791. Totalling about 0.3 deg² of observed sky, this is the largest homogeneous Hubble Space Telescope photometric survey of Galactic globular cluster outskirts to date. In particular, two distinct pointings have been obtained for each target on average, all centred at about 6.5 arcmin from the cluster centre, thus covering a mean area of about 23 arcmin² for each globular cluster. For each field, at least one exposure in both F475W and F814W filters was collected. In this work, we publicly release the astrometric and photometric catalogues and the astrometrized atlases for each of these fields.
On the stability of nongyrotropic ion populations - A first (analytic and simulation) assessment
NASA Technical Reports Server (NTRS)
Brinca, A. L.; Borda De Agua, L.; Winske, D.
1993-01-01
The wave and dispersion equations for perturbations propagating parallel to an ambient magnetic field in magnetoplasmas with nongyrotropic ion populations show, in general, the occurrence of coupling between the parallel (left- and right-hand circularly polarized electromagnetic and longitudinal electrostatic) eigenmodes of the associated gyrotropic medium. These interactions provide a means of linearly driving one mode with the free-energy sources of other modes in homogeneous media. Different types of nongyrotropy bring about distinct classes of coupling. The stability of a hydrogen magnetoplasma with anisotropic, nongyrotropic protons that only couple the electromagnetic modes to each other is investigated analytically (via solution of the derived dispersion equation) and numerically (via simulation with a hybrid code). Nongyrotropy enhances growth and enlarges the unstable spectral range relative to the corresponding gyrotropic situation. The relevance of the properties of nongyrotropic populations to space plasma environments is also discussed.
McElcheran, Clare E.; Yang, Benson; Anderson, Kevan J. T.; Golenstani-Rad, Laleh; Graham, Simon J.
2015-01-01
Deep Brain Stimulation (DBS) is increasingly used to treat a variety of brain diseases by sending electrical impulses to deep brain nuclei through long, electrically conductive leads. Magnetic resonance imaging (MRI) of patients pre- and post-implantation is desirable to target and position the implant, to evaluate possible side-effects and to examine DBS patients who have other health conditions. Although MRI is the preferred modality for pre-operative planning, MRI post-implantation is limited due to the risk of high local power deposition, and therefore tissue heating, at the tip of the lead. The localized power deposition arises from currents induced in the leads caused by coupling with the radiofrequency (RF) transmission field during imaging. In the present work, parallel RF transmission (pTx) is used to tailor the RF electric field to suppress coupling effects. Electromagnetic simulations were performed for three pTx coil configurations with 2, 4, and 8-elements, respectively. Optimal input voltages to minimize coupling, while maintaining RF magnetic field homogeneity, were determined for all configurations using a Nelder-Mead optimization algorithm. Resulting electric and magnetic fields were compared to that of a 16-rung birdcage coil. Experimental validation was performed with a custom-built 4-element pTx coil. In simulation, 95-99% reduction of the electric field at the tip of the lead was observed between the various pTx coil configurations and the birdcage coil. Maximal reduction in E-field was obtained with the 8-element pTx coil. Magnetic field homogeneity was comparable to the birdcage coil for the 4- and 8-element pTx configurations. In experiment, a temperature increase of 2±0.15°C was observed at the tip of the wire using the birdcage coil, whereas negligible increase (0.2±0.15°C) was observed with the optimized pTx system. Although further research is required, these initial results suggest that the concept of optimizing pTx to reduce DBS heating effects holds considerable promise. PMID:26237218
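The study names a Nelder-Mead optimization of channel drive voltages; below is a minimal sketch of that kind of optimization, minimizing coupling (|E| at the lead tip) while holding |B1+| near a target over a region of interest. The channel count, field maps, and penalty weight are hypothetical stand-ins for quantities that would come from electromagnetic simulation:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical precomputed per-channel field maps (from EM simulation):
# e_tip[c]  : complex E-field at the lead tip per unit drive of channel c
# b1_map[c] : complex B1+ over a flattened ROI per unit drive of channel c
rng = np.random.default_rng(0)
n_ch = 4
e_tip = rng.standard_normal(n_ch) + 1j * rng.standard_normal(n_ch)
b1_map = rng.standard_normal((n_ch, 200)) + 1j * rng.standard_normal((n_ch, 200))

def cost(x, w=10.0, target=1.0):
    v = x[:n_ch] + 1j * x[n_ch:]               # complex drive voltages
    e = abs(np.dot(v, e_tip))                  # coupling term to suppress
    b1 = np.abs(v @ b1_map)                    # achieved |B1+| in the ROI
    return e + w * np.mean((b1 - target) ** 2) # keep |B1+| near target

x0 = np.concatenate([np.ones(n_ch), np.zeros(n_ch)])
res = minimize(cost, x0, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6})
v_opt = res.x[:n_ch] + 1j * res.x[n_ch:]
print("tip |E| after optimization:", abs(np.dot(v_opt, e_tip)))
```

The quadratic B1+ penalty prevents the trivial all-zero solution while the first term drives the tip E-field toward zero, mirroring the trade-off described in the abstract.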
SU-E-T-765: Treatment Planning Comparison of SFUD Proton and 4π Radiotherapy for Prostate Cases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tran, A; Woods, K; Yu, V
2015-06-15
Purpose: Single-Field Uniform Dose (SFUD) proton scanning beams and non-coplanar 4π intensity-modulated radiation therapy (IMRT) represent the most advanced treatment methods based on heavy ion and X-rays, respectively. Here we compare their performance for prostate treatment. Methods: Five prostate patients were planned using 4π radiotherapy and SFUD to an initial dose of 54 Gy to a planning target volume (PTV) that encompassed the prostate and seminal vesicles, then a boost prescription dose of 25.2 Gy to the prostate for a total dose of 79.2 Gy. 4π plans were created by inversely selecting and optimizing 30 beams from 1162 candidate non-coplanar beams using a greedy column generation algorithm. The SFUD plans utilized two coplanar, parallel-opposing lateral scanning beams. The SFUD plan PTV was modified to account for range uncertainties while keeping an evaluation PTV identical to that of the X-ray plans for comparison. PTV doses, bladder and rectum dose volumes (V40, V45, V60, V70, V75.6, and V80), R50, and PTV homogeneity index (D95/D5) were evaluated. Results: Compared to SFUD, 4π resulted in 6.8% lower high dose spillage as indicated by R50. Bladder and rectum mean doses were 38.3% and 28.2% lower for SFUD, respectively. However, bladder and rectum volumes receiving >70 Gy were 13.1% and 12% greater using proton SFUD. Due to the parallel-opposing beam arrangement, SFUD resulted in greater femoral head (87.8%) and penile bulb doses (43.7%). 4π PTV doses were slightly more homogeneous (HI 0.99 vs. 0.98) than the SFUD dose. Conclusion: Proton is physically advantageous to reduce the irradiated normal volume and mean doses to the rectum and bladder but it is also limited in the beam orientations and entrance dose, which resulted in greater doses to the femoral heads and penile bulb, and larger volumes of rectum and bladder exposed to high dose due to the required robust PTV definition. This project is supported by Varian Medical Systems.
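For reference, the homogeneity index used here is HI = D95/D5, where D95 is the dose received by at least 95% of the PTV (the 5th percentile of voxel doses) and D5 by at least 5% (the 95th percentile). A minimal sketch of its computation, with hypothetical voxel doses:

```python
import numpy as np

def homogeneity_index(dose):
    """HI = D95 / D5; values near 1 indicate a homogeneous PTV dose."""
    d95 = np.percentile(dose, 5)    # 95% of voxels receive at least this
    d5 = np.percentile(dose, 95)    # 5% of voxels receive at least this
    return d95 / d5

rng = np.random.default_rng(0)
ptv_dose = rng.normal(79.2, 1.0, 10_000)   # hypothetical PTV voxel doses (Gy)
print("HI = %.3f" % homogeneity_index(ptv_dose))
```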
Wang, Haipeng; Qiu, Liyun; Wang, Guangbin; Gao, Fei; Jia, Haipeng; Zhao, Junyu; Chen, Weibo; Wang, Cuiyan; Zhao, Bin
2017-06-01
Cardiac magnetic resonance (CMR) of children at 3.0 T presents a unique set of technical challenges because of their small cardiac anatomical structures, fast heart rates, and limited ability to remain motionless and hold their breath, which can cause problems associated with field inhomogeneity and degrade image quality. The aim of our study was to evaluate the effect of dual-source parallel radiofrequency (RF) transmission on B1 homogeneity and image quality in children undergoing CMR at 3.0 T. The study was approved by the institutional ethics committee and written informed consent was obtained. A total of 30 free-breathing children and 30 breath-hold children underwent CMR examinations with dual-source and single-source RF transmission. The B1 homogeneity, contrast ratio (CR) of cine images, and off-resonance artifacts in cine images were compared between dual-source and single-source RF transmission in the free-breathing and breath-hold groups, respectively. In both free-breathing and breath-hold groups, a higher mean percentage of flip angle (free-breathing group: 104.2 ± 4.6 vs 95.5 ± 6.3, P < .001; breath-hold group: 101.5 ± 5.1 vs 92.5 ± 6.3, P < .001) and a lower coefficient of variation (free-breathing group: 0.06 ± 0.02 vs 0.09 ± 0.03, P < .001; breath-hold group: 0.07 ± 0.03 vs 0.10 ± 0.04, P = .005) were found with dual-source than with single-source RF transmission. The CRs in both the horizontal long axis (HLA) and short axis of cine images were improved with dual-source RF transmission (P < .05 for all). The scores of off-resonance artifacts in the HLA with dual-source RF transmission were higher in both free-breathing and breath-hold groups (P < .05 for all), with substantial interreader agreement (kappa values from 0.68 to 0.74). Compared with conventional single-source, dual-source parallel RF transmission could significantly improve the B1 homogeneity and image quality for CMR in children at 3.0 T. This technology could be taken into account in CMR for children with cardiac diseases.
Huang, Jianhua
2012-07-01
There are three methods for calculating thermal insulation of clothing measured with a thermal manikin, i.e. the global method, the serial method, and the parallel method. Under the condition of homogeneous clothing insulation, these three methods yield the same insulation values. If the local heat flux is uniform over the manikin body, the global and serial methods provide the same insulation value. In most cases, the serial method gives a higher insulation value than the global method. There is a possibility that the insulation value from the serial method is lower than the value from the global method. The serial method always gives higher insulation value than the parallel method. The insulation value from the parallel method is higher or lower than the value from the global method, depending on the relationship between the heat loss distribution and the surface temperatures. Under the circumstance of uniform surface temperature distribution over the manikin body, the global and parallel methods give the same insulation value. If the constant surface temperature mode is used in the manikin test, the parallel method can be used to calculate the thermal insulation of clothing. If the constant heat flux mode is used in the manikin test, the serial method can be used to calculate the thermal insulation of clothing. The global method should be used for calculating thermal insulation of clothing for all manikin control modes, especially for thermal comfort regulation mode. The global method should be chosen by clothing manufacturers for labelling their products. The serial and parallel methods provide more information with respect to the different parts of clothing.
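A minimal sketch of the three calculation methods, using the standard per-segment definitions (local insulation I_i = (t_i − t_a)/q_i with area fractions f_i). These formulas reproduce the equalities stated above: uniform flux makes serial equal global, and uniform surface temperature makes parallel equal global. The segment data are hypothetical:

```python
import numpy as np

# Hypothetical per-segment manikin data
f = np.array([0.3, 0.5, 0.2])        # area fractions (sum to 1)
ts = np.array([34.0, 33.0, 32.0])    # segment surface temperatures, degC
ta = 20.0                            # ambient temperature, degC
q = np.array([40.0, 55.0, 70.0])     # segment heat fluxes, W/m^2

Ii = (ts - ta) / q                   # local insulation per segment, m^2.K/W

I_serial = np.sum(f * Ii)                          # area-weighted resistances
I_parallel = 1.0 / np.sum(f / Ii)                  # area-weighted conductances
I_global = (np.sum(f * ts) - ta) / np.sum(f * q)   # one lumped resistance
print("serial %.4f, parallel %.4f, global %.4f" %
      (I_serial, I_parallel, I_global))
```

By the weighted arithmetic-harmonic mean inequality, I_serial ≥ I_parallel always, matching the abstract's statement.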
Gaze control for an active camera system by modeling human pursuit eye movements
NASA Astrophysics Data System (ADS)
Toelg, Sebastian
1992-11-01
The ability to stabilize the image of one moving object in the presence of others by active movements of the visual sensor is an essential task for biological systems, as well as for autonomous mobile robots. An algorithm is presented that evaluates the necessary movements from acquired visual data and controls an active camera system (ACS) in a feedback loop. No a priori assumptions about the visual scene and objects are needed. The algorithm is based on functional models of human pursuit eye movements and is to a large extent influenced by structural principles of neural information processing. An intrinsic object definition based on the homogeneity of the optical flow field of relevant objects, i.e., moving mainly fronto-parallel, is used. Velocity and spatial information are processed in separate pathways, resulting in either smooth or saccadic sensor movements. The program generates a dynamic shape model of the moving object and focuses its attention to regions where the object is expected. The system proved to behave in a stable manner under real-time conditions in complex natural environments and manages general object motion. In addition it exhibits several interesting abilities well-known from psychophysics like: catch-up saccades, grouping due to coherent motion, and optokinetic nystagmus.
Implementation and performance of parallel Prolog interpreter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, S.; Kale, L.V.; Balkrishna, R.
1988-01-01
In this paper, the authors discuss the implementation of a parallel Prolog interpreter on different parallel machines. The implementation is based on the REDUCE-OR process model, which exploits both AND and OR parallelism in logic programs. It is machine independent as it runs on top of the chare-kernel, a machine-independent parallel programming system. The authors also give the performance of the interpreter running a diverse set of benchmark programs on parallel machines, including shared memory systems (an Alliant FX/8, a Sequent, and a MultiMax) and a non-shared memory system (an Intel iPSC/32 hypercube), in addition to its performance on a multiprocessor simulation system.
Is the global mean temperature trend too low?
NASA Astrophysics Data System (ADS)
Venema, Victor; Lindau, Ralf
2015-04-01
The global mean temperature trend may be biased due to similar technological and economic developments worldwide. In this study we present a number of recent results suggesting that the global mean temperature trend might be steeper than generally thought. In the Global Historical Climate Network version 3 (GHCNv3) the global land surface temperature is estimated to have increased by about 0.8°C between 1880 and 2012. In the raw temperature record, the increase is 0.6°C; the 0.2°C difference is due to homogenization adjustments. Given that homogenization can only reduce biases, this 0.2°C stems from a partial correction of bias errors, and it seems likely that the real non-climatic trend bias is larger. Especially in regions with sparser networks, homogenization will not be able to improve the trend much. Thus if the trend bias in these regions is similar to the bias for denser networks (industrialized countries), one would expect the real bias to be larger. Stations in sparse networks are representative of a larger region and are given more weight in the computation of the global mean temperature. If all stations are given equal weight, the homogenization adjustments of the GHCNv3 dataset are about 0.4°C per century. In the subdaily HadISH dataset, one break with a mean size of 0.12°C is found every 15 years for the period 1973-2013. That would be a trend bias of 0.78°C per century on a station-by-station basis. Unfortunately, these estimates strongly reflect Western countries, which have more stations. It is known from the literature that rich countries have a (statistically insignificant) stronger trend in the global datasets. Regional datasets can be better homogenized than global ones, the main reason being that global datasets do not contain all stations known to the weather services. Furthermore, global datasets use automatic homogenization methods and have less or no metadata. Thus, while regional data can be biased themselves, comparing them with global datasets can provide some indication of biases. Compared to the global BEST dataset for the same countries, the national datasets of Austria, Italy and Switzerland have a 0.36°C per century stronger trend since 1901. For the trend since 1960 we can also take Australia, France and Slovenia into account and find a trend bias of 0.40°C per century. Relative to CRUCY the trend biases are smaller and only statistically significant for the period since 1980. The most direct way to study biases in the temperature records is by making parallel measurements with historical measurement set-ups. Several recent parallel data studies for the transition to Stevenson screens suggest larger biases: Austria 0.2°C and Spain 0.5 and 0.6°C, as well as older tropical ones: India 0.42°C and Sri Lanka 0.37°C. The smaller values from the Parker (1994) review mainly stem from parallel measurements in North-West Europe, which may have fewer problems with exposure. Furthermore, the influence of many historical transitions, especially those that could cause an artificially smaller trend, has not been studied in detail yet. We urgently need to study improvements in exposure (especially in the (sub-)tropics), increases in watering and irrigation, mechanical ventilation, better paints, relocations to airports, and relocations of stations that started in city centers to suburbs and from village centers to pasture, for example.
Our current understanding surprisingly suggests that the more recent period may have the largest biases, but it could also be that even the best datasets are unable to improve earlier data sufficiently. If the temperature trend were actually larger, it would reduce discrepancies between studies for a number of problems in climatology. For example, estimates of transient climate sensitivity using instrumental data are lower than those using climate models, volcanic eruptions or paleo data. Furthermore, several changes observed in the climate system are larger than expected. On the other hand, a larger trend in the land surface temperature would make the discrepancy with the tropospheric temperature even larger (radiosondes and satellites) and would introduce a larger difference between land and sea temperature trends. Concluding, at the moment there is no strong evidence yet that the temperature trend is underestimated. However, we do have a considerable amount of evidence suggesting a moderate, but climatologically important, bias that should be studied with urgency. As far as we know, there are no estimates of the remaining uncertainty in the global mean trend after homogenization. Studies into the causes of cooling biases are also a pressing need. (Many have contributed to this study, but it is not clear at this moment who would be official collaborators; they will be added later.)
Performance of electrolyte measurements assessed by a trueness verification program.
Ge, Menglei; Zhao, Haijian; Yan, Ying; Zhang, Tianjiao; Zeng, Jie; Zhou, Weiyan; Wang, Yufei; Meng, Qinghui; Zhang, Chuanbao
2016-08-01
In this study, we analyzed frozen sera with known commutabilities for standardization of serum electrolyte measurements in China. Fresh frozen sera were sent to 187 clinical laboratories in China for measurement of four electrolytes (sodium, potassium, calcium, and magnesium). Target values were assigned by two reference laboratories. Precision (CV), trueness (bias), and accuracy [total error (TE)] were used to evaluate measurement performance, and the tolerance limit derived from the biological variation was used as the evaluation criterion. About half of the laboratories used a homogeneous system (same manufacturer for instrument, reagent and calibrator) for calcium and magnesium measurement, and more than 80% of laboratories used a homogeneous system for sodium and potassium measurement. More laboratories met the tolerance limit of imprecision (CVa) than the tolerance limits of trueness (biasa) and total error (TEa). For sodium, calcium, and magnesium, the minimal performance criterion derived from biological variation was used, and the pass rates for total error were approximately equal to those for bias (<50%). For potassium, the pass rates for CV and TE were more than 90%. Compared with the non-homogeneous systems, the homogeneous systems were superior for all three quality specifications. The use of commutable proficiency testing/external quality assessment (PT/EQA) samples with values assigned by reference methods can monitor performance and provide reliable data for improving the performance of laboratory electrolyte measurement. The homogeneous systems were superior to the non-homogeneous systems, whereas accuracy of assigned values of calibrators and assay stability remained challenges.
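As an illustration of how the three quality specifications relate, total error is commonly combined linearly from bias and imprecision as TE = |bias| + 1.96·CV. A minimal sketch with hypothetical potassium values and tolerance limit:

```python
def total_error(bias_pct, cv_pct, z=1.96):
    """Linear total-error model: TE = |bias| + z * CV (all in percent)."""
    return abs(bias_pct) + z * cv_pct

# Hypothetical potassium result: 1.2% bias, 1.5% CV, 5.8% allowable TE
te = total_error(1.2, 1.5)
print("TE = %.2f%% -> %s" % (te, "pass" if te < 5.8 else "fail"))
```

This makes clear why a laboratory can pass the CV criterion yet fail TEa: a large bias dominates the combined error even when imprecision is small.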
NASA Technical Reports Server (NTRS)
Lin, Ruei-Fong; Starr, David OC; DeMott, Paul J.; Cotton, Richard; Sassen, Kenneth; Jensen, Eric; Einaudi, Franco (Technical Monitor)
2001-01-01
The Cirrus Parcel Model Comparison Project, a project of the GCSS (GEWEX Cloud System Studies) Working Group on Cirrus Cloud Systems, involves the systematic comparison of current models of ice crystal nucleation and growth for specified, typical, cirrus cloud environments. In Phase I of the project reported here, simulated cirrus cloud microphysical properties are compared for situations of "warm" (-40°C) and "cold" (-60°C) cirrus, both subject to updrafts of 4, 20 and 100 centimeters per second. Five models participated. The various models employ explicit microphysical schemes wherein the size distribution of each class of particles (aerosols and ice crystals) is resolved into bins or treated separately. Simulations are made including both the homogeneous and heterogeneous ice nucleation mechanisms. A single initial aerosol population of sulfuric acid particles is prescribed for all simulations. To isolate the treatment of the homogeneous freezing (of haze droplets) nucleation process, the heterogeneous nucleation mechanism is disabled for a second parallel set of simulations. Qualitative agreement is found for the homogeneous-nucleation-only simulations, e.g., the number density of nucleated ice crystals increases with the strength of the prescribed updraft. However, significant quantitative differences are found. Detailed analysis reveals that the homogeneous nucleation rate, haze particle solution concentration, and water vapor uptake rate by ice crystal growth (particularly as controlled by the deposition coefficient) are critical components that lead to differences in predicted microphysics. Systematic bias exists between results based on a modified classical theory approach and models using an effective freezing temperature approach to the treatment of nucleation. Each approach is constrained by critical freezing data from laboratory studies, but each includes assumptions that can only be justified by further laboratory research. Consequently, it is not yet clear if the two approaches can be made consistent. Large haze particles may deviate considerably from equilibrium size in moderate to strong updrafts (20-100 centimeters per second) at -60°C when the commonly invoked equilibrium assumption is lifted. The resulting difference in particle-size-dependent solution concentration of haze particles may significantly affect the ice particle formation rate during the initial nucleation interval. The uptake rate for water vapor excess by ice crystals is another key component regulating the total number of nucleated ice crystals. This rate, the product of particle number concentration and ice crystal diffusional growth rate, which is particularly sensitive to the deposition coefficient when ice particles are small, modulates the peak particle formation rate achieved in an air parcel and the duration of the active nucleation time period. The effects of heterogeneous nucleation are most pronounced in weak updraft situations. Vapor competition by the heterogeneously nucleated ice crystals may limit the achieved ice supersaturation and thus suppresses the contribution of homogeneous nucleation. Correspondingly, ice crystal number density is markedly reduced. Definitive laboratory and atmospheric benchmark data are needed for the heterogeneous nucleation process. Inter-model differences are correspondingly greater than in the case of the homogeneous nucleation process acting alone.
Parallel processing and expert systems
NASA Technical Reports Server (NTRS)
Lau, Sonie; Yan, Jerry C.
1991-01-01
Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, J; Hu, W; Xing, Y
Purpose: Different particle scanning beam delivery systems have different delivery accuracies. This study was performed to determine, for our particle treatment system, an appropriate ratio (n = FWHM/GS) of spot size (FWHM) to grid size (GS) that can provide homogeneous delivered dose distributions for both proton and heavy ion scanning beam radiotherapy. Methods: We analyzed the delivery errors of our beam delivery system using log files from the treatment of 28 patients. We used a homemade program to simulate square fields for different n values with and without considering the delivery errors and analyzed the homogeneity. All spots were located on a rectilinear grid with equal spacing in the x and y directions. After that, we selected 7 energy levels for both proton and carbon ions. For each energy level, we made 6 square field plans with different n values (1, 1.5, 2, 2.5, 3, 3.5). Then we delivered those plans and used films to measure the homogeneity of each field. Results: For program simulation without delivery errors, the homogeneity can be within ±3% when n≥1.1. For both proton and carbon program simulations with delivery errors and film measurements, the homogeneity can be within ±3% when n≥2.5. Conclusion: For our facility with system errors, n≥2.5 is appropriate for maintaining homogeneity within ±3%.
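A minimal sketch of the kind of error-free simulation described: identical Gaussian spots are summed on a square grid with spacing GS = FWHM/n, and the dose ripple is evaluated in the central region. Delivery errors (spot position and weight perturbations) would be added on top of this in the full study; the field size, FWHM, and ripple metric here are assumptions:

```python
import numpy as np

def dose_ripple(n, fwhm=10.0, grid_pts=11, samples=200):
    """Sum identical Gaussian spots on a square grid with spacing
    GS = FWHM/n and return the peak-to-peak dose ripple (percent)
    in a central patch, far from the field edges."""
    sigma = fwhm / 2.3548                  # FWHM = 2*sqrt(2*ln 2)*sigma
    gs = fwhm / n
    pos = (np.arange(grid_pts) - grid_pts // 2) * gs
    x = np.linspace(-gs, gs, samples)      # central patch between spots
    X, Y = np.meshgrid(x, x)
    dose = np.zeros_like(X)
    for px in pos:
        for py in pos:
            dose += np.exp(-((X - px)**2 + (Y - py)**2) / (2 * sigma**2))
    return 100 * (dose.max() - dose.min()) / (dose.max() + dose.min())

for n in (1.0, 1.5, 2.0, 2.5, 3.0):
    print("n = %.1f -> ripple = %.3f%%" % (n, dose_ripple(n)))
```

With such a sweep, the ripple drops steeply once the spot spacing falls below the spot FWHM, consistent with the error-free n≥1.1 result quoted above.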
Transport of cosmic-ray protons in intermittent heliospheric turbulence: Model and simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alouani-Bibi, Fathallah; Le Roux, Jakobus A., E-mail: fb0006@uah.edu
The transport of charged energetic particles in the presence of strong intermittent heliospheric turbulence is computationally analyzed based on known properties of the interplanetary magnetic field and solar wind plasma at 1 astronomical unit. The turbulence is assumed to be static, composite, and quasi-three-dimensional with a varying energy distribution between a one-dimensional Alfvénic (slab) and a structured two-dimensional component. The spatial fluctuations of the turbulent magnetic field are modeled either as homogeneous with a Gaussian probability distribution function (PDF), or as intermittent on large and small scales with a q-Gaussian PDF. Simulations showed that energetic particle diffusion coefficients both parallel and perpendicular to the background magnetic field are significantly affected by intermittency in the turbulence. This effect is especially strong for parallel transport, where for large-scale intermittency results show an extended phase of subdiffusive parallel transport during which cross-field transport diffusion dominates. The effects of intermittency are found to depend on particle rigidity and the fraction of slab energy in the turbulence, yielding a perpendicular to parallel mean free path ratio close to 1 for large-scale intermittency. Investigation of higher order transport moments (kurtosis) indicates that non-Gaussian statistical properties of the intermittent turbulent magnetic field are present in the parallel transport, especially for low rigidity particles at all times.
Deniz, Cem M; Vaidya, Manushka V; Sodickson, Daniel K; Lattanzi, Riccardo
2016-01-01
We investigated global specific absorption rate (SAR) and radiofrequency (RF) power requirements in parallel transmission as the distance between the transmit coils and the sample was increased. We calculated ultimate intrinsic SAR (UISAR), which depends on object geometry and electrical properties but not on coil design, and we used it as the reference to compare the performance of various transmit arrays. We investigated the case of fixing coil size and increasing the number of coils while moving the array away from the sample, as well as the case of fixing coil number and scaling coil dimensions. We also investigated RF power requirements as a function of lift-off, and tracked local SAR distributions associated with global SAR optima. In all cases, the target excitation profile was achieved and global SAR (as well as associated maximum local SAR) decreased with lift-off, approaching UISAR, which was constant for all lift-offs. We observed a lift-off value that optimizes the balance between global SAR and power losses in coil conductors. We showed that, using parallel transmission, global SAR can decrease at ultra high fields for finite arrays with a sufficient number of transmit elements. For parallel transmission, the distance between coils and object can be optimized to reduce SAR and minimize RF power requirements associated with homogeneous excitation. © 2015 Wiley Periodicals, Inc.
Implementation of logic functions and computations by chemical kinetics
NASA Astrophysics Data System (ADS)
Hjelmfelt, A.; Ross, J.
We review our work on the computational functions of the kinetics of chemical networks. We examine spatially homogeneous networks which are based on prototypical reactions occurring in living cells and show the construction of logic gates and sequential and parallel networks. This work motivates the study of an important biochemical pathway, glycolysis, and we demonstrate that the switch that controls the flux in the direction of glycolysis or gluconeogenesis may be described as a fuzzy AND operator. We also study a spatially inhomogeneous network which shares features of theoretical and biological neural networks.
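As a toy illustration of a kinetic logic gate (not the glycolytic switch itself), a single bimolecular reaction A + B → C yields appreciable output only when both inputs are supplied, approximating an AND operator; the rate constant, integration time, and unit input levels are arbitrary choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

def kinetic_and(a_in, b_in, k=1.0, t_end=50.0):
    """Toy kinetic AND gate: C is produced at rate k*A*B, so output
    accumulates only when both inputs are present."""
    def rhs(t, y):
        a, b, c = y
        rate = k * a * b
        return [-rate, -rate, rate]
    sol = solve_ivp(rhs, (0, t_end), [a_in, b_in, 0.0], rtol=1e-8)
    return sol.y[2, -1]

for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        # output ~ 0 unless both inputs are 1, when it approaches 1
        print(a, b, round(kinetic_and(a, b), 3))
```

Graded (rather than binary) input concentrations give graded outputs, which is the sense in which such chemical gates behave as fuzzy rather than Boolean operators.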
Iterative algorithms for large sparse linear systems on parallel computers
NASA Technical Reports Server (NTRS)
Adams, L. M.
1982-01-01
Algorithms are developed for assembling in parallel the sparse system of linear equations that results from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed, and results of this model for the algorithms are given.
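A minimal sketch of one parallel linear stationary iterative method, Jacobi iteration, whose per-component updates within a sweep are mutually independent and hence parallelize naturally (shown vectorized as a stand-in for a true multiprocessor implementation; the test matrix is a hypothetical diagonally dominant finite-difference system):

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=10_000):
    """Jacobi iteration: every component of x_new depends only on the
    previous iterate, so each sweep can be computed in parallel."""
    D = np.diag(A)                       # diagonal entries
    R = A - np.diagflat(D)               # off-diagonal remainder
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D          # all components updated at once
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical test: shifted 1D finite-difference Laplacian (strictly
# diagonally dominant, so Jacobi is guaranteed to converge)
n = 50
A = 3 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = jacobi(A, b)
print("residual:", np.linalg.norm(A @ x - b))
```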
Payen, Celia; Di Rienzi, Sara C; Ong, Giang T; Pogachar, Jamie L; Sanchez, Joseph C; Sunshine, Anna B; Raghuraman, M K; Brewer, Bonita J; Dunham, Maitreya J
2014-03-20
Population adaptation to strong selection can occur through the sequential or parallel accumulation of competing beneficial mutations. The dynamics, diversity, and rate of fixation of beneficial mutations within and between populations are still poorly understood. To study how the mutational landscape varies across populations during adaptation, we performed experimental evolution on seven parallel populations of Saccharomyces cerevisiae continuously cultured in limiting sulfate medium. By combining quantitative polymerase chain reaction, array comparative genomic hybridization, restriction digestion and contour-clamped homogeneous electric field gel electrophoresis, and whole-genome sequencing, we followed the trajectory of evolution to determine the identity and fate of beneficial mutations. During a period of 200 generations, the yeast populations displayed parallel evolutionary dynamics that were driven by the coexistence of independent beneficial mutations. Selective amplifications rapidly evolved under this selection pressure, in particular common inverted amplifications containing the sulfate transporter gene SUL1. Compared with single clones, detailed analysis of the populations uncovers a greater complexity whereby multiple subpopulations arise and compete despite a strong selection. The most common evolutionary adaptation to strong selection in these populations grown in sulfate limitation is determined by clonal interference, with adaptive variants both persisting and replacing one another.
Characterizing parallel file-access patterns on a large-scale multiprocessor
NASA Technical Reports Server (NTRS)
Purakayastha, A.; Ellis, Carla; Kotz, David; Nieuwejaar, Nils; Best, Michael L.
1995-01-01
High-performance parallel file systems are needed to satisfy tremendous I/O requirements of parallel scientific applications. The design of such high-performance parallel file systems depends on a comprehensive understanding of the expected workload, but so far there have been very few usage studies of multiprocessor file systems. This paper is part of the CHARISMA project, which intends to fill this void by measuring real file-system workloads on various production parallel machines. In particular, we present results from the CM-5 at the National Center for Supercomputing Applications. Our results are unique because we collect information about nearly every individual I/O request from the mix of jobs running on the machine. Analysis of the traces leads to various recommendations for parallel file-system design.
Segmented surface coil resonator for in vivo EPR applications at 1.1 GHz.
Petryakov, Sergey; Samouilov, Alexandre; Chzhan-Roytenberg, Michael; Kesselring, Eric; Sun, Ziqi; Zweier, Jay L
2009-05-01
A four-loop segmented surface coil resonator (SSCR) with electronic frequency and coupling adjustments was constructed with 18 mm aperture and loading capability suitable for in vivo Electron Paramagnetic Resonance (EPR) spectroscopy and imaging applications at L-band. Increased sample volume and loading capability were achieved by employing a multi-loop three-dimensional surface coil structure. Symmetrical design of the resonator with coupling to each loop resulted in high homogeneity of RF magnetic field. Parallel loops were coupled to the feeder cable via balancing circuitry containing varactor diodes for electronic coupling and tuning over a wide range of loading conditions. Manually adjusted high Q trimmer capacitors were used for initial tuning with subsequent tuning electronically controlled using varactor diodes. This design provides transparency and homogeneity of magnetic field modulation in the sample volume, while matching components are shielded to minimize interference with modulation and ambient RF fields. It can accommodate lossy samples up to 90% of its aperture with high homogeneity of RF and modulation magnetic fields and can function as a surface loop or a slice volume resonator. Along with an outer coaxial NMR surface coil, the SSCR enabled EPR/NMR co-imaging of paramagnetic probes in living rats to a depth of 20 mm.
Nano-catalysts: Bridging the gap between homogeneous and heterogeneous catalysis
Functionalized nanoparticles have emerged as sustainable alternatives to conventional materials, serving as robust, high-surface-area heterogeneous catalyst supports. We envisioned a catalyst system that can bridge homogeneous and heterogeneous catalysis. Postsynthetic surface modifica...
Introducing parallelism to histogramming functions for GEM systems
NASA Astrophysics Data System (ADS)
Krawczyk, Rafał D.; Czarski, Tomasz; Kolasinski, Piotr; Pozniak, Krzysztof T.; Linczuk, Maciej; Byszuk, Adrian; Chernyshova, Maryna; Juszczyk, Bartlomiej; Kasprowicz, Grzegorz; Wojenski, Andrzej; Zabolotny, Wojciech
2015-09-01
This article is an assessment of the potential parallelization of histogramming algorithms in a GEM detector system. Histogramming and preprocessing algorithms in MATLAB were analyzed with regard to adding parallelism. A preliminary implementation of parallel strip histogramming resulted in speedup. An analysis of algorithm parallelizability is presented, and an overview of potential hardware and software support for implementing the parallel algorithm is discussed.
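The article's implementation is in MATLAB; as a language-neutral illustration of per-strip parallel histogramming, here is a minimal sketch in which each detector strip is histogrammed as an independent task (strip count, bin count, ADC range, and the synthetic data are hypothetical):

```python
import numpy as np
from multiprocessing import Pool

N_STRIPS, N_BINS = 128, 256

def strip_histogram(args):
    """Histogram the charge samples of a single detector strip."""
    strip_id, samples = args
    counts, _ = np.histogram(samples, bins=N_BINS, range=(0, 4096))
    return strip_id, counts

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Hypothetical raw data: one array of ADC samples per strip
    data = [(s, rng.integers(0, 4096, 10_000)) for s in range(N_STRIPS)]
    with Pool() as pool:                   # one independent task per strip
        results = pool.map(strip_histogram, data)
    results.sort(key=lambda r: r[0])       # restore strip order
    hist = np.vstack([counts for _, counts in results])
    print(hist.shape)                      # -> (128, 256)
```

Because the strips share no state, this decomposition scales with the number of workers, which is the property that makes strip histogramming a natural first target for parallelization.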
Two-Dimensional Homogeneous Fermi Gases
NASA Astrophysics Data System (ADS)
Hueck, Klaus; Luick, Niclas; Sobirey, Lennart; Siegl, Jonas; Lompe, Thomas; Moritz, Henning
2018-02-01
We report on the experimental realization of homogeneous two-dimensional (2D) Fermi gases trapped in a box potential. In contrast to harmonically trapped gases, these homogeneous 2D systems are ideally suited to probe local as well as nonlocal properties of strongly interacting many-body systems. As a first benchmark experiment, we use a local probe to measure the density of a noninteracting 2D Fermi gas as a function of the chemical potential and find excellent agreement with the corresponding equation of state. We then perform matter wave focusing to extract the momentum distribution of the system and directly observe Pauli blocking in a near unity occupation of momentum states. Finally, we measure the momentum distribution of an interacting homogeneous 2D gas in the crossover between attractively interacting fermions and bosonic dimers.
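For reference, the equation of state of the ideal (noninteracting) 2D Fermi gas relates density to chemical potential as n = (m k_B T / 2πħ²) ln(1 + e^{μ/k_B T}) per spin state. A minimal sketch evaluating it; the atomic species (⁶Li, common in such experiments) and temperature are assumptions:

```python
import numpy as np

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J/K
m = 9.988e-27            # kg, mass of 6Li (assumed species)

def density_2d_ideal(mu, T):
    """Ideal 2D Fermi gas EOS (per spin state):
    n = (m kB T / 2 pi hbar^2) * ln(1 + exp(mu / kB T))."""
    return m * kB * T / (2 * np.pi * hbar**2) * np.log1p(np.exp(mu / (kB * T)))

T = 60e-9                                 # 60 nK, hypothetical
mu = np.linspace(-2, 10, 5) * kB * T      # chemical potentials in kB*T units
print(density_2d_ideal(mu, T))            # atoms per m^2
```

Comparing a locally measured density against this curve as a function of μ is the kind of benchmark check described above for the noninteracting gas.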
Storage lipid biosynthesis in microspore-derived Brassica napus embryos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, D.C.; Underhill, E.W.; Weber, N.
1989-04-01
Erucic acid, a fatty acid which is confined to the neutral lipids in developing seed cotyledons of rape, was chosen as a marker to study triacylglycerol (TAG) biosynthesis in a Brassica napus L. cv Reston microspore-derived embryo culture system. Accumulation and changes in acyl composition of TAGs during embryogenesis strongly paralleled those observed during seed development. Homogenates of 29-day cultured embryos were examined for the ability to incorporate erucoyl moieties into storage lipids. In the presence of [14C]erucoyl-CoA and various acceptors, including glycerol-3-phosphate (G3P), [14C]erucic acid was rapidly incorporated into the TAG fraction. However, in contrast to studies with [14C]oleoyl-CoA, there was no measurable radioactivity in any Kennedy pathway intermediates or within membrane lipid components. Analysis of the radiolabelled TAG species suggested that erucoyl moieties were incorporated into the sn-3 position by a highly active diacylglycerol acyltransferase.
Control and protection system for paralleled modular static inverter-converter systems
NASA Technical Reports Server (NTRS)
Birchenough, A. G.; Gourash, F.
1973-01-01
A control and protection system was developed for use with a paralleled 2.5-kWe-per-module static inverter-converter system. The control and protection system senses internal and external fault parameters such as voltage, frequency, current, and paralleling current unbalance. A logic system controls contactors to isolate defective power conditioners or loads. The system sequences contactor operation to automatically control parallel operation, startup, and fault isolation. Transient overload protection and fault checking sequences are included. The operation and performance of a control and protection system, with detailed circuit descriptions, are presented.
NASA Technical Reports Server (NTRS)
Dongarra, Jack (Editor); Messina, Paul (Editor); Sorensen, Danny C. (Editor); Voigt, Robert G. (Editor)
1990-01-01
Attention is given to such topics as an evaluation of block algorithm variants in LAPACK, a large-grain parallel sparse system solver, a multiprocessor method for the solution of the generalized eigenvalue problem on an interval, and a parallel QR algorithm for iterative subspace methods on the CM2. A discussion of numerical methods includes the topics of asynchronous numerical solutions of PDEs on parallel computers, parallel homotopy curve tracking on a hypercube, and solving Navier-Stokes equations on the Cedar Multi-Cluster system. A section on differential equations includes a discussion of a six-color procedure for the parallel solution of elliptic systems using the finite quadtree structure, data-parallel algorithms for the finite element method, and domain decomposition methods in aerodynamics. Topics dealing with massively parallel computing include hypercubes vs. 2-dimensional meshes and massively parallel computation of conservation laws. Performance and tools are also discussed.
NASA Astrophysics Data System (ADS)
Shi, Sheng-bing; Chen, Zhen-xing; Qin, Shao-gang; Song, Chun-yan; Jiang, Yun-hong
2014-09-01
Modern photoelectric equipment integrates visible, infrared, laser and other subsystems, and its degree of integration, information content and complexity are higher than in the past. The parallelism and jumpiness (angular jitter) of the optical axes are important performance characteristics of photoelectric equipment that directly affect aiming, ranging and orientation. Optical-axis jumpiness directly affects the hit precision of precision point-damage weapons, yet equipment for testing this property has been lacking. In this paper, a test system for measuring the parallelism and jumpiness of optical axes is devised. Accurate aiming is unnecessary, and the data processing is fully digital during parallelism testing. The system can directly test the parallelism of multiple axes, of the aiming axis and the laser emission axis, and of the laser emission axis and the laser receiving axis, and it is the first to measure the optical-axis jumpiness of an optical sighting device. It is thus a universal test system.
The Galley Parallel File System
NASA Technical Reports Server (NTRS)
Nieuwejaar, Nils; Kotz, David
1996-01-01
Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.
JSD: Parallel Job Accounting on the IBM SP2
NASA Technical Reports Server (NTRS)
Saphir, William; Jones, James Patton; Walter, Howard (Technical Monitor)
1995-01-01
The IBM SP2 is one of the most promising parallel computers for scientific supercomputing - it is fast and usually reliable. One of its biggest problems is a lack of robust and comprehensive system software. Among other things, this software allows a collection of Unix processes to be treated as a single parallel application. It does not, however, provide accounting for parallel jobs other than what is provided by AIX for the individual process components. Without parallel job accounting, it is not possible to monitor system use, measure the effectiveness of system administration strategies, or identify system bottlenecks. To address this problem, we have written jsd, a daemon that collects accounting data for parallel jobs. jsd records information in a format that is easily machine- and human-readable, allowing us to extract the most important accounting information with very little effort. jsd also notifies system administrators in certain cases of system failure.
Photonic content-addressable memory system that uses a parallel-readout optical disk
NASA Astrophysics Data System (ADS)
Krishnamoorthy, Ashok V.; Marchand, Philippe J.; Yayla, Gökçe; Esener, Sadik C.
1995-11-01
We describe a high-performance associative-memory system that can be implemented by means of an optical disk modified for parallel readout and a custom-designed silicon integrated circuit with parallel optical input. The system can achieve associative recall on 128 × 128 bit images and also on variable-size subimages. The system's behavior and performance are evaluated on the basis of experimental results on a motionless-head parallel-readout optical-disk system, logic simulations of the very-large-scale integrated chip, and a software emulation of the overall system.
RAMA: A file system for massively parallel computers
NASA Technical Reports Server (NTRS)
Miller, Ethan L.; Katz, Randy H.
1993-01-01
This paper describes a file system design for massively parallel computers which makes very efficient use of a few disks per processor. This overcomes the traditional I/O bottleneck of massively parallel machines by storing the data on disks within the high-speed interconnection network. In addition, the file system, called RAMA, requires little inter-node synchronization, removing another common bottleneck in parallel processor file systems. Support for a large tertiary storage system can easily be integrated into the file system; in fact, RAMA runs most efficiently when tertiary storage is used.
Özkal, Can Burak; Frontistis, Zacharias; Antonopoulou, Maria; Konstantinou, Ioannis; Mantzavinos, Dionissios; Meriç, Süreyya
2017-10-01
Photocatalytic degradation of the antibiotic sulfamethoxazole (SMX) has been studied under recycling batch and homogeneous flow conditions in a thin-film-coated immobilized system, namely a parallel-plate (PPL) reactor. The experiments were designed and statistically evaluated with a factorial design (FD) approach intended to provide a mathematical model that takes into account the parameters influencing process performance. Initial antibiotic concentration, UV energy level, irradiated surface area, water matrix (ultrapure and secondary treated wastewater) and time were defined as model parameters. The full 2^5 experimental design consisted of 32 randomized experiments. PPL reactor test experiments were carried out in order to set boundary levels for the hydraulic, volumetric and defined process parameters. TTIP-based thin films with polyethylene glycol + TiO2 additives were fabricated according to the previously described methodology. Antibiotic degradation was monitored by high-performance liquid chromatography analysis, while the degradation products were identified by LC-TOF-MS analysis. Acute toxicity of untreated and treated SMX solutions was tested by the standard Daphnia magna method. Based on the obtained mathematical model, the response of the immobilized PC system is described with a polynomial equation. The statistically significant positive effects are initial SMX concentration, process time and the combined effect of both, while the combined effect of water matrix and irradiated surface area displays an adverse effect on the rate of antibiotic degradation by photocatalytic oxidation. The process efficiency and the validity of the acquired mathematical model were also verified for the levofloxacin and cefaclor antibiotics. Immobilized PC degradation in the PPL reactor configuration was found capable of providing reduced effluent toxicity by simultaneous degradation of the SMX parent compound and TBPs. Copyright © 2017. Published by Elsevier B.V.
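For illustration, a coded 2^5 factorial model of the kind described can be fitted by ordinary least squares. The sketch below uses placeholder responses and hypothetical factor names, not the paper's data.

```python
import itertools
import numpy as np

factors = ["C0", "UV", "area", "matrix", "time"]              # 5 coded factors
runs = np.array(list(itertools.product([-1, 1], repeat=5)))   # full 2^5 = 32 runs
y = np.random.default_rng(1).normal(50.0, 5.0, size=32)       # placeholder responses

cols, names = [np.ones(32)], ["mean"]
for i, f in enumerate(factors):                               # main effects
    cols.append(runs[:, i]); names.append(f)
for i, j in itertools.combinations(range(5), 2):              # two-factor interactions
    cols.append(runs[:, i] * runs[:, j]); names.append(factors[i] + "*" + factors[j])

X = np.column_stack(cols)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)                  # polynomial coefficients
print(dict(zip(names, np.round(beta, 2))))
```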
Womack, James C; Anton, Lucian; Dziedzic, Jacek; Hasnip, Phil J; Probert, Matt I J; Skylaris, Chris-Kriton
2018-03-13
The solution of the Poisson equation is a crucial step in electronic structure calculations, yielding the electrostatic potential, a key component of the quantum mechanical Hamiltonian. In recent decades, theoretical advances and increases in computer performance have made it possible to simulate the electronic structure of extended systems in complex environments. This requires the solution of more complicated variants of the Poisson equation, featuring nonhomogeneous dielectric permittivities, ionic concentrations with nonlinear dependencies, and diverse boundary conditions. The analytic solutions generally used to solve the Poisson equation in vacuum (or with homogeneous permittivity) are not applicable in these circumstances, and numerical methods must be used. In this work, we present DL_MG, a flexible, scalable, and accurate solver library, developed specifically to tackle the challenges of solving the Poisson equation in modern large-scale electronic structure calculations on parallel computers. Our solver is based on the multigrid approach and uses an iterative high-order defect correction method to improve the accuracy of solutions. Using two chemically relevant model systems, we tested the accuracy and computational performance of DL_MG when solving the generalized Poisson and Poisson-Boltzmann equations, demonstrating excellent agreement with analytic solutions and efficient scaling to ~10^9 unknowns and hundreds of CPU cores. We also applied DL_MG in actual large-scale electronic structure calculations, using the ONETEP linear-scaling electronic structure package to study a 2615 atom protein-ligand complex with routinely available computational resources. In these calculations, the overall execution time with DL_MG was not significantly greater than the time required for calculations using a conventional FFT-based solver.
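The iterative high-order defect-correction idea mentioned in the abstract can be illustrated on a 1D Poisson problem: a cheap second-order operator is solved repeatedly against the residual of a fourth-order discretization. This is a minimal sketch of the general technique, not DL_MG's implementation (which also involves multigrid and 3D boundary conditions).

```python
import numpy as np

n, h = 63, 1.0 / 64
x = np.linspace(h, 1 - h, n)
f = -(np.pi ** 2) * np.sin(np.pi * x)        # rhs of u'' = f, zero Dirichlet BCs
u_exact = np.sin(np.pi * x)

# second-order finite-difference Laplacian: the cheap "approximate" operator
L2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h ** 2

def L4(u):
    """Fourth-order Laplacian; falls back to 2nd order next to the boundary."""
    out = L2 @ u
    out[2:-2] = (-u[:-4] + 16 * u[1:-3] - 30 * u[2:-2]
                 + 16 * u[3:-1] - u[4:]) / (12 * h ** 2)
    return out

u = np.linalg.solve(L2, f)                   # initial low-order solve
for _ in range(5):                           # defect-correction iterations
    u += np.linalg.solve(L2, f - L4(u))      # correct with the high-order residual
print("max error:", np.max(np.abs(u - u_exact)))
```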
Collectively loading an application in a parallel computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.
Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.
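A minimal sketch of this collective-loading pattern, assuming mpi4py; the subset rule, leader choice and file name ("app.bin") are placeholders, not the patented implementation.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
# identify a subset of nodes to execute the job (here: the first four ranks)
color = 0 if comm.rank < 4 else MPI.UNDEFINED
subset = comm.Split(color, key=comm.rank)

if subset != MPI.COMM_NULL:
    leader = 0                                # the "job leader" within the subset
    blob = None
    if subset.rank == leader:
        with open("app.bin", "rb") as fh:     # only the leader touches storage
            blob = fh.read()
    blob = subset.bcast(blob, root=leader)    # broadcast the application image
    # each subset node would now stage and execute the received image
```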
Numerical investigation of heat transfer in parallel channels with water at supercritical pressure.
Shitsi, Edward; Kofi Debrah, Seth; Yao Agbodemegbe, Vincent; Ampomah-Amoako, Emmanuel
2017-11-01
Thermal phenomena such as heat transfer enhancement, heat transfer deterioration, and flow instability, observed at supercritical pressures as a result of fluid property variations, have the potential to affect the safety of the design and operation of the Supercritical Water-cooled Reactor (SCWR), and also challenge the capabilities of both heat transfer correlations and computational fluid dynamics (CFD) physical models. These phenomena need to be thoroughly investigated. An experimental study was carried out by Xi to investigate flow instability in parallel channels at supercritical pressures under different mass flow rates, pressures, and axial power shapes; experimental data on flow instability at the inlet of the heated channels were obtained, but no heat transfer data along the axial length. This numerical study used the 3D numerical tool STAR-CCM+ to investigate heat transfer at supercritical pressures along the axial lengths of the water-filled parallel channels ahead of experimental data. A homogeneous axial power shape (HAPS) was adopted, and the heating powers used in this work were below the experimental threshold heating powers obtained for HAPS by Xi. The results show that the fluid centre-line temperature (FCLT) increased linearly below and above the pseudo-critical temperature (PCT) region but flattened at the PCT region for all the system parameters considered. The inlet temperature, heating power, pressure, gravity, and mass flow rate affect the wall temperature (WT) values in the normal heat transfer (NHT), enhanced heat transfer (EHT), deteriorated heat transfer (DHT), and recovery-from-DHT regions. While variation of all other system parameters in the EHT and PCT regions showed no significant difference in the WT and FCLT values, respectively, the WT and FCLT values increased with pressure in these regions. For most of the system parameters considered, the FCLT and WT values obtained in the two channels were nearly the same. The numerical study was not quantitatively compared with experimental data along the axial lengths of the parallel channels, but the numerical tool STAR-CCM+ was able to capture the trends for the NHT, EHT, DHT, and recovery-from-DHT regions. The heating powers used for the various simulations were below the experimentally observed threshold heating powers, yet heat transfer deterioration (HTD) was observed, confirming the previous finding that HTD can occur before the onset of unstable behaviour at supercritical pressures. For purposes of comparing the numerical simulations with experimental data, the temperature oscillations obtained at the outlet of the heated channels and the instability boundary results obtained at the inlet of the heated channels were compared. The numerical results agree quite well with the experimental data. This work calls for experimental data on heat transfer in parallel channels at supercritical pressures for the validation of similar numerical studies.
Heterogeneity Determination and Purification of Commercial Hen Egg-White Lysozyme
NASA Technical Reports Server (NTRS)
Thomas, B. R.; Vekilov, P. G.; Rosenberger, F.
1998-01-01
Hen egg-white lysozyme (HEWL) is widely used as a model protein, although its purity has not been adequately characterized by modern biochemical techniques. We have identified and quantified the protein heterogeneities in three commercial HEWL preparations by sodium dodecyl sulfate polyacrylamide gel electrophoresis with enhanced silver staining, reversed-phase fast protein liquid chromatography (FPLC) and immunoblotting with comparison to authentic protein standards. Depending on the source, the contaminating proteins totalled 1-6% (w/w) and consisted of ovotransferrin, ovalbumin, HEWL dimers, and polypeptides with approximate M_r of 39 and 18 kDa. Furthermore, we have obtained gram quantities of electrophoretically homogeneous [>99.9% (w/w)] HEWL by single-step semi-preparative scale cation-exchange FPLC with a yield of about 50%. Parallel studies of crystal growth kinetics, salt repartitioning and crystal perfection with this highly purified material showed fourfold increases in the growth-step velocities and significant enhancement in the structural homogeneity of HEWL crystals.
Adsorption of asymmetric rigid rods or heteronuclear diatomic molecules on homogeneous surfaces
NASA Astrophysics Data System (ADS)
Engl, W.; Courbin, L.; Panizza, P.
2004-10-01
We treat the adsorption on homogeneous surfaces of asymmetric rigid rods (for instance, heteronuclear diatomic molecules). We show that the n→0 vector spin formalism is well suited to describing such a problem. We establish an isomorphism between the coupling constants of the magnetic Hamiltonian and the adsorption parameters of the rigid rods. By solving this Hamiltonian within a mean-field approximation, we obtain analytical expressions for the densities of the different rod configurations and for both isotherm and isobar adsorption curves. The most probable configurations of the molecules (normal or parallel to the surface), which depend on temperature and energy parameters, are summarized in a diagram. We derive that the variation with temperature of Q_v, the heat of adsorption at constant volume, is a direct signature of a configuration change of the adsorbed molecules. We show that this formalism can be generalized to more complicated problems, such as the adsorption of mixtures of symmetric and asymmetric rigid rods, with or without interactions.
Quasi-heterogeneous efficient 3-D discrete ordinates CANDU calculations using Attila
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preeti, T.; Rulko, R.
2012-07-01
In this paper, 3-D quasi-heterogeneous large-scale parallel Attila calculations of a generic CANDU test problem, consisting of 42 complete fuel channels and a reactivity device perpendicular to the fuel, are presented. The solution method is that of discrete ordinates (SN), and the computational model is quasi-heterogeneous, i.e., the fuel bundle is partially homogenized into five homogeneous rings, consistently with the DRAGON code model used by the industry for incremental cross-section generation. The calculations used the HELIOS-generated 45 macroscopic cross-sections library. This approach to CANDU calculations has the following advantages: 1) it allows detailed bundle (and eventually channel) power calculations for each fuel ring in a bundle, 2) it allows an exact reactivity device representation for a precise reactivity worth calculation, and 3) it eliminates the need for incremental cross-sections. Our results are compared to the reference Monte Carlo MCNP solution. In addition, the performance of the Attila SN method in CANDU calculations characterized by significant up-scattering is discussed. (authors)
Trapping of diffusing particles by striped cylindrical surfaces. Boundary homogenization approach
Dagdug, Leonardo; Berezhkovskii, Alexander M.; Skvortsov, Alexei T.
2015-01-01
We study trapping of diffusing particles by a cylindrical surface formed by rolling a flat surface, containing alternating absorbing and reflecting stripes, into a tube. For an arbitrary stripe orientation with respect to the tube axis, this problem is intractable analytically because it requires dealing with non-uniform boundary conditions. To bypass this difficulty, we use a boundary homogenization approach which replaces the non-uniform boundary conditions on the tube wall by an effective uniform partially absorbing boundary condition with a properly chosen effective trapping rate. We demonstrate that the exact solution for the effective trapping rate, known for a flat, striped surface, works very well when this surface is rolled into a cylindrical tube. This is shown for both internal and external problems, where the particles diffuse inside and outside the striped tube, at three orientations of the stripe direction with respect to the tube axis: (a) perpendicular to the axis, (b) parallel to the axis, and (c) at an angle of π/4 to the axis. PMID:26093574
Automatic Control of the Concrete Mixture Homogeneity in Cycling Mixers
NASA Astrophysics Data System (ADS)
Anatoly Fedorovich, Tikhonov; Drozdov, Anatoly
2018-03-01
The article describes the factors affecting concrete mixture quality that are related to the moisture content of the aggregates, since the effectiveness of concrete mixture production is largely determined by the availability of quality-management tools at all stages of the technological process. It is established that unaccounted-for moisture in the aggregates adversely affects the homogeneity of the concrete mixture and, accordingly, the strength of building structures. A new control method and an automatic control system (ACS) for concrete mixture homogeneity during the mixing of components are proposed, in which the kneading-and-mixing machinery operates under continuous automatic control of homogeneity. The theoretical basis of the homogeneity control is presented; it relates homogeneity to a change in the frequency of the vibrodynamic oscillations of the mixer body. The structure of the technical means of the ACS for regulating the water supply is determined by the change in the concrete mixture homogeneity during continuous mixing of the components. The following technical means were chosen for the automatic control loop: vibro-acoustic sensors, remote terminal units, electropneumatic actuators, etc. To assess the control quality, a block diagram with transfer functions that determine the ACS operation in the transient dynamic mode is provided.
Borgoo, Alex; Teale, Andrew M; Tozer, David J
2012-01-21
Correlated electron densities, experimental ionisation potentials, and experimental electron affinities are used to investigate the homogeneity of the exchange-correlation and non-interacting kinetic energy functionals of Kohn-Sham density functional theory under density scaling. Results are presented for atoms and small molecules, paying attention to the influence of the integer discontinuity and the choice of the electron affinity. For the exchange-correlation functional, effective homogeneities are highly system-dependent on either side of the integer discontinuity. By contrast, the average homogeneity, associated with the potential that averages over the discontinuity, is generally close to 4/3 when the discontinuity is computed using positive affinities for systems that do bind an excess electron and negative affinities for those that do not. The proximity to 4/3 becomes increasingly pronounced with increasing atomic number. Evaluating the discontinuity using a zero affinity in systems that do not bind an excess electron instead leads to effective homogeneities on the electron-abundant side that are close to 4/3. For the non-interacting kinetic energy functional, the effective homogeneities are less system-dependent and the effect of the integer discontinuity is less pronounced. Average values are uniformly below 5/3. The study provides information that may aid the development of improved exchange-correlation and non-interacting kinetic energy functionals. © 2012 American Institute of Physics.
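For context, the quantity being probed is the standard effective homogeneity under density scaling. The relations below are textbook definitions consistent with the abstract, not formulas quoted from the paper.

```latex
% A functional homogeneous of degree k under density scaling satisfies
F[\rho_\lambda] = \lambda^{k} F[\rho], \qquad
\rho_\lambda(\mathbf{r}) = \lambda\,\rho(\mathbf{r}),
% and for an approximately homogeneous functional an effective degree
% follows from Euler's theorem:
k_{\mathrm{eff}}[\rho] \;=\;
  \frac{\displaystyle\int \rho(\mathbf{r})\,
        \frac{\delta F}{\delta \rho(\mathbf{r})}\,\mathrm{d}\mathbf{r}}
       {F[\rho]} .
```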
NASA Astrophysics Data System (ADS)
Shi, Wei; Hu, Xiaosong; Jin, Chao; Jiang, Jiuchun; Zhang, Yanru; Yip, Tony
2016-05-01
With the development and popularization of electric vehicles, it is urgent and necessary to develop effective management and diagnosis technology for battery systems. In this work, we design a parallel battery model, based on equivalent circuits for the parallel voltage and branch currents, to study the effects of imbalanced currents on parallel large-format LiFePO4/graphite battery systems. Taking a 60 Ah LiFePO4/graphite battery system manufactured by ATL (Amperex Technology Limited, China) as an example, the causes of imbalanced currents in the parallel connection are analyzed using our model, and the associated mechanisms affecting the long-term stability of each single battery are examined. Theoretical and experimental results show that continuously increasing imbalanced currents during cycling are mainly responsible for the capacity fade of LiFePO4/graphite parallel batteries. Suppressing variations of the branch currents is thus a good way to avoid fast performance fade of parallel battery systems.
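The branch-current split that drives the imbalance can be sketched with a simple internal-resistance (Rint) equivalent circuit; the parameter values below are illustrative assumptions, not the ATL cell data.

```python
def branch_currents(voc, r, i_total):
    """Current split among parallel cells (Rint model).
    voc[k], r[k]: open-circuit voltage and internal resistance of cell k."""
    g = [1.0 / rk for rk in r]                           # branch conductances
    # common terminal voltage from Kirchhoff: sum_k (voc_k - v) / r_k = i_total
    v = (sum(vk * gk for vk, gk in zip(voc, g)) - i_total) / sum(g)
    return [(vk - v) * gk for vk, gk in zip(voc, g)]

# two slightly mismatched cells sharing a 60 A discharge (illustrative numbers)
print(branch_currents([3.30, 3.28], [0.0020, 0.0025], 60.0))
```

With these numbers the lower-resistance cell carries roughly 38 A of the 60 A load, illustrating how small parameter mismatches concentrate current in one branch.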
A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures.
Neylon, J; Sheng, K; Yu, V; Chen, Q; Low, D A; Kupelian, P; Santhanam, A
2014-10-01
Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria, respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogeneous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.
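The 2%/2 mm gamma criterion used for the dose comparison follows the standard gamma-index definition; the sketch below is a 1D toy illustration of that formula, not the authors' GPU code.

```python
import numpy as np

def gamma_index(dose_eval, dose_ref, coords, dd=0.02, dta=2.0):
    """1D gamma analysis: gamma <= 1 means the point passes the dd/dta criteria."""
    gammas = np.empty_like(dose_ref)
    d_max = dose_ref.max()
    for i, (x_r, d_r) in enumerate(zip(coords, dose_ref)):
        dist = (coords - x_r) / dta                  # distance-to-agreement term (mm)
        diff = (dose_eval - d_r) / (dd * d_max)      # dose-difference term
        gammas[i] = np.sqrt(dist ** 2 + diff ** 2).min()  # minimize over eval points
    return gammas

x = np.linspace(0.0, 100.0, 201)                     # positions in mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)              # toy reference profile
ev = np.exp(-((x - 50.5) / 20.0) ** 2)               # slightly shifted evaluation
print("pass rate:", (gamma_index(ev, ref, x) <= 1.0).mean())
```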
Catalytic combustion of hydrogen-air mixtures in stagnation flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ikeda, H.; Libby, P.A.; Williams, F.A.
1993-04-01
The interaction between heterogeneous and homogeneous reactions arising when a mixture of hydrogen and air impinges on a platinum plate at elevated temperature is studied. A reasonably complete description of the kinetic mechanism for homogeneous reactions is employed along with a simplified model for heterogeneous reactions. Four regimes are identified depending on the temperature of the plate, on the rate of strain imposed on the flow adjacent to the plate, and on the composition and temperature of the reactant stream: (1) surface reaction alone; (2) surface reaction inhibiting homogeneous reaction; (3) homogeneous reaction inhibiting surface reaction; and (4) homogeneous reaction alone. These regimes are related to those found earlier for other chemical systems and form the basis of future experimental investigation of the chemical system considered in the present study.
NASA Astrophysics Data System (ADS)
Sharifzadeh, M.; Hashemabadi, S. H.; Afarideh, H.; Khalafi, H.
2018-02-01
The problem of how to accurately measure multiphase flow in the oil/gas industry has remained an important issue since the early 1980s, and oil-water two-phase flow rate measurement in particular has been regarded as an important problem. Gamma-ray attenuation is one of the most commonly used methods for phase fraction measurement, but it is entirely dependent on flow regime variations. The strategy applied here for removing the regime-dependency problem is to use a homogenization system as a preconditioning tool, as this work demonstrates. First, a two-phase flow homogenizer loop (TPFHL) is introduced and verified by a quantitative assessment. Following this, a static-equivalent multiphase flow (SEMPF) system with an additional capability for preparing a uniform mixture is described; the idea proposed in this system was verified by Monte Carlo simulations. Finally, different water-gas oil two-phase volume fractions were fed to the homogenizer loop and injected into the static-equivalent system. A comparison between the performance of these two systems using the gamma-ray attenuation technique showed not only an extra ability to prepare a homogenized mixture but also remarkably increased measurement accuracy for the static-equivalent system.
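The regime dependence the homogenizer is meant to remove enters through the standard gamma densitometry relation; the expressions below are the generic textbook form of the technique, with symbols chosen here for illustration rather than taken from the paper.

```latex
% Beer-Lambert attenuation of a collimated beam through the pipe:
I = I_0\, e^{-\mu x},
% and, for a two-phase mixture, the phase fraction follows from calibrated
% full-liquid and full-gas readings I_l and I_g:
\alpha \;=\; \frac{\ln\!\left(I_m / I_l\right)}{\ln\!\left(I_g / I_l\right)},
% which holds cleanly only if the phases are uniformly mixed along the beam
% path - hence the benefit of homogenizing the flow upstream of the sensor.
```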
Fluctuations of local electric field and dipole moments in water between metal walls.
Takae, Kyohei; Onuki, Akira
2015-10-21
We examine the thermal fluctuations of the local electric field E_k^{loc} and the dipole moment μ_k in liquid water at T = 298 K between metal walls in an electric field applied in the perpendicular direction. We use analytic theory and molecular dynamics simulation. In this situation, there is a global electrostatic coupling between the surface charges on the walls and the polarization in the bulk. Then, the correlation function of the polarization density p_z(r) along the applied field contains a homogeneous part inversely proportional to the cell volume V. Accounting for the long-range dipolar interaction, we derive the Kirkwood-Fröhlich formula for the polarization fluctuations when the specimen volume v is much smaller than V. However, when v/V is not small, the homogeneous part comes into play in the dielectric relations. We also calculate the distribution of E_k^{loc} in an applied field. As a unique feature of water, its magnitude |E_k^{loc}| obeys a Gaussian distribution with a large mean value E_0 ≅ 17 V/nm, which arises mainly from the surrounding hydrogen-bonded molecules. Since |μ_k| E_0 ~ 30 k_B T, μ_k becomes mostly parallel to E_k^{loc}. As a result, the orientation distributions of these two vectors nearly coincide, assuming the classical exponential form. In dynamics, the component of μ_k(t) parallel to E_k^{loc}(t) changes on the time scale of the hydrogen bonds, ~5 ps, while its smaller perpendicular component undergoes librational motions on time scales of ~0.01 ps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
García-Melchor, Max; Vilella, Laia; López, Núria
2016-04-29
An attractive strategy to improve the performance of water oxidation catalysts would be to anchor a homogeneous molecular catalyst on a heterogeneous solid surface to create a hybrid catalyst. The idea of this combined system is to take advantage of the individual properties of each of the two catalyst components. We use Density Functional Theory to determine the stability and activity of a model hybrid water oxidation catalyst consisting of a dimeric Ir complex attached to the IrO2(110) surface through two oxygen atoms. We find that homogeneous catalysts can be bound to this oxide matrix without losing significant activity. Hence, designing hybrid systems that benefit from both the high tunability of activity of homogeneous catalysts and the stability of heterogeneous systems seems feasible.
pcircle - A Suite of Scalable Parallel File System Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
WANG, FEIYI
2015-10-01
Most software related to file systems is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The pcircle software builds on top of the ubiquitous MPI in a cluster computing environment and the "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, and integrity checking.
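The parallel checksumming idea can be sketched in a few lines. pcircle itself is MPI-based with work stealing, so this shared-memory toy (chunk size, hash choice and file handling are all assumptions) only illustrates the per-chunk decomposition.

```python
import hashlib
import os
from multiprocessing import Pool

CHUNK = 4 * 1024 * 1024  # 4 MiB per work item (assumed)

def chunk_sum(args):
    path, offset = args
    with open(path, "rb") as fh:
        fh.seek(offset)
        return offset, hashlib.sha1(fh.read(CHUNK)).hexdigest()

def file_signature(path):
    size = os.path.getsize(path)
    tasks = [(path, off) for off in range(0, size, CHUNK)]
    with Pool() as pool:                           # chunks are hashed in parallel
        sums = sorted(pool.map(chunk_sum, tasks))  # deterministic ordering
    combined = "".join(s for _, s in sums).encode()
    return hashlib.sha1(combined).hexdigest()      # signature of the whole file
```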
NASA Technical Reports Server (NTRS)
Wang, P.; Li, P.
1998-01-01
A high-resolution numerical study of three-dimensional, time-dependent, thermal convective flows on parallel systems is reported. A parallel implementation of the finite volume method with a multigrid scheme is discussed, and a parallel visualization system for visualizing the flow is developed on distributed systems.
AdiosStMan: Parallelizing Casacore Table Data System using Adaptive IO System
NASA Astrophysics Data System (ADS)
Wang, R.; Harris, C.; Wicenec, A.
2016-07-01
In this paper, we investigate the Casacore Table Data System (CTDS) used in the casacore and CASA libraries, and methods to parallelize it. CTDS provides a storage manager plugin mechanism for third-party developers to design and implement their own CTDS storage managers. With this in mind, we looked into various storage backend techniques that could enable parallel I/O for CTDS through new storage managers. After carrying out benchmarks showing the excellent parallel I/O throughput of the Adaptive IO System (ADIOS), we implemented an ADIOS-based parallel CTDS storage manager. We then applied the CASA MSTransform frequency split task to verify the ADIOS storage manager. We also ran a series of performance tests to examine the I/O throughput in a massively parallel scenario.
Direct Images, Fields of Hilbert Spaces, and Geometric Quantization
NASA Astrophysics Data System (ADS)
Lempert, László; Szőke, Róbert
2014-04-01
Geometric quantization often produces not one Hilbert space to represent the quantum states of a classical system but a whole family H_s of Hilbert spaces, and the question arises whether the spaces H_s are canonically isomorphic. Axelrod et al. (J. Diff. Geo. 33:787-902, 1991) and Hitchin (Commun. Math. Phys. 131:347-380, 1990) suggest viewing H_s as fibers of a Hilbert bundle H, introduce a connection on H, and use parallel transport to identify different fibers. Here we explore to what extent this can be done. First we introduce the notion of smooth and analytic fields of Hilbert spaces, and prove that if an analytic field over a simply connected base is flat, then it corresponds to a Hermitian Hilbert bundle with a flat connection and path-independent parallel transport. Second we address a general direct image problem in complex geometry: pushing forward a Hermitian holomorphic vector bundle along a non-proper map. We give criteria for the direct image to be a smooth field of Hilbert spaces. Third we consider quantizing an analytic Riemannian manifold M by endowing TM with the family of adapted Kähler structures from Lempert and Szőke (Bull. Lond. Math. Soc. 44:367-374, 2012). This leads to a direct image problem. When M is homogeneous, we prove the direct image is an analytic field of Hilbert spaces. For certain such M, but not all, the direct image is even flat, which means that in those cases quantization is unique.
Homogeneity and internal defect detection of infrared Se-based chalcogenide glass
NASA Astrophysics Data System (ADS)
Li, Zupan; Wu, Ligang; Lin, Changgui; Song, Bao'an; Wang, Xunsi; Shen, Xiang; Dai, Shixun
2011-10-01
Ge-Sb-Se chalcogenide glasses are excellent infrared optical materials; they are environmentally friendly and widely used in infrared thermal imaging systems. However, because Se-based glasses are opaque in the visible spectral region, their homogeneity and internal defects cannot be measured as they are for common oxide glasses. In this study, a method for observing the homogeneity and internal defects of these glasses based on the near-IR imaging technique is proposed, and an effective measurement system is constructed. The test results indicate that the method gives clear and intuitive information on the homogeneity and internal defects of infrared Se-based chalcogenide glass.
NAS Requirements Checklist for Job Queuing/Scheduling Software
NASA Technical Reports Server (NTRS)
Jones, James Patton
1996-01-01
The increasing reliability of parallel systems and clusters of computers has resulted in these systems becoming more attractive for true production workloads. Today, the primary obstacle to production use of clusters of computers is the lack of a functional and robust Job Management System for parallel applications. This document provides a checklist of NAS requirements for job queuing and scheduling in order to make most efficient use of parallel systems and clusters for parallel applications. Future requirements are also identified to assist software vendors with design planning.
Template based parallel checkpointing in a massively parallel computer system
Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN
2009-01-13
A method and apparatus for a template-based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored, for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high-speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
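A toy version of the template comparison might look like the following; block size, hash and compression choices are illustrative assumptions, not the patent's specification.

```python
import hashlib
import zlib

BLOCK = 64 * 1024  # assumed checkpoint block size

def block_sums(data: bytes):
    return [hashlib.md5(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def delta_checkpoint(node_state: bytes, template: bytes):
    """Save only the blocks whose checksums differ from the template."""
    t_sums = block_sums(template)
    delta = []
    for i, s in enumerate(block_sums(node_state)):
        if i >= len(t_sums) or s != t_sums[i]:
            block = node_state[i * BLOCK:(i + 1) * BLOCK]
            delta.append((i, zlib.compress(block)))   # non-lossy compression
    return delta  # typically far smaller than the full node state
```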
Linear and nonlinear spectroscopy from quantum master equations.
Fetherolf, Jonathan H; Berkelbach, Timothy C
2017-12-28
We investigate the accuracy of the second-order time-convolutionless (TCL2) quantum master equation for the calculation of linear and nonlinear spectroscopies of multichromophore systems. We show that even for systems with non-adiabatic coupling, the TCL2 master equation predicts linear absorption spectra that are accurate over an extremely broad range of parameters and well beyond what would be expected based on the perturbative nature of the approach; non-equilibrium population dynamics calculated with TCL2 for identical parameters are significantly less accurate. For third-order (two-dimensional) spectroscopy, the importance of population dynamics and the violation of the so-called quantum regression theorem degrade the accuracy of TCL2 dynamics. To correct these failures, we combine the TCL2 approach with a classical ensemble sampling of slow microscopic bath degrees of freedom, leading to an efficient hybrid quantum-classical scheme that displays excellent accuracy over a wide range of parameters. In the spectroscopic setting, the success of such a hybrid scheme can be understood through its separate treatment of homogeneous and inhomogeneous broadening. Importantly, the presented approach has the computational scaling of TCL2, with the modest addition of an embarrassingly parallel prefactor associated with ensemble sampling. The presented approach can be understood as a generalized inhomogeneous cumulant expansion technique, capable of treating multilevel systems with non-adiabatic dynamics.
ERIC Educational Resources Information Center
Blakley, G. R.
1982-01-01
Reviews mathematical techniques for solving systems of homogeneous linear equations and demonstrates that the algebraic method of balancing chemical equations is a matter of solving a system of homogeneous linear equations. FORTRAN programs applying this matrix method to chemical equation balancing are available from the author. (JN)
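As a concrete illustration of the matrix method (my example, not from the article): balancing CH4 + O2 -> CO2 + H2O amounts to finding the null space of the element-composition matrix.

```python
import numpy as np

# columns: CH4, O2, CO2, H2O; rows: C, H, O (products entered with minus signs)
A = np.array([[1, 0, -1,  0],    # carbon balance
              [4, 0,  0, -2],    # hydrogen balance
              [0, 2, -2, -1]])   # oxygen balance

_, _, vt = np.linalg.svd(A)
coeffs = vt[-1]                          # spans the one-dimensional null space
coeffs = coeffs / np.abs(coeffs).min()   # scale the smallest coefficient to 1
coeffs *= np.sign(coeffs[0])             # fix the overall sign
print(np.round(coeffs, 6))               # -> [1, 2, 1, 2]: CH4 + 2 O2 -> CO2 + 2 H2O
```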
Konduru, Prashanti B; Vaidya, Prakash D; Kenig, Eugeny Y
2010-03-15
N,N-Diethylethanolamine (DEEA) is a very promising absorbent for CO(2) removal from gaseous streams, as it can be prepared from renewable resources. Aqueous mixtures of DEEA and piperazine (PZ) are attractive for enhanced CO(2) capture, owing to the high CO(2) loading capacity of DEEA and the high reactivity of PZ. In the present work, for the first time, the equilibrium and kinetic characteristics of the CO(2) reaction with such mixtures were considered. Kinetic data were obtained experimentally using a stirred cell reactor. These data were interpreted using a homogeneous activation mechanism, in which the investigated reaction is treated as a reaction between CO(2) and DEEA in parallel with the reaction of CO(2) with PZ. It is found that, in the studied range of temperatures, 298-308 K, and overall amine concentrations, 2.1-2.5 kmol/m(3), this reaction system belongs to the fast pseudo-first-order reaction regime. The second-order rate constant for the CO(2) reaction with PZ was determined from the absorption rate measurements in the activated DEEA solutions, and its value at 303 K was found to be 24,450 m(3)/(kmol s).
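Under the parallel-reaction interpretation described, the observed pseudo-first-order rate constant is additive in the two amine contributions. The relations below are the standard form of this analysis, not equations quoted from the paper.

```latex
k_{\mathrm{obs}} \;=\; k_{2,\mathrm{DEEA}}\,[\mathrm{DEEA}]
                 \;+\; k_{2,\mathrm{PZ}}\,[\mathrm{PZ}],
% and in the fast pseudo-first-order regime the specific absorption rate is
R_{\mathrm{CO_2}} \;=\; C^{*}_{\mathrm{CO_2}}\,
                  \sqrt{k_{\mathrm{obs}}\, D_{\mathrm{CO_2}}},
% where C* is the interfacial CO2 solubility and D its liquid-phase diffusivity.
```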
NASA Astrophysics Data System (ADS)
Abdelmalak, Mansour M.; Geoffroy, Laurent; Angelier, Jacques; Bonin, Bernard; Callot, Jean-Paul; Gelard, Jean-Pierre; Aubourg, Charles
2015-04-01
We characterize and map the stress fields acting during plate breakup along the West Greenland volcanic margin. Interpolated stress fields are based on an inversion of fault-slip data sets and magma-driven fractures, crosscutting mainly an exposed inner seaward-dipping basaltic wedge (i.e., SDRi) segmented along-strike, with differently oriented segments. We identify two distinct tectonic episodes, P1 and P2, which are both syn-magmatic and purely extensional. P1 probably acted as early as the Late Palaeocene. This stress field was at first homogeneous, with the minimum principal stress σ3 trending ~N060E, defining a P1A stage. During development of the SDRi, σ3 locally reoriented to become orthogonal to each margin segment (P1B). P1 is coeval with lithosphere breakup and is associated with an extension orthogonal to the Labrador-Baffin axis, which is inherited from the Mesozoic. The P1-related dykes constitute a homogeneous HKTP (high-K-Ti-P) suite. This suite displays alkaline affinities and is rich in both LILE and HFSE. A regional and radical change of σ3 to a ~NS trend took place during P2. The P1-P2 transition occurred at ~56-54 Ma, i.e., during magnetic Chron C24R. P2 is associated with only minor extension, and σ3 runs parallel to the North American (NAM)/Greenland kinematic displacement vector. The dykes associated with P2 are quite different and constitute a less homogeneous LKTP (low-K-Ti-P) suite. This suite is less rich in LILE, yielding poorly fractionated chondrite-normalized REE patterns and HFSE contents similar to E-MORB, with slight U-Th and P positive anomalies. We establish therefore that the minimum horizontal stress σ3 for P1 and P2 is parallel to the relative displacement of Greenland with respect to NAM but not to its absolute displacement during the Tertiary. Taking into account these results as well as the variations in magma chemistry from P1 to P2, we suggest that tectonic stresses at a volcanic margin could arise from the local dynamics of the melting mantle.
Using Parallel Processing for Problem Solving.
1979-12-01
Activities are the basic parallel processing primitive. Different goals of the system can be pursued in parallel by placing them in separate activities. Language primitives are provided for manipulating running activities. Viewpoints are a generalization of contexts.
Multi-aperture microoptical system for close-up imaging
NASA Astrophysics Data System (ADS)
Berlich, René; Brückner, Andreas; Leitel, Robert; Oberdörster, Alexander; Wippermann, Frank; Bräuer, Andreas
2014-09-01
Modern applications in biomedical imaging, machine vision and security engineering require close-up optical systems with high resolution. Combined with the need for miniaturization and fast image acquisition of extended object fields, the design and fabrication of such devices is extremely challenging. Standard commercial imaging solutions rely on bulky setups or depend on scanning techniques in order to meet the stringent requirements. Recently, our group proposed a novel multi-aperture approach based on parallel image transfer in order to overcome these constraints. It exploits state-of-the-art microoptical manufacturing techniques on wafer level to create a compact, cost-effective system with a large field of view. However, initial prototypes have so far been subject to various limitations regarding their manufacturing, reliability and applicability. In this work, we demonstrate the optical design and fabrication of an advanced system which overcomes these restrictions. In particular, a revised optical design facilitates a more efficient and economical fabrication process and inherently improves system reliability. An additional customized front-side illumination module provides homogeneous white-light illumination over the entire field of view while maintaining a high degree of compactness. Moreover, the complete imaging assembly is mounted on a positioning system. In combination with an extended working range, this allows for adjustment of the system's focus location. The final optical design is capable of capturing an object field of 36 x 24 mm2 with a resolution of 150 cycles/mm. Finally, we present experimental results from the corresponding prototype that demonstrate its enhanced capabilities.
ERIC Educational Resources Information Center
Clycq, Noel
2017-01-01
Education systems are crucial social and cultural apparatuses. They are designed to homogenize, at least to a large extent, the discourses and praxis of the citizens of a nation by channelling them as much as possible through a unified educational system. However, in ethnically and culturally diversified societies, these homogenizing social…
Parallel Signal Processing and System Simulation using aCe
NASA Technical Reports Server (NTRS)
Dorband, John E.; Aburdene, Maurice F.
2003-01-01
Recently, networked and cluster computation have become very popular for both signal processing and system simulation. The new aCe language is ideally suited for parallel signal processing applications and system simulation, since it allows the programmer to explicitly express the computations that can be performed concurrently. In addition, this new C-based parallel language for architecture-adaptive programming allows programmers to implement algorithms and system simulation applications on parallel architectures with the assurance that future parallel architectures will be able to run their applications with a minimum of modification. In this paper, we focus on some fundamental features of aCe and present a signal processing application (FFT).
Investigation of the line arrangement of 2D resistivity surveys for 3D inversion
NASA Astrophysics Data System (ADS)
Inoue, Keisuke; Nakazato, Hiroomi; Takeuchi, Mutsuo; Sugimoto, Yoshihiro; Kim, Hee Joon; Yoshisako, Hiroshi; Konno, Michiaki; Shoda, Daisuke
2018-03-01
We have conducted numerical and field experiments to investigate the applicability of electrode configurations and line layouts commonly used for two-dimensional (2D) resistivity surveys to 3D inversion. We examined three kinds of electrode configurations and two types of line arrangements, for 16 resistivity models of a conductive body in a homogeneous half-space. The results of the numerical experiment revealed that the parallel-line arrangement was effective in identifying the approximate location of the conductive body. The orthogonal-line arrangement was optimal for identifying a target body near the line intersection. As a result, we propose that parallel lines are useful to highlight areas of particular interest where further detailed work with an intersecting line could be carried out. In the field experiment, 2D resistivity data were measured on a loam layer with a backfilled pit. The reconstructed resistivity image derived from parallel-line data showed a low-resistivity portion near the backfilled pit. When an orthogonal line was added to the parallel lines, the newly estimated location of the backfilled pit coincided well with the actual location. In a further field application, we collected several 2D resistivity datasets in the Nojima Fault area in Awaji Island. The 3D inversion of these datasets provided a resistivity distribution corresponding to the geological structure. In particular, the Nojima Fault was imaged as the western boundary of a low-resistivity belt, from only two orthogonal lines.
NASA Astrophysics Data System (ADS)
Moreto, Jose; Liu, Xiaofeng
2017-11-01
The accuracy of the Rotating Parallel Ray omnidirectional integration for pressure reconstruction from the measured pressure gradient (Liu et al., AIAA paper 2016-1049) is evaluated against both the Circular Virtual Boundary omnidirectional integration (Liu and Katz, 2006 and 2013) and the conventional Poisson equation approach. A Dirichlet condition at one boundary point and Neumann conditions at all other boundary points are applied to the Poisson solver. A direct numerical simulation database of isotropic turbulence flow (JHTDB), with homogeneously distributed random noise added to the entire field of DNS pressure gradient, is used to assess the performance of the methods. The random noise, generated by the MATLAB function rand, has a magnitude varying randomly within the range of +/-40% of the maximum DNS pressure gradient. To account for the effect of the noise distribution pattern on the reconstructed pressure accuracy, a total of 1000 different noise distributions, generated with different random-number seeds, are included in the evaluation. Final results after averaging the 1000 realizations show that the error of the reconstructed pressure normalized by the DNS pressure variation range is 0.15 +/-0.07 for the Poisson equation approach, 0.028 +/-0.003 for the Circular Virtual Boundary method and 0.027 +/-0.003 for the Rotating Parallel Ray method, indicating the robustness of the Rotating Parallel Ray method in pressure reconstruction. Sponsor: The San Diego State University UGP program.
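As a rough, self-contained illustration of this kind of benchmark (not the authors' solver or the JHTDB setup), the Python sketch below reconstructs pressure from a noise-corrupted gradient on a small 2D grid by a global least-squares fit, standing in for the Poisson approach, and averages the normalized error over noise realizations; the synthetic pressure field, grid size, and the 200 realizations (instead of 1000) are assumptions made for brevity.

```python
import numpy as np

# Minimal sketch: reconstruct pressure from a noisy gradient field by a
# global least-squares fit (a stand-in for the Poisson / omnidirectional
# integration solvers compared in the abstract). Grid, test field and
# noise model are illustrative assumptions.

rng = np.random.default_rng(0)
N, h = 16, 1.0 / 15                  # grid points per side, spacing
x = np.linspace(0, 1, N)
X, Y = np.meshgrid(x, x, indexing="xy")
p_true = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)   # synthetic pressure
p_vec = p_true.ravel()

# Forward-difference operators d/dx and d/dy on the flattened grid.
D = (np.eye(N - 1, N, k=1) - np.eye(N - 1, N)) / h
Dx = np.kron(np.eye(N), D)           # differentiates along x (columns)
Dy = np.kron(D, np.eye(N))           # differentiates along y (rows)
A = np.vstack([Dx, Dy])
g_clean = A @ p_vec                  # "measured" pressure gradient
g_max = np.abs(g_clean).max()

errors = []
for _ in range(200):                 # average over noise realizations
    noise = rng.uniform(-0.4, 0.4, g_clean.shape) * g_max
    p_rec, *_ = np.linalg.lstsq(A, g_clean + noise, rcond=None)
    p_rec -= p_rec.mean() - p_vec.mean()      # fix the arbitrary constant
    errors.append(np.sqrt(np.mean((p_rec - p_vec) ** 2)) / np.ptp(p_vec))

print(f"normalized error: {np.mean(errors):.4f} +/- {np.std(errors):.4f}")
```

An omnidirectional-integration variant would replace the least-squares solve with ray-by-ray line integration followed by averaging; only the reconstruction step changes.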
Developmental time windows for axon growth influence neuronal network topology.
Lim, Sol; Kaiser, Marcus
2015-04-01
Early brain connectivity development consists of multiple stages: birth of neurons, their migration and the subsequent growth of axons and dendrites. Each stage occurs within a certain period of time depending on types of neurons and cortical layers. Forming synapses between neurons either by growing axons starting at similar times for all neurons (much-overlapped time windows) or at different time points (less-overlapped) may affect the topological and spatial properties of neuronal networks. Here, we explore the extreme cases of axon formation during early development, either starting at the same time for all neurons (parallel, i.e., maximally overlapped time windows) or occurring for each neuron separately, one neuron after another (serial, i.e., no overlap in time windows). For both cases, the number of potential and established synapses remained comparable. Topological and spatial properties, however, differed: neurons that started axon growth early in serial growth achieved higher out-degrees, higher local efficiency and longer axon lengths, whereas parallel growth produced more homogeneous connectivity patterns. Second, connection probability decreased more rapidly with distance between neurons for parallel growth than for serial growth. Third, bidirectional connections were more numerous for parallel growth. Finally, we tested our predictions with C. elegans data. Together, this indicates that time windows for axon growth influence the topological and spatial properties of neuronal networks, opening up the possibility of estimating developmental mechanisms a posteriori from the properties of a developed network.
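A toy model (not the authors' simulation) can illustrate why serial growth yields more heterogeneous out-degrees: neurons compete for limited synaptic capacity on nearby targets, and early growers claim capacity first. All parameters below are illustrative assumptions.

```python
import numpy as np

# Toy illustration: neurons compete for limited incoming synaptic
# capacity C on targets within reach r. Serial growth lets each neuron
# exhaust its options before the next starts; parallel growth interleaves
# one synapse per sweep. Positions, r and C are arbitrary assumptions.

rng = np.random.default_rng(1)
N, r, C = 200, 0.15, 8
pos = rng.random((N, 2))
d = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
candidates = [list(np.flatnonzero((d[i] < r) & (d[i] > 0))) for i in range(N)]

def claim_one(i, todo, free, out):
    """Try to form one synapse for neuron i; return True on success."""
    while todo[i]:
        j = todo[i].pop(0)
        if free[j] > 0:
            free[j] -= 1
            out[i] += 1
            return True
    return False

def grow(parallel):
    free = np.full(N, C)                     # remaining incoming capacity
    out = np.zeros(N, dtype=int)
    todo = [list(c) for c in candidates]
    if parallel:                             # one synapse per neuron per sweep
        active = True
        while active:
            # list comprehension (not generator) so every neuron gets a turn
            active = any([claim_one(i, todo, free, out) for i in range(N)])
    else:                                    # serial: neuron i finishes first
        for i in range(N):
            while claim_one(i, todo, free, out):
                pass
    return out

for mode, flag in (("serial", False), ("parallel", True)):
    out = grow(flag)
    print(f"{mode}: mean out-degree {out.mean():.2f}, std {out.std():.2f}")
```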
A homogeneous focusing system for diode lasers and its applications in metal surface modification
NASA Astrophysics Data System (ADS)
Wang, Fei; Zhong, Lijing; Tang, Xiahui; Xu, Chengwen; Wan, Chenhao
2018-06-01
High power diode lasers are applied in many different areas, including surface modification, welding and cutting, and represent an important trend in the laser processing of metals. This paper aims to analyze the impact of the shape and homogeneity of the focal spot of the diode laser on surface modification. A triplet-lens focusing system for a direct-output diode laser, designed to eliminate coma aberration, is studied. A rectangular stripe with an aspect ratio from 8:1 to 25:1 is obtained, in which the power is homogeneously distributed along the fast axis; the power is 1117.6 W and the peak power intensity is 1.1587 × 10⁶ W/cm². This paper also presents a homogeneous focusing system based on a Fresnel lens, in which the incident beam size is 40 × 40 mm², the focal length is 380 mm, and the dimension of the obtained focal spot is 2 × 10 mm². When the divergence angle of the incident light is in the range of 12.5-20 mrad and the pitch is 1 mm, the homogeneity of the focal spot is optimal (about 95.22%). Experimental results show that the measured focal spot size is 2.04 × 10.39 mm². This research presents a novel design of homogeneous focusing systems for high power diode lasers.
An Expert System for the Development of Efficient Parallel Code
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Chun, Robert; Jin, Hao-Qiang; Labarta, Jesus; Gimenez, Judit
2004-01-01
We have built the prototype of an expert system to assist the user in the development of efficient parallel code. The system was integrated into the parallel programming environment that is currently being developed at NASA Ames. The expert system interfaces to tools for automatic parallelization and performance analysis. It uses static program structure information and performance data in order to automatically determine causes of poor performance and to make suggestions for improvements. In this paper we give an overview of our programming environment, describe the prototype implementation of our expert system, and demonstrate its usefulness with several case studies.
Casimir force in O(n) systems with a diffuse interface.
Dantchev, Daniel; Grüneberg, Daniel
2009-04-01
We study the behavior of the Casimir force in O(n) systems with a diffuse interface and slab geometry ∞^{d-1} × L, where 2 < d…
Pyroxene Homogenization and the Isotopic Systematics of Eucrites
NASA Technical Reports Server (NTRS)
Nyquist, L. E.; Bogard, D. D.
1996-01-01
The original Mg-Fe zoning of eucritic pyroxenes has in nearly all cases been partly homogenized, an observation that has been combined with other petrographic and compositional criteria to establish a scale of thermal "metamorphism" for eucrites. To test hypotheses explaining the development of conditions on the HED parent body (Vesta?) that led to pyroxene homogenization against their chronological implications, it is necessary to know whether pyroxene metamorphism was recorded in the isotopic systems. However, identifying the effects of the thermal metamorphism with specific effects in the isotopic systems has been difficult, due in part to a lack of correlated isotopic and mineralogical studies of the same eucrites. Furthermore, isotopic studies often place high demands on analytical capabilities, resulting in slow growth of the isotopic database. Additionally, some isotopic systems would not respond in a direct and sensitive way to pyroxene homogenization. Nevertheless, sufficient data exist to generalize some observations, and to identify directions of potentially fruitful investigations.
Comparison between four dissimilar solar panel configurations
NASA Astrophysics Data System (ADS)
Suleiman, K.; Ali, U. A.; Yusuf, Ibrahim; Koko, A. D.; Bala, S. I.
2017-12-01
Several studies of photovoltaic systems have focused on how they operate and the energy required to operate them. Little attention has been paid to their configurations, the modeling of mean time to system failure, availability, cost-benefit analysis, and comparisons of parallel and series-parallel designs. In this research work, four system configurations were studied. Configuration I consists of two sub-components arranged in parallel with 24 V each, configuration II consists of four sub-components arranged logically in parallel with 12 V each, configuration III consists of four sub-components arranged in series-parallel with 8 V each, and configuration IV has six sub-components with 6 V each arranged in series-parallel. Comparative analysis was made using the Chapman-Kolmogorov method. Explicit expressions for the mean time to system failure and the steady-state availability were derived, and a cost-benefit analysis was performed. A ranking method was used to determine the optimal configuration of the systems. The results of analytical and numerical solutions for system availability and mean time to system failure were determined, and it was found that configuration I is the optimal configuration.
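A minimal sketch of the Chapman-Kolmogorov calculation behind such comparisons, assuming identical components with a constant failure rate lam (a made-up value) and treating each configuration as a Markov chain whose absorbing state is system failure; the availability and cost-benefit sides of the paper's analysis are omitted.

```python
import numpy as np

# Hedged sketch: MTTF of parallel and series-parallel arrangements via
# the Chapman-Kolmogorov (Markov absorption) formulation. The failure
# rate and the exact topologies are illustrative assumptions, not the
# paper's parameter values.

lam = 0.001  # component failure rate, failures per hour (assumed)

def mttf_parallel(n, rate):
    """MTTF of n identical units in parallel (system up while any unit is up).

    States 0..n-1 (number of failed units) are transient; state n absorbs.
    MTTF is the first entry of -Q^{-1} 1, with Q the transient generator.
    """
    Q = np.zeros((n, n))
    for s in range(n):
        out = (n - s) * rate            # rate at which one more unit fails
        Q[s, s] = -out
        if s + 1 < n:
            Q[s, s + 1] = out
    m = np.linalg.solve(-Q, np.ones(n))
    return m[0]

# Configuration I: two units in parallel.
print("2 parallel:          MTTF = %.0f h" % mttf_parallel(2, lam))
# Configuration II: four units in parallel.
print("4 parallel:          MTTF = %.0f h" % mttf_parallel(4, lam))
# Series-parallel example: two parallel strings of two units in series,
# so each live string fails at rate 2*lam.
print("2x2 series-parallel: MTTF = %.0f h" % mttf_parallel(2, 2 * lam))
```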
Support for Debugging Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele; Biegel, Bryan (Technical Monitor)
2001-01-01
This viewgraph presentation provides information on the technical aspects of debugging computer code that has been automatically converted for use in a parallel computing system. Shared memory parallelization and distributed memory parallelization entail separate and distinct challenges for a debugging program. A prototype system has been developed which integrates various tools for the debugging of automatically parallelized programs including the CAPTools Database which provides variable definition information across subroutines as well as array distribution information.
Liquid crystalline polymers in good nematic solvents: Free chains, mushrooms, and brushes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, D.R.M.; Halperin, A.
1993-08-02
The swelling of main chain liquid crystalline polymers (LCPs) in good nematic solvents is theoretically studied, focusing on brushes of terminally anchored, grafted LCPs. The analysis is concerned with long LCPs, of length L, with n₀ >> 1 hairpin defects. The extension behavior of the major axis, R_∥, of these ellipsoidal objects gives rise to an Ising elasticity with a free energy penalty of F_el(R_∥)/kT ≈ n₀ − n₀(1 − R_∥²/L²)^{1/2}. The theory of the extension behavior enables the formulation of a Flory type theory of swelling of isolated LCPs, yielding R_∥ ≈ exp(2U_h/5kT)N^{3/5} and R_⊥ ≈ exp(−U_h/10kT)N^{3/5}, with N the degree of polymerization and U_h the hairpin energy. It also allows the generalization of the Alexander model for polymer brushes to the case of grafted LCPs. The behavior of LCP brushes depends on the alignment imposed by the grafting surface and the liquid crystalline solvent. A tilting phase transition is predicted as the grafting density is increased for a surface imposing homogeneous, parallel anchoring. A related transition is expected upon compression of a brush subject to homeotropic, perpendicular alignment. The effect of magnetic or electric fields on these phase transitions is also studied. The critical magnetic/electric field for the Frederiks transition can be lowered to arbitrarily small values by using surfaces coated by brushes of appropriate density.
Partial stabilisation of non-homogeneous bilinear systems
NASA Astrophysics Data System (ADS)
Hamidi, Z.; Ouzahra, M.
2018-06-01
In this work, we study, in a Hilbert state space, the partial stabilisation of non-homogeneous bilinear systems using a bounded control. Necessary and sufficient conditions for weak and strong stabilisation are formulated in terms of approximate-observability-like assumptions. Applications to parabolic and hyperbolic equations are presented.
Features of sound propagation through and stability of a finite shear layer
NASA Technical Reports Server (NTRS)
Koutsoyannis, S. P.
1976-01-01
The plane wave propagation, the stability and the rectangular duct mode problems of a compressible inviscid linearly sheared parallel, but otherwise homogeneous flow, are shown to be governed by Whittaker's equation. The exact solutions for the perturbation quantities are essentially Whittaker M-functions. A number of known results are obtained as limiting cases of exact solutions. For the compressible finite thickness shear layer it is shown that no resonances and no critical angles exist for all Mach numbers, frequencies and shear layer velocity profile slopes except in the singular case of the vortex sheet.
Trapping and Injecting Single Domain Walls in Magnetic Wire by Local Fields
NASA Astrophysics Data System (ADS)
Vázquez, Manuel; Basheed, G. A.; Infante, Germán; Del Real, Rafael P.
2012-01-01
A single domain wall (DW) moves at linearly increasing velocity under an increasing homogeneous drive magnetic field. The present experiments show that the DW is braked and finally trapped at a given position when an additional antiparallel local magnetic field is applied. That position and the wall velocity are further controlled by suitable tuning of the local field. In contrast, a parallel local field of small amplitude does not significantly affect the effective wall speed over long distances, although, when strong enough, it generates tail-to-tail and head-to-head pairs of walls moving in opposite directions.
Darcy Flow in a Wavy Channel Filled with a Porous Medium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gray, Donald D; Ogretim, Egemen; Bromhal, Grant S
2013-05-17
Flow in channels bounded by wavy or corrugated walls is of interest in both technological and geological contexts. This paper presents an analytical solution for the steady Darcy flow of an incompressible fluid through a homogeneous, isotropic porous medium filling a channel bounded by symmetric wavy walls. This packed channel may represent an idealized packed fracture, a situation which is of interest as a potential pathway for the leakage of carbon dioxide from a geological sequestration site. The channel walls change from parallel planes, to small amplitude sine waves, to large amplitude nonsinusoidal waves as certain parameters are increased. The direction of gravity is arbitrary. A plot of piezometric head against distance in the direction of mean flow changes from a straight line for parallel planes to a series of steeply sloping sections in the reaches of small aperture alternating with nearly constant sections in the large aperture bulges. Expressions are given for the stream function, specific discharge, piezometric head, and pressure.
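A minimal 1D sketch of the qualitative behavior described above, under an assumed lubrication-style reduction rather than the paper's full 2D analytical solution: at constant discharge, Darcy's law forces the head gradient to scale inversely with the local aperture, producing steep head drops in narrow reaches and near-constant head in the bulges.

```python
import numpy as np

# Assumed 1D reduction (not the paper's 2D solution): for steady Darcy
# flow at constant discharge q through a channel of aperture b(x),
#   q = -K * b(x) * dh/dx   =>   dh/dx = -q / (K * b(x)),
# so the piezometric head drops steeply where the channel is narrow.
# K, q and the aperture profile b(x) are illustrative values.

K, q = 1e-3, 1e-5                      # conductivity, discharge per width
x = np.linspace(0.0, 2.0, 2001)
b = 0.02 * (1.0 + 0.8 * np.sin(2 * np.pi * x / 0.5))   # wavy aperture, > 0

dhdx = -q / (K * b)
# trapezoidal integration of dh/dx gives h(x) relative to h(0) = 0
h = np.concatenate(([0.0],
                    np.cumsum(0.5 * (dhdx[1:] + dhdx[:-1]) * np.diff(x))))

for xi in (0.125, 0.375):              # wide bulge vs narrow reach
    i = np.argmin(abs(x - xi))
    print(f"x = {xi:.3f}: b = {b[i]:.4f} m, dh/dx = {dhdx[i]:.3f}")
```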
A 32-channel lattice transmission line array for parallel transmit and receive MRI at 7 tesla.
Adriany, Gregor; Auerbach, Edward J; Snyder, Carl J; Gözübüyük, Ark; Moeller, Steen; Ritter, Johannes; Van de Moortele, Pierre-François; Vaughan, Tommy; Uğurbil, Kâmil
2010-06-01
Transmit and receive RF coil arrays have proven to be particularly beneficial for ultra-high-field MR. Transmit coil arrays enable such techniques as B₁⁺ shimming to substantially improve transmit B₁ homogeneity compared to conventional volume coil designs, and receive coil arrays offer enhanced parallel imaging performance and SNR. Concentric coil arrangements hold promise for developing transceiver arrays incorporating large numbers of coil elements. At magnetic field strengths of 7 tesla and higher, where the Larmor frequencies of interest can exceed 300 MHz, the coil array design must also overcome the problem of the coil conductor length approaching the RF wavelength. In this study, a novel concentric arrangement of resonance elements built from capacitively-shortened half-wavelength transmission lines is presented. This approach was utilized to construct an array with whole-brain coverage using 16 transceiver elements and 16 receive-only elements, resulting in a coil with a total of 16 transmit and 32 receive channels. © 2010 Wiley-Liss, Inc.
Thermo-elastic wave model of the photothermal and photoacoustic signal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meja, P.; Steiger, B.; Delsanto, P.P.
1996-12-31
By means of the thermo-elastic wave equation, the dynamical propagation of mechanical stress and temperature can be described and applied to model the photothermal and photoacoustic signal. Analytical solutions exist only in particular cases. Using massively parallel computers, it is possible to simulate the photothermal and photoacoustic signal efficiently. In this paper the method of the local interaction simulation approach (LISA) is presented and selected examples of its application are given. The advantages of this method, which is particularly suitable for parallel processing, consist of reduced computation time and a simple description of the photoacoustic signal in optical materials. The present contribution introduces the authors' model, the formalism and some results in the 1D case for homogeneous non-attenuating materials. The photoacoustic wave can be understood as a wave with locally limited displacement. This displacement corresponds to a temperature variation. Both variables are usually measured in photoacoustic and photothermal measurements. Therefore the dependence of temperature and displacement on optical, elastic and thermal constants is analysed.
EFFECT OF ROENTGEN RADIATION ON β-GLUCURONIDASE IN RAT TESTIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arata, L.; Santoro, R.; Severi, M.A.
1962-04-30
The testes were irradiated with a single 600-r dose and enzyme activity was determined in homogenates of testis, at 10-day intervals, up to the 50th postirradiation day. In comparison with the control value of 47.9 (units/mg fresh tissue), β-glucuronidase activity fell to 30.5 by the 10th day, then progressively rose to 78.4, 126.0, 242.0, and 275.0 in the subsequent 10-day periods. A parallel drop, followed by a rise, occurred in the total activity of the testis. Testicular weight fell, and seminal vesicular weight fell and then rose, during the 50-day period. Thus, the transient sterility and destruction of germinal epithelium induced by irradiation were reflected by a decrease in β-glucuronidase activity, whereas regeneration of this epithelium followed the rise in enzyme activity. Such parallel changes in epithelial function and enzyme activity were previously noted in vitamin E-deficient rats. (H.H.D.)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vordtriede, Paul B.; Yoder, Marilyn D., E-mail: yoderm@umkc.edu
2008-07-01
The acidic polygalacturonase PehA from A. vitis has been crystallized. A molecular-replacement solution indicated a right-handed parallel β-helix fold. Polygalacturonases are pectate-degrading enzymes that belong to glycoside hydrolase family 28 and hydrolyze the α-1,4 glycosidic bond between neighboring galacturonosyl residues of the homogalacturonan substrate. The acidic polygalacturonase PehA from Agrobacterium vitis was overexpressed in Escherichia coli, where it accumulated in the periplasmic fraction. It was purified to homogeneity via a two-step chromatography procedure and crystallized using the hanging-drop vapour-diffusion technique. PehA crystals belonged to space group P2₁, with unit-cell parameters a = 52.387, b = 62.738, c = 149.165 Å, β = 89.98°. Crystals diffracted to 1.59 Å resolution and contained two molecules per asymmetric unit. An initial structure determination by molecular replacement indicated a right-handed parallel β-helix fold.
Parallel-Processing Test Bed For Simulation Software
NASA Technical Reports Server (NTRS)
Blech, Richard; Cole, Gary; Townsend, Scott
1996-01-01
The second-generation Hypercluster computing system is a multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. It is built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. The system is used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. The first-generation Hypercluster system is described in "Hypercluster Parallel Processor" (LEW-15283).
Donovan, Preston; Chehreghanianzabi, Yasaman; Rathinam, Muruhan; Zustiak, Silviya Petrova
2016-01-01
The study of diffusion in macromolecular solutions is important in many biomedical applications such as separations, drug delivery, and cell encapsulation, and key for many biological processes such as protein assembly and interstitial transport. Not surprisingly, multiple models for the a priori prediction of diffusion in macromolecular environments have been proposed. However, most models include parameters that are not readily measurable, are specific to the polymer-solute-solvent system, or are fitted and do not have a physical meaning. Here, for the first time, we develop a homogenization theory framework for the prediction of effective solute diffusivity in macromolecular environments based on physical parameters that are easily measurable and not specific to the macromolecule-solute-solvent system. Homogenization theory is useful for situations where knowledge of fine-scale parameters is used to predict bulk system behavior. As a first approximation, we focus on a model where the solute undergoes obstructed diffusion amongst stationary spherical obstacles. We find that the homogenization theory results agree well with computationally more expensive Monte Carlo simulations. Moreover, the homogenization theory agrees with effective diffusivities of a solute in dilute and semi-dilute polymer solutions measured using fluorescence correlation spectroscopy. Lastly, we provide a mathematical formula for the effective diffusivity in terms of a non-dimensional and easily measurable geometric system parameter.
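The paper's formula is not reproduced here; as a hedged stand-in, the sketch below compares a classical Maxwell-type homogenization estimate for impermeable spherical obstacles, D_eff/D0 = 2(1 − φ)/(2 + φ), with a crude lattice random-walk Monte Carlo, illustrating how a homogenization estimate is checked against simulation. The walk is 2D for brevity while the Maxwell estimate is for 3D spheres, so agreement is only qualitative.

```python
import numpy as np

# Sketch of the idea, not the paper's formula: compare a Maxwell-type
# homogenization estimate for impermeable spheres with a crude Monte
# Carlo of obstructed diffusion (random walk on a periodic 2D lattice
# with a fraction phi of blocked sites). Lattice obstacles are not
# spheres and the walk is 2D, so agreement is only qualitative.

rng = np.random.default_rng(2)
L, steps, walkers = 64, 400, 2000
moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

def mc_deff(phi):
    blocked = rng.random((L, L)) < phi           # obstacle map (periodic)
    start = rng.integers(0, L, (walkers, 2))
    ok = ~blocked[start[:, 0], start[:, 1]]      # keep walkers on free sites
    pos = start[ok].copy()
    unwrapped = pos.astype(float).copy()
    for _ in range(steps):
        step = moves[rng.integers(0, 4, len(pos))]
        trial = (pos + step) % L
        free = ~blocked[trial[:, 0], trial[:, 1]]
        pos[free] = trial[free]                  # blocked moves are rejected
        unwrapped[free] += step[free]
    msd = np.mean(np.sum((unwrapped - start[ok]) ** 2, axis=1))
    return msd / (4 * steps)                     # D in 2D, lattice units

d0 = 0.25                                        # free-walk D on this lattice
for phi in (0.0, 0.1, 0.2):
    print(f"phi={phi:.1f}: MC D/D0 = {mc_deff(phi)/d0:.2f}, "
          f"Maxwell = {2*(1-phi)/(2+phi):.2f}")
```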
Cassaignau, Anaïs M E; Launay, Hélène M M; Karyadi, Maria-Evangelia; Wang, Xiaolin; Waudby, Christopher A; Deckert, Annika; Robertson, Amy L; Christodoulou, John; Cabrita, Lisa D
2016-08-01
During biosynthesis on the ribosome, an elongating nascent polypeptide chain can begin to fold, in a process that is central to all living systems. Detailed structural studies of co-translational protein folding are now beginning to emerge; such studies were previously limited, at least in part, by the inherently dynamic nature of emerging nascent chains, which precluded most structural techniques. NMR spectroscopy is able to provide atomic-resolution information for ribosome-nascent chain complexes (RNCs), but it requires large quantities (≥10 mg) of homogeneous, isotopically labeled RNCs. Further challenges include limited sample working concentration and stability of the RNC sample (which contribute to weak NMR signals) and resonance broadening caused by attachment to the large (2.4-MDa) ribosomal complex. Here, we present a strategy to generate isotopically labeled RNCs in Escherichia coli that are suitable for NMR studies. Uniform translational arrest of the nascent chains is achieved using a stalling motif, and isotopically labeled RNCs are produced at high yield using high-cell-density E. coli growth conditions. Homogeneous RNCs are isolated by combining metal affinity chromatography (to isolate ribosome-bound species) with sucrose density centrifugation (to recover intact 70S monosomes). Sensitivity-optimized NMR spectroscopy is then applied to the RNCs, combined with a suite of parallel NMR and biochemical analyses to cross-validate their integrity, including RNC-optimized NMR diffusion measurements to report on ribosome attachment in situ. Comparative NMR studies of RNCs with the analogous isolated proteins permit a high-resolution description of the structure and dynamics of a nascent chain during its progressive biosynthesis on the ribosome.
Kinetics of homogeneous nucleation on many-component systems
NASA Technical Reports Server (NTRS)
Hirschfelder, J. O.
1974-01-01
Reiss's (1950) classical treatment of the kinetics of homogeneous nucleation in a system containing two chemical components is extended to many-component systems. The formulation is analogous to the pseudostationary-state theory of chemical reaction rates, with the free energy as a function of the composition of the embryo taking the place of the potential energy as a function of interatomic distances.
1994-05-01
Contents excerpt: Open Systems and Contacts (Ballistic Transport; Role of the Boundaries and Contacts); Other Devices; Modeling with the Green's Functions (Homogeneous, Low-Field Systems: The Retarded Function, The "Less-Than" Function; Homogeneous, High-Field Systems).
A parallel time integrator for noisy nonlinear oscillatory systems
NASA Astrophysics Data System (ADS)
Subber, Waad; Sarkar, Abhijit
2018-06-01
In this paper, we adapt a parallel time integration scheme to track the trajectories of noisy nonlinear dynamical systems. Specifically, we formulate a parallel algorithm to generate sample paths of nonlinear oscillators defined by stochastic differential equations (SDEs) using the so-called parareal method for ordinary differential equations (ODEs). The presence of the Wiener process in SDEs causes difficulties in the direct application of any numerical integration technique for ODEs, including the parareal algorithm. The parallel implementation of the algorithm involves two SDE solvers, namely a fine-level scheme to integrate the system in parallel and a coarse-level scheme to generate and correct the initial conditions required to start the fine-level integrators. For the numerical illustration, a randomly excited Duffing oscillator is investigated in order to study the performance of the stochastic parallel algorithm with respect to a range of system parameters. The distributed implementation of the algorithm exploits the Message Passing Interface (MPI).
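A minimal parareal sketch (on a deterministic Duffing oscillator, with assumed parameters) illustrates the coarse/fine two-level structure; for the SDE case discussed above, one would additionally pre-draw the Wiener increments per time window and reuse them across iterations, which is omitted here.

```python
import numpy as np

# Minimal parareal sketch on a deterministic Duffing oscillator
# x'' + d x' + a x + b x^3 = F cos(w t). The coarse propagator takes a
# few Euler steps per window; the fine propagator takes many. For the
# SDE version one would pre-draw Wiener increments per window and reuse
# them in every iteration (omitted). All parameters are assumed.

a, b, d, F, w = -1.0, 1.0, 0.3, 0.5, 1.2

def rhs(t, u):
    x, v = u
    return np.array([v, -d * v - a * x - b * x**3 + F * np.cos(w * t)])

def euler(u, t0, t1, nsteps):
    h = (t1 - t0) / nsteps
    t = t0
    for _ in range(nsteps):
        u = u + h * rhs(t, u)
        t += h
    return u

T, N = 20.0, 40                      # horizon, time windows
ts = np.linspace(0.0, T, N + 1)
coarse = lambda u, t0, t1: euler(u, t0, t1, 5)      # cheap propagator
fine = lambda u, t0, t1: euler(u, t0, t1, 100)      # accurate propagator

U = [np.array([1.0, 0.0])]
for n in range(N):                   # initial guess: pure coarse sweep
    U.append(coarse(U[n], ts[n], ts[n + 1]))

for k in range(8):                   # parareal iterations
    Fn = [fine(U[n], ts[n], ts[n + 1]) for n in range(N)]   # parallelizable
    Gn = [coarse(U[n], ts[n], ts[n + 1]) for n in range(N)]
    Unew = [U[0]]
    for n in range(N):               # serial correction sweep:
        # U_{n+1} <- G(U_n^new) + F(U_n^old) - G(U_n^old)
        Unew.append(coarse(Unew[n], ts[n], ts[n + 1]) + Fn[n] - Gn[n])
    err = max(np.linalg.norm(Unew[n] - U[n]) for n in range(N + 1))
    U = Unew
    print(f"iteration {k}: max update {err:.2e}")
```

The fine sweeps (the expensive part) are independent across windows and are what an MPI implementation would distribute; only the cheap coarse correction sweep stays serial.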
Distributed and parallel Ada and the Ada 9X recommendations
NASA Technical Reports Server (NTRS)
Volz, Richard A.; Goldsack, Stephen J.; Theriault, R.; Waldrop, Raymond S.; Holzbacher-Valero, A. A.
1992-01-01
Recently, the DoD has sponsored work towards a new version of Ada, intended to support the construction of distributed systems. The revised version, often called Ada 9X, will become the new standard sometime in the 1990s. It is intended that Ada 9X should provide language features giving limited support for distributed system construction. The requirements for such features are given. Many of the most advanced computer applications involve embedded systems that are comprised of parallel processors or networks of distributed computers. If Ada is to become the widely adopted language envisioned by many, it is essential that suitable compilers and tools be available to facilitate the creation of distributed and parallel Ada programs for these applications. The major language issues impacting distributed and parallel programming are reviewed, and some principles upon which distributed/parallel language systems should be built are suggested. Based upon these, alternative language concepts for distributed/parallel programming are analyzed.
Partitioning problems in parallel, pipelined and distributed computing
NASA Technical Reports Server (NTRS)
Bokhari, S.
1985-01-01
The problem of optimally assigning the modules of a parallel program over the processors of a multiple computer system is addressed. A Sum-Bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple satellite system: partitioning multiple chain structured parallel programs, multiple arbitrarily structured serial programs and single tree structured parallel programs. In addition, the problems of partitioning chain structured parallel programs across chain connected systems and across shared memory (or shared bus) systems are also solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account and permit the efficient utilization of multiple computer architectures for a wide range of problems of practical interest.
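The sketch below illustrates the underlying chain-partitioning problem with a standard binary-search-plus-greedy formulation (not Bokhari's Sum-Bottleneck path algorithm): split a chain of module weights into p contiguous blocks, one per processor, minimizing the heaviest block. The module weights are made up.

```python
# Illustration of the chain-partitioning problem addressed above, using
# the standard binary-search / greedy formulation rather than the
# Sum-Bottleneck path algorithm. Module weights are for the example only.

def blocks_needed(weights, cap):
    """Greedy count of contiguous blocks with per-block sum <= cap."""
    count, acc = 1, 0
    for w in weights:
        if acc + w > cap:
            count, acc = count + 1, w
        else:
            acc += w
    return count

def min_bottleneck(weights, p):
    """Smallest achievable heaviest-block weight for p processors."""
    lo, hi = max(weights), sum(weights)
    while lo < hi:                       # binary search on the bottleneck
        mid = (lo + hi) // 2
        if blocks_needed(weights, mid) <= p:
            hi = mid
        else:
            lo = mid + 1
    return lo

weights = [4, 9, 2, 7, 1, 8, 3, 6, 5, 2]    # per-module compute costs
for p in (2, 3, 4):
    print(f"{p} processors: bottleneck = {min_bottleneck(weights, p)}")
```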
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, F. S.
Functionally graded components exhibit spatial variations of mechanical properties, in contrast with, and as an alternative to, purely homogeneous components. A large class of graded materials, however, are in fact mostly homogeneous materials with property variations (chemical or mechanical) restricted to a specific area or layer, produced for example by applying a coating or by introducing sub-surface residual stresses. However, it is also possible to obtain graded materials with a smooth transition of mechanical properties along the entire component, for example over a 40 mm component. This is possible, for example, by using the centrifugal casting technique or the incremental melting and solidification technique. In this paper we study fully metallic functionally graded components with a smooth gradient, focusing on fatigue crack propagation. Fatigue propagation is assessed in the direction parallel to the gradation (in different homogeneous layers of the functionally graded component) in order to infer fatigue crack propagation in the direction perpendicular to the gradation. The fatigue crack growth rate (standard mode I fatigue crack growth) is correlated to the mode I stress intensity factor range. Other mechanical properties of different layers of the component (Young's modulus) are also considered in this analysis. The effect of residual stresses along the component gradation on crack propagation is also taken into account. A qualitative analysis of the effects of some important features present in functionally graded materials is made based on the obtained results.
Revisiting the homogenization of dammed rivers in the southeastern US
Ryan A. McManamay; Donald J. Orth; Charles A. Dolloff
2012-01-01
For some time, ecologists have attempted to make generalizations concerning how disturbances influence natural ecosystems, especially river systems. The existing literature suggests that dams homogenize the hydrologic variability of rivers. However, this might insinuate that dams affect river systems similarly despite a large gradient in natural hydrologic character....
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry
1998-01-01
This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
Tak For Yu, Zeta; Guan, Huijiao; Ki Cheung, Mei; McHugh, Walker M.; Cornell, Timothy T.; Shanley, Thomas P.; Kurabayashi, Katsuo; Fu, Jianping
2015-01-01
Immunoassays represent one of the most popular analytical methods for detection and quantification of biomolecules. However, conventional immunoassays such as ELISA and flow cytometry, even though providing high sensitivity and specificity and multiplexing capability, can be labor-intensive and prone to human error, making them unsuitable for standardized clinical diagnoses. Using a commercialized no-wash, homogeneous immunoassay technology (‘AlphaLISA’) in conjunction with integrated microfluidics, herein we developed a microfluidic immunoassay chip capable of rapid, automated, parallel immunoassays of microliter quantities of samples. Operation of the microfluidic immunoassay chip entailed rapid mixing and conjugation of AlphaLISA components with target analytes before quantitative imaging for analyte detections in up to eight samples simultaneously. Aspects such as fluid handling and operation, surface passivation, imaging uniformity, and detection sensitivity of the microfluidic immunoassay chip using AlphaLISA were investigated. The microfluidic immunoassay chip could detect one target analyte simultaneously for up to eight samples in 45 min with a limit of detection down to 10 pg mL−1. The microfluidic immunoassay chip was further utilized for functional immunophenotyping to examine cytokine secretion from human immune cells stimulated ex vivo. Together, the microfluidic immunoassay chip provides a promising high-throughput, high-content platform for rapid, automated, parallel quantitative immunosensing applications. PMID:26074253
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arafat, Humayun; Dinan, James; Krishnamoorthy, Sriram
Task parallelism is an attractive approach to automatically load balance the computation in a parallel system and adapt to dynamism exhibited by parallel systems. Exploiting task parallelism through work stealing has been extensively studied in shared and distributed-memory contexts. In this paper, we study the design of a system that uses work stealing for dynamic load balancing of task-parallel programs executed on hybrid distributed-memory CPU-graphics processing unit (GPU) systems in a global-address space framework. We take into account the unique nature of the accelerator model employed by GPUs, the significant performance difference between GPU and CPU execution as a function of problem size, and the distinct CPU and GPU memory domains. We consider various alternatives in designing a distributed work stealing algorithm for CPU-GPU systems, while taking into account the impact of task distribution and data movement overheads. These strategies are evaluated using microbenchmarks that capture various execution configurations as well as the state-of-the-art CCSD(T) application module from the computational chemistry domain.
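As a toy illustration of the load-balancing idea (not the paper's runtime, and deliberately ignoring the data-movement and problem-size effects it models), the following round-based simulation lets slow "CPU-like" workers steal tasks from a fast "GPU-like" worker's deque.

```python
import random

# Toy round-based simulation of work stealing between heterogeneous
# workers: one fast "GPU-like" worker and several slow "CPU-like" ones.
# Real CPU-GPU stealing must also weigh data-movement cost and
# problem-size-dependent speedups, which this sketch ignores.
# All numbers are assumptions.

random.seed(3)
NUM_TASKS = 400
speeds = [8.0, 1.0, 1.0, 1.0]          # work units retired per round
# all tasks start on worker 0's deque; each task costs 1 work unit
deques = [list(range(NUM_TASKS)) if i == 0 else [] for i in range(len(speeds))]

done, rounds, steals = 0, 0, 0
while done < NUM_TASKS:
    rounds += 1
    for i, speed in enumerate(speeds):
        budget = speed
        while budget >= 1.0:
            if not deques[i]:           # empty: try to steal from a victim
                victim = random.randrange(len(speeds))
                if victim != i and deques[victim]:
                    deques[i].append(deques[victim].pop(0))  # steal oldest
                    steals += 1
                else:
                    break               # failed steal: idle this round
            else:
                deques[i].pop()         # execute newest local task (LIFO)
                done += 1
                budget -= 1.0

print(f"{NUM_TASKS} tasks in {rounds} rounds with {steals} steals")
```

The owner works LIFO off one end of its deque while thieves take the oldest task from the other end, the usual work-stealing-deque discipline.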
Parallelized direct execution simulation of message-passing parallel programs
NASA Technical Reports Server (NTRS)
Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.
1994-01-01
As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.
Barui, Srimanta; Chatterjee, Subhomoy; Mandal, Sourav; Kumar, Alok; Basu, Bikramjit
2017-01-01
The osseointegration of metallic implants depends on an effective balance among designed porosity to facilitate angiogenesis, tissue in-growth and bone-mimicking elastic modulus with good strength properties. While addressing such twin requirements, the present study demonstrates a low temperature additive manufacturing based processing strategy to fabricate Ti-6Al-4V scaffolds with designed porosity using inkjet-based 3D powder printing (3DPP). A novel starch-based aqueous binder was prepared and the physico-chemical parameters such as pH, viscosity, and surface tension were optimized for drop-on-demand (DOD) based thermal inkjet printing. Micro-computed tomography (micro-CT) of sintered scaffolds revealed a 57% total porosity in homogeneously porous scaffold and 45% in the gradient porous scaffold with 99% interconnectivity among the micropores. Under uniaxial compression testing, the strength of homogeneously porous and gradient porous scaffolds were ~47MPa and ~90MPa, respectively. The progressive failure in homogeneously porous scaffold was recorded. In parallel to experimental measurements, finite element (FE) analyses have been performed to study the stress distribution globally and also locally around the designed pores. Consistent with FE analyses, a higher elastic modulus was recorded with gradient porous scaffolds (~3GPa) than the homogenously porous scaffolds (~2GPa). While comparing with the existing literature reports, the present work, for the first time, establishes 'direct powder printing methodology' of Ti-6Al-4V porous scaffolds with biomedically relevant microstructural and mechanical properties. Also, a new FE analysis approach, based on the critical understanding of the porous architecture using micro-CT results, is presented to realistically predict the compression response of porous scaffolds. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seal, Sudip K; Perumalla, Kalyan S; Hirshman, Steven Paul
2013-01-01
Simulations that require solutions of block tridiagonal systems of equations rely on fast parallel solvers for runtime efficiency. Leading parallel solvers that are highly effective for general systems of equations, dense or sparse, are limited in scalability when applied to block tridiagonal systems. This paper presents scalability results as well as detailed analyses of two parallel solvers that exploit the special structure of block tridiagonal matrices to deliver superior performance, often by orders of magnitude. A rigorous analysis of their relative parallel runtimes is shown to reveal the existence of a critical block size that separates the parameter space spanned by the number of block rows, the block size and the processor count, into distinct regions that favor one or the other of the two solvers. Dependence of this critical block size on the above parameters as well as on machine-specific constants is established. These formal insights are supported by empirical results on up to 2,048 cores of a Cray XT4 system. To the best of our knowledge, this is the highest reported scalability for parallel block tridiagonal solvers to date.
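For reference, a serial block Thomas elimination (block LU without pivoting) is sketched below to fix notation; the paper's contribution lies in how the block rows are distributed and combined in parallel, which is omitted here. Sizes and matrices are arbitrary, and the diagonal blocks encountered during elimination are assumed invertible.

```python
import numpy as np

# Serial reference for a block tridiagonal solve by block Thomas
# elimination. Row i reads: A[i] x[i-1] + B[i] x[i] + C[i] x[i+1] = d[i],
# with A[0] = C[n-1] = 0. Assumes the modified diagonal blocks stay
# invertible (e.g., block diagonal dominance).

def block_thomas(A, B, C, d):
    n = len(B)
    Bs, ds = [B[0].copy()], [d[0].copy()]
    for i in range(1, n):                       # forward elimination
        W = A[i] @ np.linalg.inv(Bs[i - 1])
        Bs.append(B[i] - W @ C[i - 1])
        ds.append(d[i] - W @ ds[i - 1])
    x = [None] * n
    x[n - 1] = np.linalg.solve(Bs[n - 1], ds[n - 1])
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = np.linalg.solve(Bs[i], ds[i] - C[i] @ x[i + 1])
    return np.array(x)

rng = np.random.default_rng(4)
n, m = 8, 3                                     # block rows, block size
B = [np.eye(m) * 4 + rng.random((m, m)) for _ in range(n)]   # dominant diag
A = [np.zeros((m, m))] + [rng.random((m, m)) * 0.5 for _ in range(n - 1)]
C = [rng.random((m, m)) * 0.5 for _ in range(n - 1)] + [np.zeros((m, m))]
d = [rng.random(m) for _ in range(n)]
x = block_thomas(A, B, C, d)

# verify: reassemble the dense system and check the residual
M = np.zeros((n * m, n * m))
for i in range(n):
    M[i*m:(i+1)*m, i*m:(i+1)*m] = B[i]
    if i > 0:
        M[i*m:(i+1)*m, (i-1)*m:i*m] = A[i]
    if i < n - 1:
        M[i*m:(i+1)*m, (i+1)*m:(i+2)*m] = C[i]
print("residual:", np.linalg.norm(M @ x.ravel() - np.concatenate(d)))
```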
Distributed parallel messaging for multiprocessor systems
Chen, Dong; Heidelberger, Philip; Salapura, Valentina; Senger, Robert M; Steinmacher-Burrow, Burhard; Sugawara, Yutaka
2013-06-04
A method and apparatus for distributed parallel messaging in a parallel computing system. The apparatus includes, at each node of a multiprocessor network, multiple injection messaging engine units and reception messaging engine units, each implementing a DMA engine and each supporting both multiple packet injection into and multiple reception from a network, in parallel. The reception side of the messaging unit (MU) includes a switch interface enabling writing of data of a packet received from the network to the memory system. The transmission side of the messaging unit includes a switch interface for reading from the memory system when injecting packets into the network.
Parallelized Stochastic Cutoff Method for Long-Range Interacting Systems
NASA Astrophysics Data System (ADS)
Endo, Eishin; Toga, Yuta; Sasaki, Munetaka
2015-07-01
We present a method of parallelizing the stochastic cutoff (SCO) method, which is a Monte-Carlo method for long-range interacting systems. After interactions are eliminated by the SCO method, we subdivide a lattice into noninteracting interpenetrating sublattices. This subdivision enables us to parallelize the Monte-Carlo calculation in the SCO method. Such subdivision is found by numerically solving the vertex coloring of a graph created by the SCO method. We use an algorithm proposed by Kuhn and Wattenhofer to solve the vertex coloring by parallel computation. This method was applied to a two-dimensional magnetic dipolar system on an L × L square lattice to examine its parallelization efficiency. The result showed that, in the case of L = 2304, the speed of computation increased about 102 times by parallel computation with 288 processors.
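The subdivision step can be illustrated with a simple sequential greedy coloring (the paper uses the distributed Kuhn-Wattenhofer algorithm instead), applied to a random sparse graph standing in for the interaction graph left after the stochastic cutoff: each color class is a mutually noninteracting set of sites that can be updated in parallel.

```python
import numpy as np

# Sketch of the subdivision step: color the sparse interaction graph so
# that no edge joins two vertices of the same color; each color class is
# then a noninteracting sublattice updatable in parallel. A sequential
# greedy coloring is used here instead of the distributed
# Kuhn-Wattenhofer algorithm, and the random graph is a stand-in for
# what the SCO method actually produces.

rng = np.random.default_rng(5)
n_sites, n_edges = 100, 300
edges = set()
while len(edges) < n_edges:                      # random sparse graph
    i, j = rng.integers(0, n_sites, 2)
    if i != j:
        edges.add((min(i, j), max(i, j)))

adj = [[] for _ in range(n_sites)]
for i, j in edges:
    adj[i].append(j)
    adj[j].append(i)

color = [-1] * n_sites
for v in sorted(range(n_sites), key=lambda v: -len(adj[v])):  # greedy order
    taken = {color[u] for u in adj[v] if color[u] >= 0}
    c = 0
    while c in taken:
        c += 1
    color[v] = c

print(f"{max(color) + 1} color classes (parallel sweeps per MC step)")
assert all(color[i] != color[j] for i, j in edges)   # no intra-class edges
```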
Design of on-board parallel computer on nano-satellite
NASA Astrophysics Data System (ADS)
You, Zheng; Tian, Hexiang; Yu, Shijie; Meng, Li
2007-11-01
This paper presents a scheme for an on-board parallel computer system designed for a nano-satellite. Based on the development requirements that a nano-satellite have small volume, low weight, low power consumption, and intelligence, the scheme abandons the traditional single-computer and dual-computer architectures in an effort to improve dependability, capability and intelligence simultaneously. Following an integrated design method, it employs a shared-memory parallel computer system as the main structure; connects the telemetry, attitude control, and payload systems by an intelligent bus; designs management functions that handle static tasks and dynamic task scheduling and that protect and recover on-site status in light of the parallel algorithms; and establishes mechanisms for fault diagnosis, recovery and system reconfiguration. The result is an on-board parallel computer system with high dependability, capability and intelligence, flexible management of hardware resources, a sound software system, and good extensibility, which satisfies the concept and trend of integrated electronic design.
11-kW direct diode laser system with homogenized 55 × 20 mm² Top-Hat intensity distribution
NASA Astrophysics Data System (ADS)
Köhler, Bernd; Noeske, Axel; Kindervater, Tobias; Wessollek, Armin; Brand, Thomas; Biesenbach, Jens
2007-02-01
In comparison with other laser systems, diode lasers are characterized by a unique overall efficiency, a small footprint and high reliability. However, one major drawback of direct diode laser systems is the inhomogeneous intensity distribution in the far field. Furthermore, the output power of current commercially available systems is limited to about 6 kW. We report on a diode laser system with 11 kW output power at a single wavelength of 940 nm aimed at customer-specific large-area treatment. To the best of our knowledge this is the highest output power reported so far for a direct diode laser system. In addition to the high output power, the intensity distribution of the laser beam is homogenized in both axes, leading to a 55 × 20 mm² Top-Hat intensity profile at a working distance of 400 mm. The homogeneity of the intensity distribution is better than 90%, and the intensity in the focal plane is 1 kW/cm². We present a detailed characterization of the laser system, including measurements of power, power stability and intensity distribution of the homogenized laser beam. In addition, we compare the experimental data with the results of non-sequential ray-tracing simulations.
Parallel dispatch: a new paradigm of electrical power system dispatch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jun Jason; Wang, Fei-Yue; Wang, Qiang
Modern power systems are evolving into sociotechnical systems with massive complexity, whose real-time operation and dispatch go beyond human capability. Thus, the need for developing and applying new intelligent power system dispatch tools is of great practical significance. In this paper, we introduce the overall business model of power system dispatch, the top-level design approach of an intelligent dispatch system, and the parallel intelligent technology with its dispatch applications. We expect that a new dispatch paradigm, namely parallel dispatch, can be established by incorporating various intelligent technologies, especially the parallel intelligent technology, to enable secure operation of complex power grids, extend system operators' capabilities, suggest optimal dispatch strategies, and provide decision-making recommendations according to power system operational goals.
NASA Astrophysics Data System (ADS)
Zhang, L. F.; Chen, D. Y.; Wang, Q.; Li, H.; Zhao, Z. G.
2018-01-01
A preparation technology for ultra-thin carbon-fiber paper is reported. The distribution homogeneity of the carbon fibers has a great influence on the properties of ultra-thin carbon-fiber paper. In this paper, a self-developed homogeneity analysis system is introduced to help users evaluate the distribution homogeneity of carbon fiber across two or more binary images of carbon-fiber paper. A relative-uniformity factor W/H is introduced. The experimental results show that the smaller the W/H factor, the higher the uniformity of the carbon-fiber distribution. The new uniformity-evaluation method provides a practical and reliable tool for analyzing the homogeneity of materials.
Improved model for detection of homogeneous production batches of electronic components
NASA Astrophysics Data System (ADS)
Kazakovtsev, L. A.; Orlov, V. I.; Stashkov, D. V.; Antamoshkin, A. N.; Masich, I. S.
2017-10-01
Supplying the electronic units of complex technical systems with electronic devices of proper quality is one of the most important problems in increasing whole-system reliability. Moreover, to reach the highest reliability of an electronic unit, the electronic devices of the same type must have equal characteristics, which assures their coherent operation. The highest homogeneity of characteristics is reached if the electronic devices are manufactured as a single production batch, and each production batch must in turn be made from homogeneous raw materials. In this paper, we propose an improved model for detecting the homogeneous production batches within a shipped lot of electronic components, based on applying the kurtosis criterion to the results of the non-destructive testing performed on each lot of electronic devices used in the space industry.
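A hedged sketch of the statistical idea: a lot pooled from two batches with shifted means tends toward a bimodal, platykurtic distribution, so a clearly negative excess kurtosis of a measured parameter flags a mixed lot. The Gaussian batch model, parameter values and decision threshold below are illustrative assumptions, not the paper's calibrated criterion.

```python
import numpy as np

# Sketch of the kurtosis criterion: a lot assembled from two production
# batches with shifted means is bimodal and platykurtic (negative excess
# kurtosis), while a single homogeneous Gaussian batch is near zero.
# The threshold and batch model are illustrative assumptions.

rng = np.random.default_rng(6)

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4) - 3.0

single = rng.normal(100.0, 1.0, 2000)                    # one batch
mixed = np.concatenate([rng.normal(98.5, 1.0, 1000),     # two batches,
                        rng.normal(101.5, 1.0, 1000)])   # shifted means

for name, lot in (("single batch", single), ("mixed lot", mixed)):
    g2 = excess_kurtosis(lot)
    flag = "inhomogeneous?" if g2 < -0.5 else "looks homogeneous"
    print(f"{name}: excess kurtosis = {g2:+.2f} -> {flag}")
```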
Role of structural barriers for carotenoid bioaccessibility upon high pressure homogenization.
Palmero, Paola; Panozzo, Agnese; Colle, Ines; Chigwedere, Claire; Hendrickx, Marc; Van Loey, Ann
2016-05-15
A specific approach to investigate the effect of high pressure homogenization on the carotenoid bioaccessibility in tomato-based products was developed. Six different tomato-based model systems were reconstituted in order to target the specific role of the natural structural barriers (chromoplast substructure/cell wall) and of the phases (soluble/insoluble) in determining the carotenoid bioaccessibility and viscosity changes upon high pressure homogenization. Results indicated that in the absence of natural structural barriers (carotenoid enriched oil), the soluble and insoluble phases determined the carotenoid bioaccessibility upon processing whereas, in their presence, these barriers governed the bioaccessibility. Furthermore, it was shown that the increment of the viscosity upon high pressure homogenization is determined by the presence of insoluble phase, however, this result was related to the initial ratio of the soluble:insoluble phases in the system. In addition, no relationship between the changes in viscosity and carotenoid bioaccessibility upon high pressure homogenization was found. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
HUNT, DAVID E.
Educational environments, highly structured or unstructured, were differentially effective with students of varying personalities. The report considered the utility and relevance of the conceptual systems model by describing a specific project in which the model served as the basis for forming homogeneous classroom groups. The project was…
Microstructure evaluation for Dy-free Nd-Fe-B sintered magnets with high coercivity
NASA Astrophysics Data System (ADS)
Goto, R.; Matsuura, M.; Sugimoto, S.; Tezuka, N.; Une, Y.; Sagawa, M.
2012-04-01
Nd-Fe-B sintered magnets are used in motors of hybrid and electric vehicles due to their high energy products. Dy is added to Nd-Fe-B sintered magnets so that they can work in high-temperature environments. Although the addition of Dy decreases the magnetization of Nd-Fe-B magnets, it increases coercivity; reducing the amount of Dy is nevertheless strongly desired. Recently, Nd-Fe-B sintered magnets with a grain size of 1 μm achieved a high coercivity of ~20 kOe without the addition of Dy or other heavy rare earth elements. In this paper, the microstructure of these magnets was observed and compared to magnets with a grain size of ~3 μm, whose coercivity was 17 kOe. Microstructures were observed by scanning electron microscopy, and the grain shapes and the distribution of the Nd-rich phase were evaluated. Observations were made along two directions: the plane perpendicular to the magnetic alignment direction (c-plane side) and the plane parallel to it (c-axis side). For magnets consisting of smaller particles, the grain shapes are isotropic on the c-plane side and elongated on the c-axis side, with the minor axis tending to lie parallel to the alignment direction. The distribution of the Nd-rich phase was also evaluated for both magnets: at triple junctions it is more homogeneous for the magnets with smaller particles than for those with larger particles. It is considered that Dy-free magnets with high coercivity were realized by achieving a homogeneous distribution of the Nd-rich phase in addition to decreasing the grain size.
Analysis and identification of two reconstituted tobacco sheets by three-level infrared spectroscopy
NASA Astrophysics Data System (ADS)
Wu, Xian-xue; Xu, Chang-hua; Li, Ming; Sun, Su-qin; Li, Jin-ming; Dong, Wei
2014-07-01
Two kinds of reconstituted tobacco (RT), one from France (RTF) and one from China (RTC), were analyzed and identified by a three-level infrared spectroscopy method (Fourier-transform infrared spectroscopy (FT-IR) coupled with second derivative infrared spectroscopy (SD-IR) and two-dimensional infrared correlation spectroscopy (2D-IR)). The conventional IR spectra of RTF parallel samples were more consistent than those of RTC, according to their overlapped parallel spectra and IR spectra correlation coefficients. The FT-IR spectra of the two RTs were similar in holistic spectral profile except for small differences around 1430 cm⁻¹, indicating that they have similar chemical constituents. Analysis of the SD-IR spectra of RTFs and RTCs disclosed more distinct fingerprint features, especially peaks at 1106 (1110), 1054 (1059) and 877 (874) cm⁻¹. The even better reproducibility of the five SD-IR spectra of RTF in 1750-1400 cm⁻¹ could be seen intuitively from their stacked spectra and was confirmed by further similarity evaluation of the SD-IR spectra. The existence of calcium carbonate and calcium oxalate could be easily observed in the two RTs by comparing their spectra with references. Furthermore, the 2D-IR spectra provided obvious, vivid and intuitive differences between RTF and RTC. Both RTs had a pair of strong positive auto-peaks in 1600-1400 cm⁻¹. Specifically, the auto-peak at 1586 cm⁻¹ in RTF was stronger than the one around 1421 cm⁻¹, whereas the one at 1587 cm⁻¹ in RTC was weaker than that at 1458 cm⁻¹. Consequently, the RTs of the two different brands were analyzed and identified thoroughly, and RTF showed better homogeneity than RTC. Three-level infrared spectroscopy has thus proved to be a simple, convenient and efficient method for the rapid discrimination and homogeneity estimation of RT.
Origins and nature of non-Fickian transport through fractures
NASA Astrophysics Data System (ADS)
Wang, L.; Cardenas, M. B.
2014-12-01
Non-Fickian transport occurs across all scales within fractured and porous geological media. Fundamental understanding and appropriate characterization of non-Fickian transport through fractures is critical for understanding and predicting the fate of solutes and other scalars. We use both analytical and numerical modeling, including direct numerical simulation and particle-tracking random walks, to investigate the origin of non-Fickian transport through both homogeneous and heterogeneous fractures. For the simplest homogeneous fracture, i.e., parallel plates, we theoretically derived a formula for the dynamic longitudinal dispersion (D) within Poiseuille flow. Using the closed-form expression for the theoretical D, we quantified the time (T) and length (L) scales separating preasymptotic and asymptotic dispersive transport, with T and L proportional to the aperture (b) of the parallel plates to the second and fourth orders, respectively. As for heterogeneous fractures, the fracture roughness and correlation length are closely associated with T and L, and thus indicate the origin of non-Fickian transport. Modeling solute transport through 2D rough-walled fractures with a continuous time random walk with truncated power law shows that the degree of deviation from Fickian transport is proportional to fracture roughness. The estimated L for 2D rough-walled fractures is significantly longer than that derived from the formula for Poiseuille flow with equivalent b. Moreover, we artificially generated normally distributed 3D fractures with fixed correlation length but different fracture dimensions. Solute transport through the 3D fractures was modeled with a particle-tracking random walk algorithm. We found that transport transitions from non-Fickian to Fickian with increasing fracture dimensions, where the estimated L for the studied 3D fractures is related to the correlation length.
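The quoted scalings can be reconstructed, up to O(1) prefactors that are not reproduced here, from transverse diffusion across the aperture and Poiseuille flow at a fixed pressure gradient (both assumptions consistent with, but not quoted from, the abstract):

```latex
% T: time to diffuse across the aperture b (molecular diffusivity D_m);
% at a fixed pressure gradient the mean Poiseuille velocity scales as
% b^2, so the asymptotic length L = u*T gains two more powers of b.
T \sim \frac{b^{2}}{D_m}, \qquad \bar{u} \propto b^{2}
\quad\Longrightarrow\quad L \sim \bar{u}\,T \propto b^{4}.
```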
System-wide power management control via clock distribution network
Coteus, Paul W.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Reed, Don D.
2015-05-19
An apparatus, method and computer program product for automatically controlling power dissipation of a parallel computing system that includes a plurality of processors. A computing device issues a command to the parallel computing system. A clock pulse-width modulator encodes the command in a system clock signal to be distributed to the plurality of processors. The plurality of processors in the parallel computing system receive the system clock signal including the encoded command, and adjusts power dissipation according to the encoded command.
NASA Astrophysics Data System (ADS)
Lin, Ruei-Fong; O'C. Starr, David; Demott, Paul J.; Cotton, Richard; Sassen, Kenneth; Jensen, Eric; Kärcher, Bernd; Liu, Xiaohong
2002-08-01
The Cirrus Parcel Model Comparison Project, a project of the GCSS [Global Energy and Water Cycle Experiment (GEWEX) Cloud System Studies] Working Group on Cirrus Cloud Systems, involves the systematic comparison of current models of ice crystal nucleation and growth for specified, typical, cirrus cloud environments. In Phase 1 of the project reported here, simulated cirrus cloud microphysical properties from seven models are compared for 'warm' (−40°C) and 'cold' (−60°C) cirrus, each subject to updrafts of 0.04, 0.2, and 1 m s−1. The models employ explicit microphysical schemes wherein the size distribution of each class of particles (aerosols and ice crystals) is resolved into bins or the evolution of each individual particle is traced. Simulations are made including both homogeneous and heterogeneous ice nucleation mechanisms (all-mode simulations). A single initial aerosol population of sulfuric acid particles is prescribed for all simulations. Heterogeneous nucleation is disabled for a second parallel set of simulations in order to isolate the treatment of the homogeneous freezing (of haze droplets) nucleation process. Analysis of these latter simulations is the primary focus of this paper. Qualitative agreement is found for the homogeneous-nucleation-only simulations; for example, the number density of nucleated ice crystals increases with the strength of the prescribed updraft. However, significant quantitative differences are found. Detailed analysis reveals that the homogeneous nucleation rate, haze particle solution concentration, and water vapor uptake rate by ice crystal growth (particularly as controlled by the deposition coefficient) are critical components that lead to differences in the predicted microphysics. Systematic differences exist between results based on a modified classical theory approach and models using an effective freezing temperature approach to the treatment of nucleation. Each method is constrained by critical freezing data from laboratory studies, but each includes assumptions that can only be justified by further laboratory research. Consequently, it is not yet clear if the two approaches can be made consistent. Large haze particles may deviate considerably from equilibrium size in moderate to strong updrafts (0.2-1 m s−1) at −60°C. The equilibrium assumption is commonly invoked in cirrus parcel models. The resulting difference in particle-size-dependent solution concentration of haze particles may significantly affect the ice particle formation rate during the initial nucleation interval. The uptake rate for water vapor excess by ice crystals is another key component regulating the total number of nucleated ice crystals. This rate, the product of particle number concentration and ice crystal diffusional growth rate, which is particularly sensitive to the deposition coefficient when ice particles are small, modulates the peak particle formation rate achieved in an air parcel and the duration of the active nucleation time period. The consequent differences in cloud microphysical properties, and thus cloud optical properties, between state-of-the-art models of ice crystal initiation are significant. Intermodel differences in the case of all-mode simulations are correspondingly greater than in the case of homogeneous nucleation acting alone. Definitive laboratory and atmospheric benchmark data are needed to improve the treatment of heterogeneous nucleation processes.
NASA Technical Reports Server (NTRS)
Reinsch, K. G. (Editor); Schmidt, W. (Editor); Ecer, A. (Editor); Haeuser, Jochem (Editor); Periaux, J. (Editor)
1992-01-01
A conference was held on parallel computational fluid dynamics; this volume collects the related papers. Topics discussed in these papers include: parallel implicit and explicit solvers for compressible flow, parallel computational techniques for the Euler and Navier-Stokes equations, grid generation techniques for parallel computers, and aerodynamic simulation on massively parallel systems.
Data Partitioning and Load Balancing in Parallel Disk Systems
NASA Technical Reports Server (NTRS)
Scheuermann, Peter; Weikum, Gerhard; Zabback, Peter
1997-01-01
Parallel disk systems provide opportunities for exploiting I/O parallelism in two possible ways, namely via inter-request and intra-request parallelism. In this paper we discuss the main issues in performance tuning of such systems, namely striping and load balancing, and show their relationship to response time and throughput. We outline the main components of an intelligent, self-reliant file system that aims to optimize striping by taking into account the requirements of the applications, and performs load balancing by judicious file allocation and dynamic redistribution of the data when access patterns change. Our system uses simple but effective heuristics that incur very little overhead. We present performance experiments based on synthetic workloads and real-life traces.
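The two tuning knobs the abstract discusses, striping and allocation-based load balancing, can be sketched in a few lines of Python. The round-robin block map and the "place on the coolest disk" heuristic below are illustrative assumptions in the spirit of the paper, not the authors' exact algorithms.

    # Hedged sketch: round-robin striping plus heat-based file allocation.
    def stripe(n_blocks, n_disks, first_disk=0):
        """Map block i of a file to a disk, round-robin from a start disk."""
        return [(first_disk + i) % n_disks for i in range(n_blocks)]

    def allocate_file(file_heat, disk_heat):
        """Place a new file on the currently 'coolest' (least-loaded) disk."""
        coolest = min(range(len(disk_heat)), key=lambda d: disk_heat[d])
        disk_heat[coolest] += file_heat
        return coolest

    disk_heat = [0.0, 0.0, 0.0, 0.0]          # running access-rate estimate per disk
    print(stripe(n_blocks=10, n_disks=4))      # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
    print(allocate_file(2.5, disk_heat), disk_heat)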
Characterization of magnetic flux density in passive sources used in magnetic stimulation
NASA Astrophysics Data System (ADS)
Torres, J.; Hincapie, E.; Gilart, F.
2018-03-01
The spatial distribution of the magnetic flux density (B) was determined for the passive magnetic field sources most commonly used in magnetic stimulation of biological systems, toroidal and cylindrical dipole magnets, in order to characterize the field within the volumes of interest for the treatment of biological systems. The components of B perpendicular and parallel to the polar surface of the magnets were measured with a FW Bell 5180 digital teslameter, using longitudinal and transverse probes and a two-dimensional positioning system with millimeter resolution. Magnets of this type, which are the most widely used, were found to exhibit strong variation in both the magnitude and the direction of the magnetic flux density over millimeter-scale distances, so the homogeneity of the field in the regions of interest is relatively low. This makes them well suited to stimulation of biological systems that require magnetic field gradients of up to mT/mm in the case of cylindrical magnets, and up to tens of mT/mm in the case of toroidal magnets. Finally, it is concluded that a high percentage of experiments reported in the literature on magnetic treatment of biological systems may report dose values of B that deviate by more than 100% from the real value, which calls the proposed cause-effect relations into question.
Conceptual design of a hybrid parallel mechanism for mask exchanging of TMT
NASA Astrophysics Data System (ADS)
Wang, Jianping; Zhou, Hongfei; Li, Kexuan; Zhou, Zengxiang; Zhai, Chao
2015-10-01
The mask exchange system is an important part of the Multi-Object Broadband Imaging Echellette (MOBIE) on the Thirty Meter Telescope (TMT). To address the problem that the stiffness of the mask exchange system in the MOBIE changes with the gravity vector, a hybrid parallel mechanism design method was adopted throughout this work. Combining the high stiffness and precision of a parallel structure with the large range of motion of a serial structure, a conceptual design of a hybrid parallel mask exchange system based on a 3-RPS parallel mechanism is presented. According to the position requirements of the MOBIE, a SolidWorks structural model of the hybrid parallel mask exchange robot was established, and an installation position that does not interfere with the related components and light path in the MOBIE was identified. Simulation results in SolidWorks suggest that the 3-RPS parallel platform has good stiffness in different gravity vector directions. Furthermore, through analysis of the mechanism, the inverse kinematics solution of the 3-RPS parallel platform was calculated, and the mathematical relationship between the attitude angle of the moving platform and the angles of the ball hinges on the moving platform was established, in order to analyze the attitude adjustment capability of the hybrid parallel mask exchange robot. The proposed conceptual design offers guidance for the design of the mask exchange system of the MOBIE on TMT.
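Since the abstract stops short of the kinematic details, here is a generic inverse-kinematics sketch for a 3-RPS-style platform: each prismatic joint length is the distance from its base joint to the corresponding platform joint after the pose transform. The joint circle radii, leg angles, and the two-tilt-plus-heave pose parametrization are illustrative assumptions, not the MOBIE design values.

    # Generic 3-RPS-style inverse kinematics sketch (assumed geometry).
    import numpy as np

    def rot_xy(alpha, beta):
        """Rotation about x (alpha) then y (beta); 3-RPS platforms control
        two tilt angles plus heave."""
        ca, sa, cb, sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
        Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
        Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
        return Ry @ Rx

    def leg_lengths(p, R, base_r=1.0, plat_r=0.5):
        """Prismatic joint lengths for platform position p and rotation R."""
        angles = np.deg2rad([90, 210, 330])      # legs 120 degrees apart
        a = np.stack([base_r * np.array([np.cos(t), np.sin(t), 0]) for t in angles])
        b = np.stack([plat_r * np.array([np.cos(t), np.sin(t), 0]) for t in angles])
        return np.linalg.norm(p + (R @ b.T).T - a, axis=1)

    print(leg_lengths(np.array([0, 0, 1.2]),
                      rot_xy(np.deg2rad(3), np.deg2rad(-2))))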
Localized coherence in two interacting populations of social agents
NASA Astrophysics Data System (ADS)
González-Avella, J. C.; Cosenza, M. G.; San Miguel, M.
2014-04-01
We investigate the emergence of localized coherent behavior in systems consisting of two populations of social agents possessing a condition for non-interacting states, mutually coupled through global interaction fields. We employ two examples of such dynamics: (i) Axelrod’s model for social influence, and (ii) a discrete version of a bounded confidence model for opinion formation. In each case, the global interaction fields correspond to the statistical mode of the states of the agents in each population. In both systems we find localized coherent states for some values of parameters, consisting of one population in a homogeneous state and the other in a disordered state. This situation can be considered as a social analogue to a chimera state arising in two interacting populations of oscillators. In addition, other asymptotic collective behaviors appear in both systems depending on parameter values: a common homogeneous state, where both populations reach the same state; different homogeneous states, where both population reach homogeneous states different from each other; and a disordered state, where both populations reach inhomogeneous states.
Support for Debugging Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)
2001-01-01
We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.
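The comparison at the heart of relative debugging can be stated compactly: capture a watched variable at matching points of the serial and parallel runs and report the first divergence. The trace-capture mechanism and tolerance below are assumptions for illustration; the actual system drives this with parallelization-tool metadata and dynamic instrumentation.

    # Minimal sketch of the relative-debugging comparison step.
    import numpy as np

    def first_divergence(serial_trace, parallel_trace, rtol=1e-10):
        """Return (step, indices) of the first mismatch, or None if equal."""
        for step, (s, p) in enumerate(zip(serial_trace, parallel_trace)):
            if not np.allclose(s, p, rtol=rtol):
                bad = np.flatnonzero(~np.isclose(s, p, rtol=rtol))
                return step, bad
        return None

    serial   = [np.array([1.0, 2.0]), np.array([2.0, 4.0])]
    parallel = [np.array([1.0, 2.0]), np.array([2.0, 4.1])]  # e.g. reduction-order drift
    print(first_divergence(serial, parallel))  # (1, array([1]))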
Relative Debugging of Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)
2002-01-01
We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular, the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.
Work stealing for GPU-accelerated parallel programs in a global address space framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arafat, Humayun; Dinan, James; Krishnamoorthy, Sriram
Task parallelism is an attractive approach to automatically load balance the computation in a parallel system and adapt to dynamism exhibited by parallel systems. Exploiting task parallelism through work stealing has been extensively studied in shared- and distributed-memory contexts. In this paper, we study the design of a system that uses work stealing for dynamic load balancing of task-parallel programs executed on hybrid distributed-memory CPU-graphics processing unit (GPU) systems in a global-address-space framework. We take into account the unique nature of the accelerator model employed by GPUs, the significant performance difference between GPU and CPU execution as a function of problem size, and the distinct CPU and GPU memory domains. We consider various alternatives in designing a distributed work stealing algorithm for CPU-GPU systems, while taking into account the impact of task distribution and data movement overheads. These strategies are evaluated using microbenchmarks that capture various execution configurations as well as the state-of-the-art CCSD(T) application module from the computational chemistry domain.
Donovan, Preston; Chehreghanianzabi, Yasaman; Rathinam, Muruhan; Zustiak, Silviya Petrova
2016-01-01
The study of diffusion in macromolecular solutions is important in many biomedical applications such as separations, drug delivery, and cell encapsulation, and key for many biological processes such as protein assembly and interstitial transport. Not surprisingly, multiple models for the a priori prediction of diffusion in macromolecular environments have been proposed. However, most models include parameters that are not readily measurable, are specific to the polymer-solute-solvent system, or are fitted and do not have a physical meaning. Here, for the first time, we develop a homogenization theory framework for the prediction of effective solute diffusivity in macromolecular environments based on physical parameters that are easily measurable and not specific to the macromolecule-solute-solvent system. Homogenization theory is useful for situations where knowledge of fine-scale parameters is used to predict bulk system behavior. As a first approximation, we focus on a model where the solute is subjected to obstructed diffusion via stationary spherical obstacles. We find that the homogenization theory results agree well with computationally more expensive Monte Carlo simulations. Moreover, the homogenization theory agrees with effective diffusivities of a solute in dilute and semi-dilute polymer solutions measured using fluorescence correlation spectroscopy. Lastly, we provide a mathematical formula for the effective diffusivity in terms of a non-dimensional and easily measurable geometric system parameter.
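The closed-form result itself is not quoted in the abstract. As a stand-in for flavor, the classical Maxwell formula for impermeable spherical obstacles expresses the same kind of relation, effective diffusivity as a function of a single geometric parameter (the obstacle volume fraction φ); it is a textbook result, not necessarily the expression derived in the paper.

    # Classical Maxwell effective diffusivity for dilute impermeable spheres:
    # D_eff / D0 = 2 (1 - phi) / (2 + phi). A textbook stand-in, not the
    # paper's derived formula.
    def maxwell_effective_diffusivity(d0, phi):
        """Effective diffusivity among impermeable spherical obstacles
        occupying volume fraction phi (valid for dilute phi)."""
        if not 0 <= phi < 1:
            raise ValueError("volume fraction must lie in [0, 1)")
        return d0 * 2 * (1 - phi) / (2 + phi)

    # Free diffusivity 100 um^2/s, 10% obstacle volume fraction:
    print(maxwell_effective_diffusivity(100.0, 0.10))  # ~85.7 um^2/s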
Lightness of an object under two illumination levels.
Zdravković, Suncica; Economou, Elias; Gilchrist, Alan
2006-01-01
Anchoring theory (Gilchrist et al, 1999 Psychological Review 106 795-834) predicts a wide range of lightness errors, including failures of constancy in multi-illumination scenes and a long list of well-known lightness illusions seen under homogeneous illumination. Lightness values are computed both locally and globally and then averaged together. Local values are computed within a given region of homogeneous illumination. Thus, for an object that extends through two different illumination levels, anchoring theory produces two values, one for the patch in brighter illumination and one for the patch in dimmer illumination. Observers can give matches for these patches separately, but they can also give a single match for the whole object. Anchoring theory in its current form is unable to predict these object matches. We report eight experiments in which we studied the relationship between patch matches and object matches. The results show that the object match represents a compromise between the match for the patch in the field of highest illumination and the patch in the largest field of illumination. These two principles are parallel to the rules found for anchoring lightness: highest luminance rule and area rule.
Macro Scale Independently Homogenized Subcells for Modeling Braided Composites
NASA Technical Reports Server (NTRS)
Blinzler, Brina J.; Goldberg, Robert K.; Binienda, Wieslaw K.
2012-01-01
An analytical method has been developed to analyze the impact response of triaxially braided carbon fiber composites, including the penetration velocity and impact damage patterns. In the analytical model, the triaxial braid architecture is simulated by using four parallel shell elements, each of which is modeled as a laminated composite. Currently, each shell element is considered to be a smeared homogeneous material. The commercial transient dynamic finite element code LS-DYNA is used to conduct the simulations, and a continuum damage mechanics model internal to LS-DYNA is used as the material constitutive model. To determine the stiffness and strength properties required for the constitutive model, a top-down approach for determining the strength properties is merged with a bottom-up approach for determining the stiffness properties. The top-down portion uses global strengths obtained from macro-scale coupon level testing to characterize the material strengths for each subcell. The bottom-up portion uses micro-scale fiber and matrix stiffness properties to characterize the material stiffness for each subcell. Simulations of quasi-static coupon level tests for several representative composites are conducted along with impact simulations.
Algebraic reasoning for the enhancement of data-driven building reconstructions
NASA Astrophysics Data System (ADS)
Meidow, Jochen; Hammer, Horst
2016-04-01
Data-driven approaches for the reconstruction of buildings feature the flexibility needed to capture objects of arbitrary shape. To recognize man-made structures, geometric relations such as orthogonality or parallelism have to be detected. These constraints are typically formulated as sets of multivariate polynomials. For the enforcement of the constraints within an adjustment process, a set of independent and consistent geometric constraints has to be determined. Gröbner bases are an ideal tool to identify such sets exactly. A complete workflow for geometric reasoning is presented to obtain boundary representations of solids based on given point clouds. The constraints are formulated in homogeneous coordinates, which results in simple polynomials suitable for the successful derivation of Gröbner bases for algebraic reasoning. Strategies for the reduction of the algebraic complexity are presented. To enforce the constraints, an adjustment model is introduced, which is able to cope with homogeneous coordinates along with their singular covariance matrices. The feasibility and the potential of the approach are demonstrated by the analysis of a real data set.
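A toy version of the reasoning step can be run with sympy: write the geometric relations as polynomials and use a Gröbner basis to detect dependent constraints. The three-constraint set below is a made-up 2D example, not the paper's building model.

    # Detecting a dependent geometric constraint with a Groebner basis.
    import sympy as sp

    u1, u2, v1, v2, w1, w2 = sp.symbols('u1 u2 v1 v2 w1 w2')
    ortho_uv = u1*v1 + u2*v2          # direction u orthogonal to v
    ortho_vw = v1*w1 + v2*w2          # direction v orthogonal to w
    par_uw   = u1*w2 - u2*w1          # direction u parallel to w

    G = sp.groebner([ortho_uv, ortho_vw], u1, u2, v1, v2, w1, w2, order='lex')
    # v1 * (parallelism) = w2*ortho_uv - u2*ortho_vw lies in the ideal,
    # so u || w is implied wherever v1 != 0:
    print(G.reduce(v1 * par_uw)[1])   # remainder 0 => dependent constraint
    print(G.reduce(par_uw)[1])        # nonzero: not implied when v vanishes

Because the parallelism polynomial reduces to zero (after multiplication by v1), it would be redundant in an adjustment that already enforces the two orthogonality constraints, which is exactly the kind of dependency the paper's workflow weeds out.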
Innovative mathematical modeling in environmental remediation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeh, Gour T.; National Central Univ.; Univ. of Central Florida
2013-05-01
There are two different ways to model reactive transport: ad hoc and innovative reaction-based approaches. The former, such as the Kd simplification of adsorption, has been widely employed by practitioners, while the latter has been mainly used in scientific communities for elucidating mechanisms of biogeochemical transport processes. It is believed that innovative mechanistic-based models could serve as protocols for environmental remediation as well. This paper reviews the development of a mechanistically coupled fluid flow, thermal transport, hydrologic transport, and reactive biogeochemical model and example applications to environmental remediation problems. Theoretical bases are sufficiently described. Four example problems previously carried out are used to demonstrate how numerical experimentation can be used to evaluate the feasibility of different remediation approaches. The first one involved the application of a 56-species uranium tailing problem to the Melton Branch Subwatershed at Oak Ridge National Laboratory (ORNL) using the parallel version of the model. Simulations were made to demonstrate the potential mobilization of uranium and other chelating agents in the proposed waste disposal site. The second problem simulated a laboratory-scale system to investigate the role of natural attenuation in potential off-site migration of uranium from uranium mill tailings after restoration; it showed the inadequacy of using a single Kd even for a homogeneous medium. The third example simulated laboratory experiments involving extremely high concentrations of uranium, technetium, aluminum, nitrate, and toxic metals (e.g., Ni, Cr, Co). The fourth example modeled microbially mediated immobilization of uranium in an unconfined aquifer using acetate amendment in a field-scale experiment. The purposes of these modeling studies were to simulate various mechanisms of mobilization and immobilization of radioactive wastes and to illustrate how to apply reactive transport models for environmental remediation.
NASA Technical Reports Server (NTRS)
Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan
1994-01-01
A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to and from remote nodes' working memories. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge base partitions indicate that significant speed increases, including superlinear speedup in some cases, are possible.
Modelling parallel programs and multiprocessor architectures with AXE
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Fineman, Charles E.
1991-01-01
AXE, An Experimental Environment for Parallel Systems, was designed to model and simulate parallel systems at the process level. It provides an integrated environment for specifying computation models, multiprocessor architectures, data collection, and performance visualization. AXE is being used at NASA-Ames for developing resource management strategies, parallel problem formulation, multiprocessor architectures, and operating system issues related to the High Performance Computing and Communications Program. AXE's simple, structured user interface enables the user to model parallel programs and machines precisely and efficiently. Its quick turnaround time keeps the user interested and productive. AXE models multicomputers. The user may easily modify various architectural parameters including the number of sites, connection topologies, and overhead for operating system activities. Parallel computations in AXE are represented as collections of autonomous computing objects known as players. Their use and behavior are described. Performance data of the multiprocessor model can be observed on a color screen. These include CPU and message routing bottlenecks, and the dynamic status of the software.
AZTEC: Parallel Iterative Method Software for Solving Linear Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, S.; Shadid, J.; Tuminaro, R.
1995-07-01
AZTEC is an iterative solver library that greatly simplifies the parallelization process when solving linear systems of equations Ax = b, where A is a user-supplied n × n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. AZTEC is intended as a software tool for users who want to avoid cumbersome parallel programming details but who have large sparse linear systems that require an efficiently utilized parallel processing system. A collection of data transformation tools is provided that allows for easy creation of distributed sparse unstructured matrices for parallel solution.
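AZTEC's interface is a C library; purely as an illustration of the task it automates, handing a large sparse A and right-hand side b to an iterative Krylov solver, here is a SciPy stand-in (conjugate gradients on a 1-D Poisson matrix). Nothing below is AZTEC's actual API.

    # SciPy stand-in for the Ax = b iterative-solve workflow.
    import numpy as np
    import scipy.sparse as sparse
    from scipy.sparse.linalg import cg

    n = 1000
    # 1-D Poisson matrix: sparse, symmetric positive definite
    A = sparse.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')
    b = np.ones(n)

    x, info = cg(A, b)                  # info == 0 signals convergence
    print(info, np.linalg.norm(A @ x - b))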
Bit-parallel arithmetic in a massively-parallel associative processor
NASA Technical Reports Server (NTRS)
Scherson, Isaac D.; Kramer, David A.; Alleyne, Brian D.
1992-01-01
A simple but powerful new architecture based on a classical associative processor model is presented. Algorithms for performing the four basic arithmetic operations on both integer and floating point operands are described. For m-bit operands, the proposed architecture makes it possible to execute complex operations in O(m) cycles, as opposed to O(m²) for bit-serial machines. A word-parallel, bit-parallel, massively parallel computing system can be constructed using this architecture with VLSI technology. The operation of this system is demonstrated for the fast Fourier transform and matrix multiplication.
Equalizer: a scalable parallel rendering framework.
Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato
2009-01-01
Continuing improvements in CPU and GPU performance as well as increasing multi-core processor and cluster-based parallelism demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware-accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic enough to support various types of data and visualization applications and, at the same time, work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems, ranging from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture and the basic API, discuss its advantages over previous approaches, and present example configurations and usage scenarios as well as scalability results.
Re-Innovating Recycling for Turbulent Boundary Layer Simulations
NASA Astrophysics Data System (ADS)
Ruan, Joseph; Blanquart, Guillaume
2017-11-01
Historically, turbulent boundary layers along a flat plate have been expensive to simulate numerically, in part due to the difficulty of initializing the inflow with "realistic" turbulence, but also due to boundary layer growth. The former has been resolved in several ways, primarily by dedicating a region of at least 10 boundary layer thicknesses in width to rescale and recycle the flow, or by extending the domain far enough downstream to allow a laminar flow to develop into turbulence. Both of these methods are relatively costly. We propose a new method to remove the need for an inflow region, thus reducing computational costs significantly. Leveraging the scale similarity of the mean flow profiles, we introduce a coordinate transformation so that the boundary layer problem can be solved as a parallel flow problem with additional source terms. The solutions in the new coordinate system are statistically homogeneous in the downstream direction, so the problem can be solved with periodic boundary conditions. The present study shows the stability of this method, its implementation, and its validation for a few laminar and turbulent boundary layer cases.
Bayesian sensitivity analysis of bifurcating nonlinear models
NASA Astrophysics Data System (ADS)
Becker, W.; Worden, K.; Rowson, J.
2013-01-01
Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions, and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially bifurcating models which cannot be dealt with by a single GP, although how to manage bifurcation boundaries that are not parallel to the coordinate axes remains an open problem.
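The partition-then-fit idea can be sketched with scikit-learn: a greedy CART tree splits the input space, and an independent Gaussian process is fit in each leaf. The method the abstract references uses Bayesian tree proposals; this greedy stand-in only illustrates the structure, on a synthetic response with a jump.

    # Simplified "treed GP": CART partition + one GP per leaf (illustrative).
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 1))
    # response with a bifurcation-like discontinuity at x = 0
    y = np.sin(4 * X[:, 0]) + np.where(X[:, 0] < 0, 0.0, 2.0) \
        + 0.05 * rng.normal(size=200)

    tree = DecisionTreeRegressor(max_leaf_nodes=4, min_samples_leaf=20).fit(X, y)
    leaf_of = tree.apply(X)
    gps = {leaf: GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-2)
                 .fit(X[leaf_of == leaf], y[leaf_of == leaf])
           for leaf in np.unique(leaf_of)}

    def predict(X_new):
        """Route each point through the tree, then query that leaf's GP."""
        leaves = tree.apply(X_new)
        out = np.empty(len(X_new))
        for leaf in np.unique(leaves):
            out[leaves == leaf] = gps[leaf].predict(X_new[leaves == leaf])
        return out

    print(predict(np.linspace(-1, 1, 9).reshape(-1, 1)).round(2))

Because each leaf has its own kernel hyperparameters, the discontinuity is absorbed by the partition boundary rather than smoothed over, which is the property the abstract highlights. Note the splits here are axis-aligned, the very limitation the abstract closes on.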
NASA Astrophysics Data System (ADS)
Powell, Charles; Jiang, Jing; Walters, Diane; Ediger, Mark
Vapor-deposited glasses are widely investigated for use in organic electronics, including the emitting layers of OLED devices. These materials, while macroscopically homogeneous, have anisotropic packing and molecular orientation. By controlling this orientation, outcoupling efficiency can be increased by aligning the transition dipole moment of the light-emitting molecules parallel to the substrate. Light-emitting molecules are typically dispersed in a host matrix; as such, it is imperative to understand molecular orientation in two-component systems. In this study we examine two-component vapor-deposited films and the orientations of the constituent molecules using spectroscopic ellipsometry, UV-vis and IR spectroscopy. The roles of temperature, composition, and molecular shape in determining molecular orientation are examined for mixtures of DSA-Ph in Alq3 and in TPD. Deposition temperature relative to the glass transition temperature of the two-component mixture is the primary controlling factor for molecular orientation. In mixtures of DSA-Ph in Alq3, the linear DSA-Ph has a horizontal orientation at low temperatures and a slight vertical orientation maximized at 0.96 Tg,mixture, analogous to one-component films.
Radiative transfer calculated from a Markov chain formalism
NASA Technical Reports Server (NTRS)
Esposito, L. W.; House, L. L.
1978-01-01
The theory of Markov chains is used to formulate the radiative transport problem in a general way by modeling the successive interactions of a photon as a stochastic process. Under the minimal requirement that the stochastic process is a Markov chain, the determination of the diffuse reflection or transmission from a scattering atmosphere is equivalent to the solution of a system of linear equations. This treatment is mathematically equivalent to, and thus has many of the advantages of, Monte Carlo methods, but can be considerably more rapid than Monte Carlo algorithms for numerical calculations in particular applications. We have verified the speed and accuracy of this formalism for the standard problem of finding the intensity of scattered light from a homogeneous plane-parallel atmosphere with an arbitrary phase function for scattering. Accurate results over a wide range of parameters were obtained with computation times comparable to those of a standard 'doubling' routine. The generality of this formalism thus allows fast, direct solutions to problems that were previously soluble only by Monte Carlo methods. Some comparisons are made with respect to integral equation methods.
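The linear-algebra core of the approach can be shown with a toy absorbing Markov chain: transient states are scattering events in each layer of a slab, absorbing states are "reflected", "transmitted", and "absorbed", and the eventual-fate probabilities come from one matrix solve instead of many Monte Carlo histories. The layer-to-layer transition kernel below is a deliberately crude discretization, not the authors' formulation.

    # Toy absorbing Markov chain for photon transport in a layered slab.
    import numpy as np

    n_layers, omega = 5, 0.9          # layers; single-scattering albedo
    # Q: transient-to-transient. After scattering in layer i, the photon
    # next scatters in layer i-1 or i+1 with equal probability (toy kernel).
    Q = np.zeros((n_layers, n_layers))
    # R: transient-to-absorbing. Columns = [reflected (escape top),
    # transmitted (escape bottom), absorbed in the medium].
    R = np.zeros((n_layers, 3))
    for i in range(n_layers):
        R[i, 2] = 1 - omega                       # absorbed at this event
        for nxt, escape_col in ((i - 1, 0), (i + 1, 1)):
            if 0 <= nxt < n_layers:
                Q[i, nxt] = omega / 2
            else:
                R[i, escape_col] = omega / 2      # leaves the slab

    N = np.linalg.inv(np.eye(n_layers) - Q)       # fundamental matrix
    B = N @ R                                     # eventual-fate probabilities
    print(B[0])  # fate of a photon first scattered in the top layer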
Homogenous polynomially parameter-dependent H∞ filter designs of discrete-time fuzzy systems.
Zhang, Huaguang; Xie, Xiangpeng; Tong, Shaocheng
2011-10-01
This paper proposes a novel H∞ filtering technique for a class of discrete-time fuzzy systems. First, a novel kind of fuzzy H∞ filter, which is homogeneous polynomially parameter-dependent on the membership functions with an arbitrary degree, is developed to guarantee the asymptotic stability and a prescribed H∞ performance of the filtering error system. Second, relaxed conditions for H∞ performance analysis are proposed by using a new fuzzy Lyapunov function and the Finsler lemma with homogeneous polynomial matrix Lagrange multipliers. Then, based on a new kind of slack variable technique, relaxed linear matrix inequality-based H∞ filtering conditions are proposed. Finally, two numerical examples are provided to illustrate the effectiveness of the proposed approach.
The 'Biologically-Inspired Computing' Column
NASA Technical Reports Server (NTRS)
Hinchey, Mike
2006-01-01
The field of Biology changed dramatically in 1953, with the determination by Francis Crick and James Dewey Watson of the double helix structure of DNA. This discovery changed Biology for ever, allowing the sequencing of the human genome, and the emergence of a "new Biology" focused on DNA, genes, proteins, data, and search. Computational Biology and Bioinformatics heavily rely on computing to facilitate research into life and development. Simultaneously, an understanding of the biology of living organisms indicates a parallel with computing systems: molecules in living cells interact, grow, and transform according to the "program" dictated by DNA. Moreover, paradigms of Computing are emerging based on modelling and developing computer-based systems exploiting ideas that are observed in nature. This includes building into computer systems self-management and self-governance mechanisms that are inspired by the human body's autonomic nervous system, modelling evolutionary systems analogous to colonies of ants or other insects, and developing highly-efficient and highly-complex distributed systems from large numbers of (often quite simple) largely homogeneous components to reflect the behaviour of flocks of birds, swarms of bees, herds of animals, or schools of fish. This new field of "Biologically-Inspired Computing", often known in other incarnations by other names, such as: Autonomic Computing, Pervasive Computing, Organic Computing, Biomimetics, and Artificial Life, amongst others, is poised at the intersection of Computer Science, Engineering, Mathematics, and the Life Sciences. Successes have been reported in the fields of drug discovery, data communications, computer animation, control and command, exploration systems for space, undersea, and harsh environments, to name but a few, and augur much promise for future progress.
PRAIS: Distributed, real-time knowledge-based systems made easy
NASA Technical Reports Server (NTRS)
Goldstein, David G.
1990-01-01
This paper discusses an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS). PRAIS strives to transparently parallelize production (rule-based) systems, even under real-time constraints. PRAIS accomplishes these goals by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors.
MARBLE: A system for executing expert systems in parallel
NASA Technical Reports Server (NTRS)
Myers, Leonard; Johnson, Coe; Johnson, Dean
1990-01-01
This paper details the MARBLE 2.0 system, which provides a parallel environment for cooperating expert systems. The work has been done in conjunction with the development of an intelligent computer-aided design system, ICADS, by the CAD Research Unit of the Design Institute at California Polytechnic State University. MARBLE (Multiple Accessed Rete Blackboard Linked Experts) is a system built on the C Language Integrated Production System (CLIPS) expert system tool. A copied blackboard is used for communication between the shells to establish an architecture which supports cooperating expert systems that execute in parallel. The design of MARBLE is simple, but it provides support for a rich variety of configurations, while making it relatively easy to demonstrate the correctness of its parallel execution features. In its most elementary configuration, individual CLIPS expert systems execute on their own processors and communicate with each other through a modified blackboard. Control of the system as a whole, and specifically of writing to the blackboard, is provided by one of the CLIPS expert systems, an expert control system.
Performance Analysis of Multilevel Parallel Applications on Shared Memory Architectures
NASA Technical Reports Server (NTRS)
Biegel, Bryan A. (Technical Monitor); Jost, G.; Jin, H.; Labarta J.; Gimenez, J.; Caubet, J.
2003-01-01
Parallel programming paradigms include process-level parallelism, thread-level parallelism, and multilevel parallelism. This viewgraph presentation describes a detailed performance analysis of these paradigms for Shared Memory Architectures (SMA). The analysis uses the Paraver performance analysis system. The presentation includes diagrams of a flow of useful computations.
Methods for design and evaluation of parallel computing systems (The PISCES project)
NASA Technical Reports Server (NTRS)
Pratt, Terrence W.; Wise, Robert; Haught, Mary Jo
1989-01-01
The PISCES project started in 1984 under the sponsorship of the NASA Computational Structural Mechanics (CSM) program. A PISCES 1 programming environment and parallel FORTRAN were implemented in 1984 for the DEC VAX (using UNIX processes to simulate parallel processes). This system was used for experimentation with parallel programs for scientific applications and AI (dynamic scene analysis) applications. PISCES 1 was ported to a network of Apollo workstations by N. Fitzgerald.
Karasick, Michael S.; Strip, David R.
1996-01-01
A parallel computing system is described that comprises a plurality of uniquely labeled, parallel processors, each processor capable of modelling a three-dimensional object that includes a plurality of vertices, faces and edges. The system comprises a front-end processor for issuing a modelling command to the parallel processors, relating to a three-dimensional object. Each parallel processor, in response to the command and through the use of its own unique label, creates a directed-edge (d-edge) data structure that uniquely relates an edge of the three-dimensional object to one face of the object. Each d-edge data structure at least includes vertex descriptions of the edge and a description of the one face. As a result, each processor, in response to the modelling command, operates upon a small component of the model and generates results, in parallel with all other processors, without the need for processor-to-processor intercommunication.
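A minimal rendering of the directed-edge record the patent abstract describes, with field names assumed for illustration:

    # Sketch of a d-edge record: vertex endpoints, the single incident
    # face, and the label of the creating processor (field names assumed).
    from dataclasses import dataclass
    from typing import Tuple

    Vertex = Tuple[float, float, float]

    @dataclass(frozen=True)
    class DEdge:
        tail: Vertex          # edge start vertex
        head: Vertex          # edge end vertex
        face_id: int          # the one face this directed edge bounds
        processor: int        # unique label of the creating processor

    # Each face contributes one d-edge per boundary edge, so a shared edge
    # appears twice with opposite directions, once per incident face:
    e1 = DEdge((0, 0, 0), (1, 0, 0), face_id=0, processor=3)
    e2 = DEdge((1, 0, 0), (0, 0, 0), face_id=1, processor=7)
    print(e1, e2, sep="\n")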
All-memristive neuromorphic computing with level-tuned neurons
NASA Astrophysics Data System (ADS)
Pantazi, Angeliki; Woźniak, Stanisław; Tuma, Tomas; Eleftheriou, Evangelos
2016-09-01
In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from the vast amounts of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized by using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture, along with the homogeneous neuro-synaptic dynamics implemented with nanoscale phase-change memristors, represents a significant step towards the development of ultrahigh-density neuromorphic co-processors.
Aho-Corasick String Matching on Shared and Distributed Memory Parallel Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel
String matching is at the core of many critical applications, including network intrusion detection systems, search engines, virus scanners, spam filters, DNA and protein sequencing, and data mining. For all of these applications string matching requires a combination of (sometimes all of) the following characteristics: high and/or predictable performance, support for large data sets, and flexibility of integration and customization. Many software-based implementations targeting conventional cache-based microprocessors fail to achieve high and predictable performance requirements, while Field-Programmable Gate Array (FPGA) implementations and dedicated hardware solutions fail to support large data sets (dictionary sizes) and are difficult to integrate and customize. The advent of multicore, multithreaded, and GPU-based systems is opening the possibility for software-based solutions to reach very high performance at a sustained rate. This paper compares several software-based implementations of the Aho-Corasick string searching algorithm for high performance systems. We discuss the implementation of the algorithm on several types of shared-memory high-performance architectures (Niagara 2, large x86 SMPs and Cray XMT), distributed memory with homogeneous processing elements (InfiniBand cluster of x86 multicores) and heterogeneous processing elements (InfiniBand cluster of x86 multicores with NVIDIA Tesla C10 GPUs). We describe in detail how each solution achieves the objectives of supporting large dictionaries, sustaining high performance, and enabling customization and flexibility using various data sets.
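For reference, the sequential algorithm that all of the compared implementations parallelize fits in a few dozen lines: build the goto/fail/output automaton over the pattern set, then scan the text once. The parallel variants partition this scan across threads, nodes, and GPUs, which is not shown here.

    # Sequential Aho-Corasick reference (the kernel the paper parallelizes).
    from collections import deque

    def build(patterns):
        """Build goto/fail/output tables for the pattern set."""
        goto, fail, out = [{}], [0], [set()]
        for p in patterns:                      # keyword trie
            s = 0
            for ch in p:
                if ch not in goto[s]:
                    goto.append({}); fail.append(0); out.append(set())
                    goto[s][ch] = len(goto) - 1
                s = goto[s][ch]
            out[s].add(p)
        q = deque(goto[0].values())             # BFS sets failure links
        while q:
            s = q.popleft()
            for ch, t in goto[s].items():
                q.append(t)
                f = fail[s]
                while f and ch not in goto[f]:
                    f = fail[f]
                fail[t] = goto[f].get(ch, 0)
                out[t] |= out[fail[t]]
        return goto, fail, out

    def search(text, goto, fail, out):
        """Single pass over the text; report (end_index, pattern) hits."""
        s, hits = 0, []
        for i, ch in enumerate(text):
            while s and ch not in goto[s]:
                s = fail[s]
            s = goto[s].get(ch, 0)
            hits.extend((i, p) for p in out[s])
        return hits

    g, f, o = build(["he", "she", "his", "hers"])
    print(search("ushers", g, f, o))  # (3,'she'), (3,'he'), (5,'hers') in some order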
NASA Astrophysics Data System (ADS)
Nelson, Chris; Anna, Shelley
2013-11-01
Droplet-based strategies for fluid manipulation have seen significant application in microfluidics due to their ability to compartmentalize solutions and facilitate highly parallelized reactions. Functioning as micro-scale reaction vessels, droplets have been used to study protein crystallization and enzyme kinetics, and to encapsulate whole cells. Recently, the mass transport out of droplets has been used to concentrate solutions and induce phase transitions. Here, we show that droplets trapped in a microfluidic array will spontaneously dehydrate over the course of several hours. By loading these devices with an initially dilute aqueous polymer solution, we use this slow dehydration to observe phase transitions and the evolution of droplet morphology in hundreds of droplets simultaneously. As an example, we trap and dehydrate droplets of a model aqueous two-phase system consisting of polyethylene glycol and dextran. Initially the drops are homogeneous; then, after some time, the polymer concentration reaches a critical point and two phases form. As water continues to leave the system, the drops transition from a microemulsion of DEX in PEG to a core-shell configuration. Eventually, changes in interfacial tension, driven by dehydration, cause the DEX core to completely de-wet from the PEG shell. Since aqueous two-phase systems are able to selectively separate a variety of biomolecules, this core-shedding behavior has the potential to provide selective, on-chip separation and concentration.
[Intrarenal smooth muscle: histology of a complex urodynamic machine].
Arias, L F; Ortiz-Arango, N
2013-03-01
To better understand the microscopic arrangement of the bundles of smooth muscle in the human renal parenchyma, their distribution, and their anatomical relationships, we attempted a reconstruction of this muscular system. Five adult human kidneys and one fetal kidney were processed in toto with cross sections every 300 μm. In the histological sections we identified the smooth muscle fibers, trying to determine their insertion, course, and anatomical relationships with other structures of the kidney tissue. There are bundles of smooth muscle fibers of variable thickness parallel to the edges of the medullary pyramids, bundles that surround the medulla in a spiral course, and bundles that accompany the arcuate vessels, the latter being the most abundant and easiest to identify. These groups of muscle fibers do not have a precise or constant insertion site, their periodicity is not homogeneous, and they are not a direct extension of the muscle of the renal pelvis, although some bundles are in contact with it. There are also unusual and inconstant small muscle fibers not associated with vessels in the interstitium of the cortex and, exceptionally, in the medulla. In summary, there is a complex microscopic system of smooth muscle fibers that partially surrounds the renal medulla and is related to the renal pelvic muscles without direct continuity with them. Although this small muscular system is under-recognized, it could be very important in urodynamics.
PlasmaLab/Eco-Plasma - The future of complex plasma research in space
NASA Astrophysics Data System (ADS)
Knapek, Christina; Thomas, Hubertus; Huber, Peter; Mohr, Daniel; Hagl, Tanja; Konopka, Uwe; Lipaev, Andrey; Morfill, Gregor; Molotkov, Vladimir
The next Russian-German cooperation for the investigation of complex plasmas under microgravity conditions on the International Space Station (ISS) is the PlasmaLab/Eco-Plasma project. For this project a new plasma chamber, the "Zyflex" chamber, is being developed: a cylindrical plasma chamber with parallel electrodes and a flexible system geometry. It is designed to extend the accessible plasma parameter range, i.e. neutral gas pressure, plasma density, and electron temperature, and also to allow independent control of the plasma parameters, therefore increasing the experimental quality and expected knowledge gain significantly. With this system it will be possible to reach low neutral gas pressures (which means weak damping of the particle motion) and to generate large, homogeneous 3D particle systems for studies of fundamental phenomena such as phase transitions, dynamics of liquids, or phase separation. The Zyflex chamber has already been operated in several parabolic flight campaigns with different configurations during the last years, yielding a promising outlook for its future development. Here, we will present the current status of the project, the technological advancements the Zyflex chamber will offer compared to its predecessors, and the latest scientific results from experiments on ground and in microgravity conditions during parabolic flights. This work and some of the authors are funded by DLR/BMWi (FKZ 50 WP 0700).
Design considerations for parallel graphics libraries
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1994-01-01
Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.
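One concrete building block of such libraries is depth compositing, which merges the partial images produced by different processors into one frame. The numpy stand-in below keeps the nearer fragment per pixel; it illustrates the operation generically and is not PGL's interface.

    # Depth compositing of two partial renders (generic illustration).
    import numpy as np

    def z_composite(color_a, depth_a, color_b, depth_b):
        """Keep the nearer fragment per pixel (smaller depth wins)."""
        nearer = depth_a <= depth_b
        color = np.where(nearer[..., None], color_a, color_b)
        return color, np.where(nearer, depth_a, depth_b)

    h, w = 2, 3
    ca = np.broadcast_to([255, 0, 0], (h, w, 3)); za = np.full((h, w), 0.5)
    cb = np.broadcast_to([0, 0, 255], (h, w, 3)); zb = np.full((h, w), 0.3)
    color, depth = z_composite(ca, za, cb, zb)
    print(color[0, 0], depth[0, 0])   # the nearer (blue) image wins

In a distributed-memory renderer this reduction runs over messages between processors rather than in-process arrays, but the per-pixel rule is the same.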
Directions in parallel programming: HPF, shared virtual memory and object parallelism in pC++
NASA Technical Reports Server (NTRS)
Bodin, Francois; Priol, Thierry; Mehrotra, Piyush; Gannon, Dennis
1994-01-01
Fortran and C++ are the dominant programming languages used in scientific computation. Consequently, extensions to these languages are the most popular for programming massively parallel computers. We discuss two such approaches to parallel Fortran and one approach to C++. The High Performance Fortran Forum has designed HPF with the intent of supporting data parallelism on Fortran 90 applications. HPF works by asking the user to help the compiler distribute and align the data structures with the distributed memory modules in the system. Fortran-S takes a different approach in which the data distribution is managed by the operating system and the user provides annotations to indicate parallel control regions. In the case of C++, we look at pC++ which is based on a concurrent aggregate parallel model.
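The ownership arithmetic behind HPF's distribution directives is simple to state. The sketch below computes which processor owns a global array index under the two canonical HPF layouts, BLOCK and CYCLIC; it shows the mapping a compiler would generate, not HPF syntax itself.

    # Owner computation for HPF-style BLOCK and CYCLIC distributions.
    from math import ceil

    def owner_block(i, n, p):
        """Owner of index i (0-based) of an n-element array on p processors,
        BLOCK layout: processor k owns indices [k*b, (k+1)*b), b = ceil(n/p)."""
        return i // ceil(n / p)

    def owner_cyclic(i, p):
        """CYCLIC layout: indices dealt round-robin across processors."""
        return i % p

    n, p = 10, 4
    print([owner_block(i, n, p) for i in range(n)])   # [0,0,0,1,1,1,2,2,2,3]
    print([owner_cyclic(i, p) for i in range(n)])     # [0,1,2,3,0,1,2,3,0,1]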
Framework Nucleic Acids-Enabled Biosensor Development.
Yang, Fan; Li, Qian; Wang, Lihua; Zhang, Guo-Jun; Fan, Chunhai
2018-05-03
Nucleic acids have been actively exploited to develop various exquisite nanostructures due to their unparalleled programmability. Especially, framework nucleic acids (FNAs) with tailorable functionality and precise addressability hold great promise for biomedical applications. In this review, we summarize recent progress of FNA-enabled biosensing in homogeneous solutions, on heterogeneous surfaces and inside cells. We describe the strategies to translate the structural order and rigidity of FNAs to interfacial engineering with high controllability, and approaches to realize multiplexing for highly parallel in-vitro detection. We also envision the marriage of the currently available FNA toolsets with other emerging technologies to develop a new generation of biosensors for precision diagnosis and bioimaging.
Performances of a HgCdTe APD-Based Detector with Electric Cooling for 2-μm DIAL/IPDA Applications
NASA Astrophysics Data System (ADS)
Dumas, A.; Rothman, J.; Gibert, F.; Lasfargues, G.; Zanatta, J.-P.; Edouart, D.
2016-06-01
In this work we report on the design and testing of an HgCdTe avalanche photodiode (APD) detector assembly for lidar applications in the short wavelength infrared region (SWIR: 1.5-2 μm). The detector consists of a set of diodes connected in parallel, forming a 200 μm sensitive area, coupled to a custom high-gain transimpedance amplifier (TIA). A commercial four-stage Peltier cooler is used to reach an operating temperature of 185 K. Performance characteristics crucial for lidar use are investigated: linearity, dynamic range, spatial homogeneity, noise, and resistance to intense illumination.
Radiative transfer in spherical shell atmospheres. I - Rayleigh scattering
NASA Technical Reports Server (NTRS)
Adams, C. N.; Kattawar, G. W.
1978-01-01
The plane-parallel approximation and the more realistic spherical shell approximation for the radiance reflected from a planetary atmosphere are compared and are applied to the study of a planet the size of the earth with a homogeneous conservative Rayleigh scattering atmosphere extending to a height of 100 km. Inadequacies of the approximations are considered. Radiance versus height distributions for both single and multiple scattering are presented, as are results for the fractional radiance from altitudes in the atmosphere which contribute to the total unidirectional reflected radiance at the top of the atmosphere. The data can be used for remote sensing applications and planetary spectroscopy.
High-frequency sound waves to eliminate a horizon in the mixmaster universe.
NASA Technical Reports Server (NTRS)
Chitre, D. M.
1972-01-01
From the linear wave equation for small-amplitude sound waves in a curved space-time, there is derived a geodesiclike differential equation for sound rays to describe the motion of wave packets. These equations are applied in the generic, nonrotating, homogeneous closed-model universe (the 'mixmaster universe,' Bianchi type IX). As for light rays described by Doroshkevich and Novikov (DN), these sound rays can circumnavigate the universe near the singularity to remove particle horizons only for a small class of these models and in special directions. Although these results parallel those of DN, different Hamiltonian methods are used for treating the Einstein equations.
Features of sound propagation through and stability of a finite shear layer
NASA Technical Reports Server (NTRS)
Koutsoyannis, S. P.
1977-01-01
The plane wave propagation, the stability, and the rectangular duct mode problems of a compressible, inviscid, linearly sheared, parallel, homogeneous flow are shown to be governed by Whittaker's equation. The exact solutions for the perturbation quantities are essentially the Whittaker M-functions where the nondimensional quantities have precise physical meanings. A number of known results are obtained as limiting cases of the exact solutions. For the compressible finite thickness shear layer it is shown that no resonances and no critical angles exist for all Mach numbers, frequencies, and shear layer velocity profile slopes except in the singular case of the vortex sheet.
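For reference, Whittaker's equation in its standard form reads

    \[
      \frac{d^{2}W}{dz^{2}}
      + \left( -\frac{1}{4} + \frac{\kappa}{z}
      + \frac{\tfrac{1}{4} - \mu^{2}}{z^{2}} \right) W = 0,
    \]

with the Whittaker functions M_{κ,μ}(z) and W_{κ,μ}(z) as solutions. The paper's reduction maps the Mach number, frequency, and velocity-profile slope onto the parameters κ and μ; that mapping is not reproduced in the abstract.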
Dual-mode switching of a liquid crystal panel for viewing angle control
NASA Astrophysics Data System (ADS)
Baek, Jong-In; Kwon, Yong-Hoan; Kim, Jae Chang; Yoon, Tae-Hoon
2007-03-01
The authors propose a method to control the viewing angle of a liquid crystal (LC) panel using dual-mode switching. To realize both wide viewing angle (WVA) characteristics and narrow viewing angle (NVA) characteristics with a single LC panel, the authors use two different dark states. The LC layer can be aligned homogeneously parallel to the transmission axis of the bottom polarizer for WVA dark state operation, while it can be aligned vertically for NVA dark state operation. The authors demonstrated that viewing angle control can be achieved with a single panel without any loss of contrast at the front.
NDL-v2.0: A new version of the numerical differentiation library for parallel architectures
NASA Astrophysics Data System (ADS)
Hadjidoukas, P. E.; Angelikopoulos, P.; Voglis, C.; Papageorgiou, D. G.; Lagaris, I. E.
2014-07-01
We present a new version of the numerical differentiation library (NDL) used for the numerical estimation of first and second order partial derivatives of a function by finite differencing. In this version we have restructured the serial implementation of the code so as to achieve optimal task-based parallelization. The pure shared-memory parallelization of the library has been based on the lightweight OpenMP tasking model, allowing for the full extraction of the available parallelism and efficient scheduling of multiple concurrent library calls. On multicore clusters, parallelism is exploited by means of TORC, an MPI-based multi-threaded tasking library. The new MPI implementation of NDL provides optimal performance in terms of function calls and, furthermore, supports asynchronous execution of multiple library calls within legacy MPI programs. In addition, a Python interface has been implemented for all cases, exporting the functionality of our library to sequential Python codes.
Catalog identifier: AEDG_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 63036
No. of bytes in distributed program, including test data, etc.: 801872
Distribution format: tar.gz
Programming language: ANSI Fortran-77, ANSI C, Python.
Computer: Distributed systems (clusters), shared memory systems.
Operating system: Linux, Unix.
Has the code been vectorized or parallelized?: Yes.
RAM: The library uses O(N) internal storage, N being the dimension of the problem. It can use up to O(N²) internal storage for Hessian calculations, if a task throttling factor has not been set by the user.
Classification: 4.9, 4.14, 6.5.
Catalog identifier of previous version: AEDG_v1_0
Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1404
Does the new version supersede the previous version?: Yes
Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, and sensitivity analysis. For a large number of scientific and engineering applications, the underlying functions correspond to simulation codes for which analytical estimation of derivatives is difficult or almost impossible. A parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems.
Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries.
Reasons for new version: The updated version was motivated by our endeavors to extend a parallel Bayesian uncertainty quantification framework [1] by incorporating higher order derivative information, as in most state-of-the-art stochastic simulation methods such as Stochastic Newton MCMC [2] and Riemannian Manifold Hamiltonian MC [3]. The function evaluations are simulations with significant time-to-solution, which also varies with the input parameters, as in [1, 4]. The runtime of the N-body type of problem changes considerably with the introduction of a longer cut-off between the bodies.
In the first version of the library, the OpenMP-parallel subroutines spawn a new team of threads and distribute the function evaluations with a PARALLEL DO directive. This limits the functionality of the library, as multiple concurrent calls require nested-parallelism support from the OpenMP environment: either their function evaluations will be serialized, or processor oversubscription is likely to occur due to the increased number of OpenMP threads. In addition, the Hessian calculations include two explicit parallel regions that compute first the diagonal and then the off-diagonal elements of the array; due to the barrier between the two regions, the parallelism of the calculations is not fully exploited. These issues have been addressed in the new version by first restructuring the serial code and then running the function evaluations in parallel using OpenMP tasks. Although the MPI-parallel implementation of the first version is capable of fully exploiting the task parallelism of the PNDL routines, it does not utilize the caching mechanism of the serial code and therefore performs some redundant function evaluations in the Hessian and Jacobian calculations. This can lead to (a) higher execution times if the number of available processors is lower than the total number of tasks, and (b) significant energy consumption due to wasted processor cycles. Overcoming these drawbacks, which become critical as the time of a single function evaluation increases, was the primary goal of this new version. Owing to the code restructuring, the MPI-parallel implementation (and, correspondingly, the OpenMP-parallel one) avoids redundant calls, providing optimal performance in terms of the number of function evaluations. Another limitation of the library was that its subroutines were collective, synchronous calls. In the new version, each MPI process can issue any number of subroutines for asynchronous execution. We introduce two library calls that provide global and local task synchronization, analogous to the BARRIER and TASKWAIT directives of OpenMP. The new MPI implementation is based on TORC, a new tasking library for multicore clusters [5-7]. TORC improves the portability of the software, as it relies exclusively on the POSIX Threads and MPI programming interfaces. It allows MPI processes to utilize multiple worker threads, offering a hybrid programming and execution environment similar to MPI+OpenMP in a completely transparent way. Finally, to further improve the usability of our software, a Python interface has been implemented on top of both the OpenMP and MPI versions of the library. This allows sequential Python codes to exploit shared- and distributed-memory systems.
Summary of revisions: The revised code improves the performance of both parallel (OpenMP and MPI) implementations. The functionality and the user interface of the MPI-parallel version have been extended to support the asynchronous execution of multiple PNDL calls issued by one or multiple MPI processes. A new underlying tasking library increases portability and allows MPI processes to have multiple worker threads. For both implementations, an interface to the Python programming language has been added.
Restrictions: The library uses only double-precision arithmetic. The MPI implementation assumes the homogeneity of the execution environment provided by the operating system.
Specifically, the processes of a single MPI application must have identical address spaces, so that a user function resides at the same virtual address in every process. In addition, address-space layout randomization should not be used for the application.
Unusual features: The software takes bound constraints into account, in the sense that only feasible points are used to evaluate the derivatives, and, given the desired level of accuracy, the proper formula is automatically employed.
Running time: The running time depends on the function's complexity. The test run took 23 ms for the serial distribution, 25 ms for the OpenMP version with 2 threads, and 53 ms and 1.01 s for the MPI-parallel distribution using 2 threads and 2 processes, respectively, with the yield-time for idle workers set to 10 ms.
References:
[1] P. Angelikopoulos, C. Papadimitriou, P. Koumoutsakos, Bayesian uncertainty quantification and propagation in molecular dynamics simulations: a high performance computing framework, J. Chem. Phys. 137 (14) (2012).
[2] H.P. Flath, L.C. Wilcox, V. Akcelik, J. Hill, B. van Bloemen Waanders, O. Ghattas, Fast algorithms for Bayesian uncertainty quantification in large-scale linear inverse problems based on low-rank partial Hessian approximations, SIAM J. Sci. Comput. 33 (1) (2011) 407-432.
[3] M. Girolami, B. Calderhead, Riemann manifold Langevin and Hamiltonian Monte Carlo methods, J. R. Stat. Soc. Ser. B (Stat. Methodol.) 73 (2) (2011) 123-214.
[4] P. Angelikopoulos, C. Papadimitriou, P. Koumoutsakos, Data driven, predictive molecular dynamics for nanoscale flow simulations under uncertainty, J. Phys. Chem. B 117 (47) (2013) 14808-14816.
[5] P.E. Hadjidoukas, E. Lappas, V.V. Dimakopoulos, A runtime library for platform-independent task parallelism, in: PDP, IEEE, 2012, pp. 229-236.
[6] C. Voglis, P.E. Hadjidoukas, D.G. Papageorgiou, I.E. Lagaris, A parallel hybrid optimization algorithm for fitting interatomic potentials, Appl. Soft Comput. 13 (12) (2013) 4481-4492.
[7] P.E. Hadjidoukas, C. Voglis, V.V. Dimakopoulos, I.E. Lagaris, D.G. Papageorgiou, Supporting adaptive and irregular parallelism for non-linear numerical optimization, Appl. Math. Comput. 231 (2014) 544-559.
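To make the solution method concrete, here is a minimal Python sketch of central differencing with a step chosen to balance truncation against round-off error; the function names and the simple step rule h ≈ ε^(1/3)·max(|x_i|, 1) are illustrative, not the library's actual interface:

```python
import numpy as np

def central_diff(f, x, i, eps=np.finfo(float).eps):
    """First partial derivative df/dx_i by central differencing.

    The step balances the truncation error (~ h^2) against the
    round-off error (~ eps/h), giving h on the order of eps**(1/3),
    scaled by the magnitude of the coordinate.
    """
    h = eps ** (1.0 / 3.0) * max(abs(x[i]), 1.0)
    e = np.zeros_like(x)
    e[i] = h
    return (f(x + e) - f(x - e)) / (2.0 * h)

def gradient(f, x):
    """Gradient via n independent central differences (2n calls to f).

    The 2n function evaluations are mutually independent, which is the
    task parallelism the library exploits with OpenMP tasks or MPI.
    """
    return np.array([central_diff(f, x, i) for i in range(len(x))])

if __name__ == "__main__":
    rosen = lambda x: 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
    print(gradient(rosen, np.array([1.2, 1.0])))
```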
Effects of pore volume-transmissivity correlation on transport phenomena.
Lunati, Ivan; Kinzelbach, Wolfgang; Sørensen, Ivan
2003-12-01
The relevant velocity describing transport phenomena in a porous medium is the pore velocity. For this reason, one needs to describe not only the variability of the transmissivity, which fully determines the Darcy velocity field for given source terms and boundary conditions, but also any variability of the pore volume. We demonstrate that hydraulically equivalent media with exactly the same transmissivity field can produce dramatic differences in the displacement of a solute if they have different pore volume distributions. In particular, we demonstrate that correlation between pore volume and transmissivity leads to a much smoother and more homogeneous solute distribution. This was observed in a laboratory experiment performed in artificial fractures made of two plexiglass plates into which a space-dependent aperture distribution was milled. Using visualization by a light-transmission technique, we observe that the solute behaviour is much smoother and more regular after the fractures are filled with glass powder, which plays the role of a homogeneous fault-gouge material. This is due to a perfect correlation between pore volume and transmissivity, which makes the pore velocity depend on the transmissivity not directly but only indirectly, through the hydraulic gradient; the gradient is a much smoother function because the diffusive flow equation acts as a filter. This smoothing property of the pore volume-transmissivity correlation is also supported by numerical simulations of tracer tests in a dipole flow field. Three different conceptual models are used: an empty fracture, a rough-walled fracture filled with a homogeneous material, and a parallel-plate fracture with a heterogeneous fault gouge. All three models are hydraulically equivalent, yet they have different pore volume distributions. Even though piezometric heads and specific flow rates are exactly the same at every point of the domain, the transport processes differ dramatically. These differences make it important to discriminate in situ among the different conceptual models in order to simulate the transport phenomena correctly. For this reason, we study the solute breakthrough and recovery curves at the extraction wells. Our numerical case studies show that discrimination on the basis of such data might be impossible except under very favourable conditions, i.e. the integral scale of the transmissivity field has to be known and small compared to the dipole size. If the latter conditions are satisfied, discrimination between the rough-walled fracture filled with a homogeneous material and the other two models becomes possible, whereas the parallel-plate fracture with a heterogeneous fault gouge and the empty fracture still show identifiability problems. The latter may be resolved by inspection of the aperture and by pressure testing.
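In symbols (a schematic restatement with illustrative notation: T the transmissivity, φ the pore volume, h the piezometric head):

$$
\mathbf{q} = -T\,\nabla h, \qquad \mathbf{v} = \frac{\mathbf{q}}{\phi} = -\frac{T}{\phi}\,\nabla h,
$$

so if the pore volume is perfectly correlated with the transmissivity, $\phi = cT$, the pore velocity reduces to $\mathbf{v} = -\nabla h/c$: it inherits only the smoothness of the head gradient, not the roughness of $T$.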
Architecture studies and system demonstrations for optical parallel processor for AI and NI
NASA Astrophysics Data System (ADS)
Lee, Sing H.
1988-03-01
In solving deterministic AI problems, the data search for matching the arguments of a PROLOG expression causes a serious bottleneck when implemented sequentially by electronic systems. To overcome this bottleneck we have developed the concepts for an optical expert system based on a matrix-algebraic formulation, which is suitable for parallel optical implementation. An optical AI system based on this matrix-algebraic formulation will offer distinct advantages for parallel search, learning, etc.
Integrating Distributed Homogeneous and Heterogeneous Databases: Prototypes. Volume 3.
1987-12-01
[Garbled report-documentation-page residue; recoverable details: Knowledge-Based Integrated Information Systems Engineering (KBIISE) prototypes; Transportation Systems Center, Broadway, MA 02142; 218 pages; security classification: Unclassified.]
Massively parallel information processing systems for space applications
NASA Technical Reports Server (NTRS)
Schaefer, D. H.
1979-01-01
NASA is developing massively parallel systems for ultra-high-speed processing of digital image data collected by satellite-borne instrumentation. Such systems contain thousands of processing elements. Work is underway on the design and fabrication of the 'Massively Parallel Processor', a ground computer containing 16,384 processing elements arranged in a 128 x 128 array. This computer uses existing technology. Advanced work includes the development of semiconductor chips containing thousands of feedthrough paths. Massively parallel image analog-to-digital conversion technology is also being developed. The goal is to provide compact computers suitable for real-time onboard processing of images.
Parallelization of NAS Benchmarks for Shared Memory Multiprocessors
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)
1998-01-01
This paper presents our experiences in parallelizing the sequential implementations of the NAS benchmarks using compiler directives on the SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high-performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high-performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow users to exploit parallelism. Native compilers on the SGI Origin2000 support multiprocessing directives that allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of the parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementations of the NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable to that of the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.
NASA Astrophysics Data System (ADS)
Paulsen, T.; Wilson, T. J.; Demosthenous, C.; Millan, C.; Jarrard, R. D.; Laufer, A.
2013-12-01
Strain analyses of mechanically twinned calcite in veins and faults hosted by Neogene (13.6 Ma to 4.3 Ma) sedimentary and volcanic rocks recovered within the ANDRILL AND-1B drill core from the Terror Rift in the southern Ross Sea, Antarctica, yield prolate and oblate ellipsoids with principal shortening and extension strains ranging from 0.1% to 8.5%. The majority of samples show homogeneous coaxial strain predominantly characterized by subvertical shortening, which we attribute to lithostatic loading in an Andersonian normal faulting stress regime during sedimentary and ice sheet burial of the stratigraphic sequence. The overall paucity of a non-coaxial layer-parallel shortening signal in the AND-1B twin populations suggests that horizontal compressive stresses predicted by Neogene transtensional kinematic models for the rift system have been absent or of insufficient magnitude to cause a widespread noncoaxial strain overprint. Limited numbers of oriented samples yield a possible average ESE extension direction for the rift that is subparallel to other indicators of Neogene extension. The lack of horizontal shortening in the twin data suggests the Neogene Terror Rift system either lacks a strong longitudinal strike-slip component, or that spatial partitioning of strain controls the maximum shortening axes seen in rocks of this age.
Non-periodic homogenization of 3-D elastic media for the seismic wave equation
NASA Astrophysics Data System (ADS)
Cupillard, Paul; Capdeville, Yann
2018-05-01
Because seismic waves have a limited frequency spectrum, the velocity structure of the Earth that can be extracted from seismic records has a limited resolution. As a consequence, one obtains smooth images from waveform inversion, although the Earth contains discontinuities and small scales of various natures. Within the last decade, the non-periodic homogenization method shed light on how seismic waves interact with small geological heterogeneities and `see' upscaled properties. This theory enables us to compute long-wave equivalent density and elastic coefficients of any medium, with no constraint on the size, the shape and the contrast of the heterogeneities. In particular, the homogenization leads to the apparent, structure-induced anisotropy. In this paper, we implement this method in 3-D and show 3-D tests for the very first time. The non-periodic homogenization relies on an asymptotic expansion of the displacement and the stress involved in the elastic wave equation. Limiting ourselves to the order 0, we show that the practical computation of an upscaled elastic tensor basically requires (i) to solve an elastostatic problem and (ii) to low-pass filter the strain and the stress associated with the obtained solution. The elastostatic problem consists in finding the displacements due to local unit strains acting in all directions within the medium to upscale. This is solved using a parallel, highly optimized finite-element code. As for the filtering, we rely on the finite-element quadrature to perform the convolution in the space domain. We end up with an efficient numerical tool that we apply on various 3-D models to test the accuracy and the benefit of the homogenization. In the case of a finely layered model, our method agrees with results derived from the Backus average. In a more challenging model composed of a million small cubes, waveforms computed in the homogenized medium fit reference waveforms very well. Both direct phases and complex diffracted waves are accurately retrieved in the upscaled model, although it is smooth. Finally, our upscaling method is applied to a realistic geological model. The obtained homogenized medium exhibits structure-induced anisotropy. Moreover, full seismic wavefields in this medium can be simulated with a coarse mesh (no matter what the numerical solver is), which significantly reduces computation costs usually associated with discontinuities and small heterogeneities. These three tests show that the non-periodic homogenization is both accurate and tractable in large 3-D cases, which opens the path to the correct account of the effect of small-scale features on seismic wave propagation for various applications and to a deeper understanding of the apparent anisotropy.
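Schematically, the order-0 upscaling step described above can be written as follows (notation chosen here for illustration: c(x) is the fine-scale elastic tensor, e^{(kl)} a unit strain, ε(·) the symmetric gradient, and F a low-pass filter). For each of the six unit strains one solves the elastostatic problem for a corrector χ^{(kl)},

$$
\nabla\cdot\Big[c(\mathbf{x})\big(e^{(kl)}+\varepsilon(\boldsymbol{\chi}^{(kl)})\big)\Big]=\mathbf{0},
$$

and the effective tensor c* is then identified from the filtered stress-strain relation

$$
\mathcal{F}\Big[c\big(e^{(kl)}+\varepsilon(\boldsymbol{\chi}^{(kl)})\big)\Big]=c^{*}\,\mathcal{F}\Big[e^{(kl)}+\varepsilon(\boldsymbol{\chi}^{(kl)})\Big].
$$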
Design of a dataway processor for a parallel image signal processing system
NASA Astrophysics Data System (ADS)
Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu
1995-04-01
Recently, demands for high-speed signal processing have been increasing, especially in the fields of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called the 'dataway processor', designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link operates 8 bits in parallel in full-duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel. Therefore, sufficient throughput is available for high-speed digital video signals. The processor was designed in a top-down fashion using a CAD system called 'PARTHENON'. The hardware is fabricated using 0.5-micrometer CMOS technology and comprises about 200 K gates.
Algorithms and programming tools for image processing on the MPP
NASA Technical Reports Server (NTRS)
Reeves, A. P.
1985-01-01
Topics addressed include: data mapping and rotational algorithms for the Massively Parallel Processor (MPP); Parallel Pascal language; documentation for the Parallel Pascal Development system; and a description of the Parallel Pascal language used on the MPP.
Homogeneity of gels and gel-derived glasses
NASA Technical Reports Server (NTRS)
Mukherjee, S. P.
1984-01-01
The significance and implications of gel preparation procedures in controlling the homogeneity of multicomponent oxide gels are discussed. The role of physicochemical factors such as the structure and chemical reactivities of alkoxides, the formation of double-metal alkoxides, and the nature of solvent(s) are critically analyzed in the context of homogeneity of gels during gelation. Three procedures for preparing gels in the SiO2-B2O3-Na2O system are examined in the context of cation distribution. Light scattering results for glasses in the SiO2-B2O3-Na2O system prepared by both the gel technique and the conventional technique are examined.
High Performance Input/Output for Parallel Computer Systems
NASA Technical Reports Server (NTRS)
Ligon, W. B.
1996-01-01
The goal of our project is to study the I/O characteristics of parallel applications used in Earth science data processing systems such as Regional Data Centers (RDCs) or EOSDIS. Our approach is to study the runtime behavior of typical programs and the effect of key parameters of the I/O subsystem, both under simulation and with direct experimentation on parallel systems. Our three-year activity has focused on two items: developing a test bed that facilitates experimentation with parallel I/O, and studying representative programs from the Earth science data processing application domain. The Parallel Virtual File System (PVFS) has been developed for use on a number of platforms including the Tiger Parallel Architecture Workbench (TPAW) simulator, the Intel Paragon, a cluster of DEC Alpha workstations, and the Beowulf system (at CESDIS). PVFS provides considerable flexibility in configuring I/O in a UNIX-like environment. Access to key performance parameters facilitates experimentation. We have studied several key applications from levels 1, 2 and 3 of the typical RDC processing scenario, including instrument calibration and navigation, image classification, and numerical modeling codes. We have also considered large-scale scientific database codes used to organize image data.
Matrix algorithms for solving (in)homogeneous bound state equations
Blank, M.; Krassnigg, A.
2011-01-01
In the functional approach to quantum chromodynamics, the properties of hadronic bound states are accessible via covariant integral equations, e.g. the Bethe–Salpeter equation for mesons. In particular, one has to deal with linear, homogeneous integral equations which, in sophisticated model setups, use numerical representations of the solutions of other integral equations as part of their input. Analogously, inhomogeneous equations can be constructed to obtain off-shell information in addition to bound-state masses and other properties obtained from the covariant analogue of a wave function of the bound state. These can be solved very efficiently using well-known matrix algorithms for eigenvalues (in the homogeneous case) and for the solution of linear systems (in the inhomogeneous case). We demonstrate this by solving the homogeneous and inhomogeneous Bethe–Salpeter equations and find, e.g., that for the calculation of the mass spectrum it is as efficient, or even advantageous, to use the inhomogeneous equation compared to the homogeneous one. This is valuable insight, in particular for the study of baryons in a three-quark setup and more involved systems. PMID:21760640
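A toy numerical sketch of the two linear-algebra tasks being compared; the random matrix below is a stand-in for a discretized kernel, not a physical Bethe-Salpeter kernel:

```python
import numpy as np

# Toy discretized kernel: in realistic setups K depends on the bound-state
# mass; here a fixed random matrix stands in for it.
rng = np.random.default_rng(0)
n = 200
K = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)

# Homogeneous equation  K psi = lambda psi : an eigenvalue problem.
# A bound state corresponds to a kernel eigenvalue crossing 1.
eigvals, eigvecs = np.linalg.eig(K)

# Inhomogeneous equation  (I - K) psi = b : a linear solve, which can be
# cheaper than a full eigendecomposition when only limited information
# about the spectrum is needed.
b = rng.standard_normal(n)
psi = np.linalg.solve(np.eye(n) - K, b)
```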
Pohmann, Rolf; Speck, Oliver; Scheffler, Klaus
2016-02-01
Relaxation times, transmit homogeneity, signal-to-noise ratio (SNR) and parallel imaging g-factor were determined in the human brain at 3T, 7T, and 9.4T, using standard, tight-fitting coil arrays. The same human subjects were scanned at all three field strengths, using identical sequence parameters and similar 31- or 32-channel receive coil arrays. The SNR of three-dimensional (3D) gradient echo images was determined using a multiple-replica approach and corrected with measured flip angle and T2* distributions and the T1 of white matter to obtain the intrinsic SNR. The g-factor maps were derived from 3D gradient echo images with several GRAPPA accelerations. As expected, T1 values increased, T2* decreased and the B1-homogeneity deteriorated with increasing field. The SNR showed a distinctly supralinear increase with field strength, by a factor of 3.10 ± 0.20 from 3T to 7T, and 1.76 ± 0.13 from 7T to 9.4T over the entire cerebrum. The g-factors did not show the expected decrease, indicating a dominating role of coil design. In standard experimental conditions, SNR increased supralinearly with field strength (SNR ∼ B0^1.65). To take full advantage of this gain, the deteriorating B1-homogeneity and the decreasing T2* have to be overcome. © 2015 Wiley Periodicals, Inc.
The Plane-parallel Albedo Bias of Liquid Clouds from MODIS Observations
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros; Cahalan, Robert F.; Platnick, Steven
2007-01-01
In our most advanced modeling tools for climate change prediction, namely General Circulation Models (GCMs), the schemes used to calculate the budget of solar and thermal radiation commonly assume that clouds are horizontally homogeneous at scales as large as a few hundred kilometers. However, this assumption, used for convenience, computational speed, and lack of knowledge of cloud small-scale variability, leads to erroneous estimates of the radiation budget. This paper provides a global picture of the solar radiation errors at scales of approximately 100 km due to warm (liquid phase) clouds only. To achieve this, we use cloud retrievals from the MODIS instrument on the Terra and Aqua satellites, along with atmospheric and surface information, as input into a GCM-style radiative transfer algorithm. Since the MODIS product contains information on cloud variability below 100 km, we can run the radiation algorithm both for the variable and for the (assumed) homogeneous clouds. The difference between these calculations for reflected or transmitted solar radiation constitutes the bias that GCMs would commit if they were able to perfectly predict the properties of warm clouds, but then assumed they were homogeneous for radiation calculations. We find that the global average of this bias is approximately 2-3 times larger in terms of energy than the additional amount of thermal energy that would be trapped if we were to double carbon dioxide from current concentrations. We should therefore make a greater effort to predict horizontal cloud variability in GCMs and account for its effects in radiation calculations.
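The bias described above is essentially Jensen's inequality applied to a nonlinear (concave) albedo function; a minimal numerical sketch, in which the albedo law and the lognormal subgrid variability are illustrative stand-ins rather than the paper's retrievals:

```python
import numpy as np

# Illustrative nonlinear albedo law (two-stream-like saturation in
# optical depth tau); the functional form is a stand-in, not a GCM scheme.
def albedo(tau, g=0.85):
    return (1.0 - g) * tau / (2.0 + (1.0 - g) * tau)

rng = np.random.default_rng(1)
tau = rng.lognormal(mean=1.0, sigma=0.8, size=100_000)  # subgrid variability

plane_parallel = albedo(tau.mean())      # GCM-style: homogeneous cloud
independent_pixels = albedo(tau).mean()  # average over the variable cloud

# Because albedo(tau) is concave, Jensen's inequality makes the
# plane-parallel value an overestimate: a positive albedo bias.
print(plane_parallel - independent_pixels)
```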
Torbati, Mohammadali; Farajzadeh, Mir Ali; Torbati, Mostafa; Nabil, Ali Akbar Alizadeh; Mohebbi, Ali; Afshar Mogaddam, Mohammad Reza
2018-01-01
A new microextraction method, named salt- and pH-induced homogeneous liquid-liquid microextraction, has been developed in a home-made extraction device for the extraction and preconcentration of some pyrethroid insecticides from different fruit juice samples prior to gas chromatography-mass spectrometry. In the present work, an extraction device made from two parallel glass tubes with different lengths and diameters was used in the microextraction procedure. In this method, a homogeneous solution of the sample solution and an extraction solvent (pivalic acid) was broken by performing an acid-base reaction, and the extraction solvent was produced throughout the solution. The produced droplets of the extraction solvent rose through the solution and were solidified using an ice bath. They were collected without a centrifugation step. Under the optimum conditions, the limits of detection and quantification were obtained in the ranges of 0.006-0.038 and 0.023-0.134 ng mL^-1, respectively. The enrichment factors and extraction recoveries of the selected analytes were in the ranges of 365-460 and 73-92%, respectively. The relative standard deviations were lower than 9% for intra- (n = 6) and inter-day (n = 4) precisions at a concentration of 1 ng mL^-1 of each analyte. Finally, some fruit juice samples were effectively analyzed by the proposed method. Copyright © 2017 Elsevier B.V. All rights reserved.
Thread concept for automatic task parallelization in image analysis
NASA Astrophysics Data System (ADS)
Lueckenhaus, Maximilian; Eckstein, Wolfgang
1998-09-01
Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when the hardware changes. Therefore it is highly desirable to do the parallelization automatically. For this we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context, but may follow different threads of execution and work on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of an automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads for generating and processing parallel programs, taking into account the available hardware. The tests made with our system prototype show that the thread concept combined with the agent paradigm is suitable for speeding up image processing by an automatic parallelization of image analysis tasks.
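A minimal sketch of the idea, using Python's concurrent.futures in place of the authors' own thread infrastructure; the subtask names and the toy image are illustrative only:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

# Independent image-analysis subtasks that share the input image (a shared
# object) but each follow their own thread of execution.
def remove_mean(img):
    return img - img.mean()

def threshold(img):
    return (img > img.mean()).astype(np.uint8)

def histogram(img):
    return np.histogram(img, bins=16)[0]

img = np.random.default_rng(2).random((512, 512))

# The pool plays the role of the automatic scheduler: subtasks are assigned
# to available hardware threads without any architecture-specific code.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda task: task(img),
                            [remove_mean, threshold, histogram]))
```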
Hinsin, Duangduean; Pdungsap, Laddawan; Shiowatana, Juwadee
2002-12-06
A continuous-flow extraction system originally developed for sequential extraction was applied to study the elemental association of a synthetic metal-doped amorphous iron hydroxide phase. The homogeneity and metal association of the precipitates were evaluated by gradual leaching using the system. The leachate was collected in fractions for the determination of elemental concentrations. The results, obtained as extractograms, indicated that the doped metals were adsorbed mostly on the outermost surface rather than homogeneously distributed in the precipitates. The continuous-flow extraction method was also used for the effective removal of surface-adsorbed metals to obtain a homogeneous metal-doped synthetic iron hydroxide, by sequential extraction using acetic acid and a small volume of hydroxylamine hydrochloride solution. The system not only ensures complete washing, but also allows the extent of metal immobilization in the synthetic iron hydroxide to be determined with high accuracy from the extractograms. The initial metal/iron mole ratio (M/Fe) in solution affected the M/Fe mole ratio in the homogeneously doped iron hydroxide phase. The M/Fe mole ratio of metal incorporation was approximately 0.01-0.02 and 0.03-0.06 for initial solution M/Fe mole ratios of 0.025 and 0.100, respectively.
Simple Köhler homogenizers for image-forming solar concentrators
NASA Astrophysics Data System (ADS)
Zhang, Weiya; Winston, Roland
2010-08-01
By adding simple Köhler homogenizers in the form of aspheric lenses generated with an optimization approach, we solve the problems of non-uniform irradiance distribution and non-square irradiance pattern existing in some image-forming solar concentrators. The homogenizers do not require optical bonding to the solar cells or total internal reflection surface. Two examples are shown including a Fresnel lens based concentrator and a two-mirror aplanatic system.
Efficient Implementation of Multigrid Solvers on Message-Passing Parallel Systems
NASA Technical Reports Server (NTRS)
Lou, John
1994-01-01
We discuss our implementation strategies for finite-difference multigrid partial differential equation (PDE) solvers on message-passing systems. Our target parallel architectures are Intel parallel computers: the Delta and Paragon systems.
A Next-Generation Parallel File System Environment for the OLCF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillow, David A; Fuller, Douglas; Gunasekaran, Raghul
2012-01-01
When deployed in 2008/2009, the Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) was the world's largest-scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, Spider has since become a blueprint for shared Lustre environments deployed worldwide. Designed to support the parallel I/O requirements of the Jaguar XT5 system and other smaller-scale platforms at the OLCF, the upgrade to the Titan XK6 heterogeneous system will begin to push the limits of Spider's original design by mid 2013. With a doubling in total system memory and a 10x increase in FLOPS, Titan will require both higher bandwidth and larger total capacity. Our goal is to provide a 4x increase in total I/O bandwidth, from over 240 GB/s today to 1 TB/s, and a doubling in total capacity. While aggregate bandwidth and total capacity remain important capabilities, an equally important goal in our efforts is dramatically increasing metadata performance, currently the Achilles heel of parallel file systems at leadership scale. We present in this paper an analysis of our current I/O workloads, our operational experiences with the Spider parallel file systems, the high-level design of our Spider upgrade, and our efforts in developing benchmarks that synthesize our performance requirements based on our workload characterization studies.
NASA Astrophysics Data System (ADS)
Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya
2017-10-01
Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.
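For context, a minimal serial (residence-time/Gillespie-style) KMC step, which the SPKMC algorithm generalizes, can be sketched as follows; the event list and rates are toy stand-ins. The time increment scales as the inverse of the total rate, which is why the serial time scale shrinks as the simulated system, and hence the event list, grows:

```python
import numpy as np

rng = np.random.default_rng(3)

def kmc_step(rates, t):
    """One serial KMC step: pick an event with probability proportional
    to its rate, then advance time by an exponential waiting time."""
    total = rates.sum()
    event = np.searchsorted(np.cumsum(rates), rng.random() * total)
    t += -np.log(rng.random()) / total   # dt ~ 1/(total rate)
    return event, t

# Toy system: 10 possible electron-hop events with random rates.
rates = rng.random(10)
t = 0.0
for _ in range(5):
    event, t = kmc_step(rates, t)
    print(f"executed event {event} at time {t:.3f}")
```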
GCSS Cirrus Parcel Model Comparison Project
NASA Technical Reports Server (NTRS)
Lin, Ruei-Fong; Starr, David OC.; DeMott, Paul J.; Cotton, Richard; Jensen, Eric; Sassen, Kenneth; Einaudi, Franco (Technical Monitor)
2000-01-01
The Cirrus Parcel Model Comparison Project, a project of GCSS Working Group on Cirrus Cloud Systems (WG2), involves the systematic comparison of current models of ice crystal nucleation and growth for specified, typical, cirrus cloud environments. The goal of this project is to document and understand the factors resulting in significant inter-model differences. The intent is to foment research leading to model improvement and validation. In Phase 1 of the project reported here, simulated cirrus cloud microphysical properties are compared for situations of "warm" (-40 C) and "cold" (-60 C) cirrus subject to updrafts of 4, 20 and 100 cm/s, respectively. Five models participated. These models employ explicit microphysical schemes wherein the size distribution of each class of particles (aerosols and ice crystals) is resolved into bins. Simulations are made including both homogeneous and heterogeneous ice nucleation mechanisms. A single initial aerosol population of sulfuric acid particles is prescribed for all simulations. To isolate the treatment of the homogeneous freezing (of haze drops) nucleation process, the heterogeneous nucleation mechanism is disabled for a second parallel set of simulations. Qualitative agreement is found for the homogeneous-nucleation-only simulations, e.g., the number density of nucleated ice crystals increases with the strength of the prescribed updraft. However, non-negligible quantitative differences are found. Detailed analysis reveals that the homogeneous nucleation formulation, aerosol size, ice crystal growth rate (particularly the deposition coefficient), and water vapor uptake rate are critical components that lead to differences in predicted microphysics. Systematic bias exists between results based on a modified classical theory approach and models using an effective freezing temperature approach to the treatment of nucleation. Each approach is constrained by critical freezing data from laboratory studies, but each includes assumptions that can only be justified by further laboratory data. Consequently, it is not yet clear if the two approaches can be made consistent. Large haze particles may deviate considerably from equilibrium size in moderate to strong updrafts (20-100 cm/s) at -60 C when the commonly invoked equilibrium assumption is lifted. The resulting difference in particle-size-dependent solution concentration of haze particles may significantly affect the ice nucleation rate during the initial nucleation interval. The uptake rate for water vapor excess by ice crystals is another key component regulating the total number of nucleated ice crystals. This rate, the product of ice number concentration and ice crystal diffusional growth rate, which is sensitive to the deposition coefficient when ice particles are small, partially controls the peak nucleation rate achieved in an air parcel and the duration of the active nucleation time period. The effects of heterogeneous nucleation are most pronounced in weak updraft situations. Vapor competition by the nucleated (heterogeneous) ice crystals limits the achieved ice supersaturation and thus suppresses the contribution of homogeneous nucleation. Correspondingly, ice crystal number density is markedly reduced. Definitive laboratory and atmospheric benchmark data are needed for the heterogeneous nucleation process. Inter-model differences are correspondingly greater than in the case of the homogeneous nucleation process acting alone.
NASA Astrophysics Data System (ADS)
Teddy, Livian; Hardiman, Gagoek; Nuroji; Tudjono, Sri
2017-12-01
Indonesia is prone to earthquakes, which may cause casualties and damage to buildings. Fatalities and injuries are largely caused not by the earthquake itself but by building collapse. The collapse of a building results from its behaviour under the earthquake, which depends on many factors, such as the architectural design, the geometric configuration of structural elements in horizontal and vertical plans, the earthquake zone, the geographical location (distance to the earthquake center), the soil type, the material quality, and the construction quality. One of the geometric configurations that may lead to building collapse is the irregular configuration of a non-parallel system. In accordance with FEMA-451B, an irregular configuration of a non-parallel system exists if the vertical lateral-force-resisting elements are neither parallel nor symmetric with respect to the main orthogonal axes of the earthquake-resisting system. Such a configuration may lead to torsion, diagonal translation, and local damage to buildings. This does not mean that irregular non-parallel configurations must be avoided in architectural design; however, the designer must know the consequences of earthquake behaviour for buildings with an irregular configuration of a non-parallel system. The present research aims to identify earthquake behaviour in architectural geometries with irregular configurations of non-parallel systems. The research was quantitative, using a simulation-based experimental method. It comprised 5 models, for which architectural data and structural model data were input and analyzed using the software SAP2000, in order to determine their performance, and ETABS 2015, to determine the eccentricity that occurred. The output of the software analysis was tabulated, graphed, compared, and analyzed against relevant theories. For strong earthquake zones, one should avoid designing buildings that wholly form an irregular configuration of a non-parallel system. If it is inevitable to design a building with parts containing such a configuration, it should be stiffened by forming a triangular module, and the appropriate formula should be applied. Good collaboration is needed between architects and structural engineers in creating earthquake architecture.
Homogeneous Catalytic Oxidations of Hydrocarbons in Alternative Solvent Systems
Michael A. Gonzalez* and Thomas M. Becker, Sustainable Technology Division, Office of Research and Development; United States Environmental Protection Agency, 26 West Martin Luther King Drive, ...
Shalaby, S M; Bosseila, M; Fawzy, M M; Abdel Halim, D M; Sayed, S S; Allam, R S H M
2016-11-01
Morphea is a rare fibrosing skin disorder that occurs as a result of abnormal homogenized collagen synthesis. Fractional ablative laser resurfacing has been used effectively in scar treatment via degradation of abnormal collagen and induction of healthy collagen synthesis. Therefore, fractional ablative laser may provide an effective modality for the treatment of morphea. The study aimed at evaluating the efficacy of fractional carbon dioxide laser as a new modality for the treatment of localized scleroderma, and at comparing its results with the well-established method of UVA-1 phototherapy. Seventeen patients with plaque and linear morphea were included in this parallel, intra-individual, comparative, randomized controlled clinical trial, each with two comparable morphea lesions that were randomly assigned to either 30 sessions of low-dose (30 J/cm^2) UVA-1 phototherapy (340-400 nm) or 3 sessions of fractional CO2 laser (10,600 nm, power 25 W). The response to therapy was then evaluated clinically and histopathologically via validated scoring systems. Immunohistochemical analysis of TGF-β1 and MMP1 was done. Patient satisfaction was also assessed. The Wilcoxon signed-rank test for paired (matched) samples and the Spearman rank correlation equation were used as indicated. Comparing the two groups, there was an obvious improvement with fractional CO2 laser that was superior to that of low-dose UVA-1 phototherapy. Statistically, there was a significant difference in the clinical scores (p = 0.001), collagen homogenization scores (p = 0.012), and patient satisfaction scores (p = 0.001). In conclusion, fractional carbon dioxide laser is a promising treatment modality for cases of localized morphea, with proven efficacy of this treatment on clinical and histopathological levels.
Jow, Uei-Ming; Ghovanloo, Maysam
2012-12-21
We present a design methodology for an overlapping hexagonal planar spiral coil (hex-PSC) array, optimized for the creation of a homogeneous magnetic field for wireless power transmission to randomly moving objects. The modular hex-PSC array has been implemented in the form of three parallel conductive layers, for which an iterative optimization procedure defines the PSC geometries. Since the overlapping hex-PSCs in different layers have different characteristics, the worst-case coil-coupling condition should be designed to provide the maximum power transfer efficiency (PTE) in order to minimize spatial fluctuations of the received power. In the worst case, the transmitter (Tx) hex-PSC is overlapped by six PSCs and surrounded by six other adjacent PSCs. Using a receiver (Rx) coil 20 mm in radius, at a coupling distance of 78 mm and a maximum lateral misalignment of 49.1 mm (1/√3 of the PSC radius), we can receive power at a PTE of 19.6% from the worst-case PSC. Furthermore, we have studied the effects of Rx coil tilting and concluded that the PTE degrades significantly when θ > 60°. Solutions are: 1) activating two adjacent overlapping hex-PSCs simultaneously with out-of-phase excitations to create horizontal magnetic flux, and 2) including a small energy-storage element in the Rx module to maintain power in the worst-case scenarios. In order to verify the proposed design methodology, we have developed the EnerCage system, which aims to power biological instruments attached to or implanted in the bodies of freely behaving small animal subjects in long-term electrophysiology experiments within large experimental arenas.
Transverse Oscillations of Coronal Loops
NASA Astrophysics Data System (ADS)
Ruderman, Michael S.; Erdélyi, Robert
2009-12-01
On 14 July 1998 TRACE observed transverse oscillations of a coronal loop generated by an external disturbance most probably caused by a solar flare. These oscillations were interpreted as standing fast kink waves in a magnetic flux tube. Firstly, in this review we embark on the discussion of the theory of waves and oscillations in a homogeneous straight magnetic cylinder with the particular emphasis on fast kink waves. Next, we consider the effects of stratification, loop expansion, loop curvature, non-circular cross-section, loop shape and magnetic twist. An important property of observed transverse coronal loop oscillations is their fast damping. We briefly review the different mechanisms suggested for explaining the rapid damping phenomenon. After that we concentrate on damping due to resonant absorption. We describe the latest analytical results obtained with the use of thin transition layer approximation, and then compare these results with numerical findings obtained for arbitrary density variation inside the flux tube. Very often collective oscillations of an array of coronal magnetic loops are observed. It is natural to start studying this phenomenon from the system of two coronal loops. We describe very recent analytical and numerical results of studying collective oscillations of two parallel homogeneous coronal loops. The implication of the theoretical results for coronal seismology is briefly discussed. We describe the estimates of magnetic field magnitude obtained from the observed fundamental frequency of oscillations, and the estimates of the coronal scale height obtained using the simultaneous observations of the fundamental frequency and the frequency of the first overtone of kink oscillations. In the last part of the review we summarise the most outstanding and acute problems in the theory of the coronal loop transverse oscillations.
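For reference, the field-strength estimate mentioned above rests on the standard kink-mode relations of coronal seismology; schematically (common notation, a sketch rather than the review's own derivation), for a loop of length $L$ with internal and external densities $\rho_i$ and $\rho_e$:

$$
P=\frac{2L}{c_k},\qquad c_k=v_A\sqrt{\frac{2}{1+\rho_e/\rho_i}},\qquad v_A=\frac{B}{\sqrt{\mu_0\rho_i}},
$$

so that the observed fundamental period $P$ yields $B=\dfrac{2L}{P}\sqrt{\dfrac{\mu_0\rho_i\,(1+\rho_e/\rho_i)}{2}}$.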
Visualizing Parallel Computer System Performance
NASA Technical Reports Server (NTRS)
Malony, Allen D.; Reed, Daniel A.
1988-01-01
Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels, it also requires both static and dynamic characterizations. Static or average-behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false-color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.
NASA Technical Reports Server (NTRS)
Goldstein, David
1991-01-01
Extensions to an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS) are discussed. PRAIS strives for transparently parallelizing production (rule-based) systems, even under real-time constraints. PRAIS accomplished these goals (presented at the first annual C Language Integrated Production System (CLIPS) conference) by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors. Results using the original PRAIS architecture over a network of Sun 3's, Sun 4's and VAX's are presented. Mechanisms using the producer-consumer model to extend the architecture for fault-tolerance and distributed truth maintenance initiation are also discussed.
Comparative study on fixation of central venous catheter by suture versus adhesive device.
Molina-Mazón, C S; Martín-Cerezo, X; Domene-Nieves de la Vega, G; Asensio-Flores, S; Adamuz-Tomás, J
2018-03-27
To assess the efficacy of an adhesive fixation device for central venous catheters (CVC) in preventing associated complications. To establish the need for dressing changes, the number of days of catheterization, and the reasons for catheter removal in both study groups. To assess the degree of satisfaction of personnel with the adhesive system. A randomized, prospective, open pilot study of parallel groups, with comparative evaluation of CVC fixation with suture versus an adhesive safety system. The study was performed in the Coronary Unit of the Hospital Universitari de Bellvitge between April and November 2016. The population studied were patients with a CVC. The results were analyzed using SPSS Statistics software. The study was approved by the Clinical Research Ethics Committee. 100 patients (47 adhesive system and 53 suture) were analyzed. Both groups were homogeneous in terms of demographic variables, anticoagulation, and days of catheterization. The frequency of complications in the adhesive system group was 21.3%, while in the suture group it was 47.2% (P=.01). The suture group had a higher frequency of local signs of infection (P=.006), catheter displacement (P=.005), and catheter-associated bacteraemia (P=.05). The use of adhesive fixation was associated with a lower requirement for dressing changes due to bleeding (P=.006). 96.7% of the staff recommended using the adhesive safety system. The catheters fixed with adhesive systems had fewer infectious complications and less displacement. Copyright © 2018 Sociedad Española de Enfermería Intensiva y Unidades Coronarias (SEEIUC). Published by Elsevier España, S.L.U. All rights reserved.
Mixed Mode Fuel Injector And Injection System
Stewart, Chris Lee; Tian, Ye; Wang, Lifeng; Shafer, Scott F.
2005-12-27
A fuel injector includes a homogeneous charge nozzle outlet set and a conventional nozzle outlet set that are controlled, respectively, by first and second three-way needle control valves. Each fuel injector includes first and second concentric needle valve members. One of the needle valve members moves to an open position for a homogeneous charge injection event, while the other needle valve member moves to an open position for a conventional injection event. The fuel injector has the ability to operate in a homogeneous charge mode with a homogeneous charge spray pattern, in a conventional mode with a conventional spray pattern, or in a mixed mode.
Zhang, Bei; Sodickson, Daniel K; Lattanzi, Riccardo; Duan, Qi; Stoeckel, Bernd; Wiggins, Graham C
2012-04-01
In 7 T traveling-wave imaging, waveguide modes supported by the scanner radiofrequency shield are used to excite an MR signal in samples or tissue which may be several meters away from the antenna used to drive radiofrequency power into the system. To explore the potential merits of traveling-wave excitation for whole-body imaging at 7 T, we compare numerical simulations of traveling-wave and TEM systems, and juxtapose full-wave electrodynamic simulations using a human body model with in vivo human traveling-wave imaging at multiple stations covering the entire body. The simulated and in vivo traveling-wave results correspond well, with strong signal at the periphery of the body and weak signal deep in the torso. These numerical results also illustrate the complicated wave behavior that emerges when a body is present. The TEM resonator simulation allowed comparison of traveling-wave excitation with standard quadrature excitation, showing that while the traveling-wave B1+ per unit drive voltage is much less than that of the TEM system, the square of the average B1+ compared to peak specific absorption rate (SAR) values can be comparable in certain imaging planes. Both systems produce highly inhomogeneous excitation of the MR signal in the torso, suggesting that B1 shimming or other parallel transmission methods are necessary for 7 T whole-body imaging. Copyright © 2011 Wiley-Liss, Inc.
Parallel-aware, dedicated job co-scheduling within/across symmetric multiprocessing nodes
Jones, Terry R.; Watson, Pythagoras C.; Tuel, William; Brenner, Larry; ,Caffrey, Patrick; Fier, Jeffrey
2010-10-05
In a parallel computing environment comprising a network of SMP nodes each having at least one processor, a parallel-aware co-scheduling method and system for improving the performance and scalability of a dedicated parallel job having synchronizing collective operations. The method and system uses a global co-scheduler and an operating system kernel dispatcher adapted to coordinate interfering system and daemon activities on a node and across nodes to promote intra-node and inter-node overlap of said interfering system and daemon activities as well as intra-node and inter-node overlap of said synchronizing collective operations. In this manner, the impact of random short-lived interruptions, such as timer-decrement processing and periodic daemon activity, on synchronizing collective operations is minimized on large processor-count SPMD bulk-synchronous programming styles.
Deshmukh, Dewal S; Bhanage, Bhalchandra M
2018-06-21
A green and sustainable methodology for the synthesis of isoquinolines using Ru(II)/PEG-400 as a homogeneous recyclable catalytic system has been demonstrated. N-Tosylhydrazone, a rarely explored directing group, has been successfully employed for an annulation-type reaction with alkynes via C-H/N-N activation. A short reaction time with a simple extraction procedure, a wide substrate scope with high yields of products, easily prepared substrates, a biodegradable solvent, and scalability up to the gram level enhance the efficiency and sustainability of the proposed protocol. Furthermore, the expensive ruthenium-based homogeneous catalytic system could be reused for up to four consecutive cycles without any loss in its activity.
Production of solid lipid nanoparticles (SLN): scaling up feasibilities.
Dingler, A; Gohla, S
2002-01-01
Solid lipid nanoparticles (SLN/Lipopearls) are widely discussed as a new colloidal drug carrier system. In contrast to polymeric systems, such as polylactic copolymer microcapsules, these systems show good biocompatibility if applied parenterally. The solid lipid matrices can be composed of fats or waxes, and allow protection of incorporated active ingredients against chemical and physical degradation. The SLN can be produced either by 'hot homogenization' of melted lipids at elevated temperatures or by a 'cold homogenization' process. This paper deals with production technologies for SLN formulations based on non-ethoxylated fat components for topical application and high-pressure homogenization. Based on the chosen fat components, a novel and easy manufacturing and scale-up method was developed that maintains the chemical and physical integrity of the encapsulated active ingredients in the carrier.
Parallelization strategies for continuum-generalized method of moments on the multi-thread systems
NASA Astrophysics Data System (ADS)
Bustamam, A.; Handhika, T.; Ernastuti; Kerami, D.
2017-07-01
The Continuum-Generalized Method of Moments (C-GMM) addresses a shortfall of the Generalized Method of Moments (GMM), namely that it is not as efficient as the Maximum Likelihood estimator, by using a continuum of moment conditions in a GMM framework. However, this computation takes a very long time, since the regularization parameter must be optimized. Unfortunately, these calculations are usually processed sequentially, whereas all modern computers are now supported by hierarchical memory systems and hyperthreading technology, which allow for parallel computing. This paper aims to speed up the calculation of C-GMM by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are detected in the original C-GMM algorithm. There are two parallel regions in the original C-GMM algorithm that contribute significantly to the reduction of computational time: the outer loop and the inner loop. Furthermore, this parallel algorithm is implemented with the standard shared-memory application programming interface, Open Multi-Processing (OpenMP). The experiment shows that the outer-loop parallelization is the best strategy for any number of observations.
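As an illustration of the outer-loop strategy, here is a sketch using Python's multiprocessing in place of OpenMP; the objective below is a cheap stand-in for the actual C-GMM criterion, and all names are illustrative:

```python
from multiprocessing import Pool
import numpy as np

def cgmm_objective(alpha, data_seed=0):
    """Stand-in for the C-GMM criterion evaluated at one value of the
    regularization parameter alpha (the real criterion is far costlier)."""
    rng = np.random.default_rng(data_seed)
    moments = rng.standard_normal(1000)
    return float(np.mean(moments**2) + alpha * np.log(1.0 + 1.0 / alpha))

if __name__ == "__main__":
    alphas = np.logspace(-6, 0, 64)   # outer loop: regularization grid
    with Pool(processes=4) as pool:   # each alpha evaluated in parallel
        values = pool.map(cgmm_objective, alphas)
    best = alphas[int(np.argmin(values))]
    print("selected regularization parameter:", best)
```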
LDRD final report on massively-parallel linear programming : the parPCx system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parekh, Ojas; Phillips, Cynthia Ann; Boman, Erik Gunnar
2005-02-01
This report summarizes the research and development performed from October 2002 to September 2004 at Sandia National Laboratories under the Laboratory-Directed Research and Development (LDRD) project ''Massively-Parallel Linear Programming''. We developed a linear programming (LP) solver designed to use a large number of processors. LP is the optimization of a linear objective function subject to linear constraints. Companies and universities have expended huge efforts over decades to produce fast, stable serial LP solvers. Previous parallel codes run on shared-memory systems and have little or no distribution of the constraint matrix. We have seen no reports of general LP solver runs on large numbers of processors. Our parallel LP code is based on an efficient serial implementation of Mehrotra's interior-point predictor-corrector algorithm (PCx). The computational core of this algorithm is the assembly and solution of a sparse linear system. We have substantially rewritten the PCx code and based it on Trilinos, the parallel linear algebra library developed at Sandia. Our interior-point method can use either direct or iterative solvers for the linear system. To achieve a good parallel data distribution of the constraint matrix, we use a (pre-release) version of a hypergraph partitioner from the Zoltan partitioning library. We describe the design and implementation of our new LP solver, called parPCx, and give preliminary computational results. We summarize a number of issues related to the efficient parallel solution of LPs with interior-point methods, including data distribution, numerical stability, and solving the core linear system using both direct and iterative methods. We describe a number of applications of LP specific to US Department of Energy mission areas, and we summarize our efforts to integrate parPCx (and parallel LP solvers in general) into Sandia's massively-parallel integer programming solver PICO (Parallel Integer and Combinatorial Optimizer). We conclude with directions for long-term future algorithmic research and for near-term development that could improve the performance of parPCx.
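The computational core mentioned here, assembling and solving the sparse normal equations (A D² Aᵀ)Δy = r at each interior-point iteration, looks schematically like the following sketch with toy data (this is not the parPCx code, which uses Trilinos and runs distributed):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(4)

# Toy LP data: m constraints, n variables, sparse constraint matrix A.
m, n = 50, 200
A = sp.random(m, n, density=0.05, random_state=4, format="csr")
x = rng.random(n) + 0.1   # current primal iterate (strictly positive)
s = rng.random(n) + 0.1   # current dual slacks
r = rng.standard_normal(m)

# Normal-equations system of one predictor-corrector step:
#   (A D^2 A^T) dy = r,  with D^2 = diag(x_i / s_i).
D2 = sp.diags(x / s)
M = (A @ D2 @ A.T).tocsc()
dy = spsolve(M, r)   # direct solve; an iterative solver is the alternative
```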
Methods for operating parallel computing systems employing sequenced communications
Benner, R.E.; Gustafson, J.L.; Montry, G.R.
1999-08-10
A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system. 15 figs.
Methods for operating parallel computing systems employing sequenced communications
Benner, Robert E.; Gustafson, John L.; Montry, Gary R.
1999-01-01
A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.
Research on Parallel Three Phase PWM Converters based on RTDS
NASA Astrophysics Data System (ADS)
Xia, Yan; Zou, Jianxiao; Li, Kai; Liu, Jingbo; Tian, Jun
2018-01-01
Parallel operation of converters can increase the capacity of the system, but it may lead to potential zero-sequence circulating current, so control of the circulating current is an important goal in the design of parallel inverters. In this paper, the Real Time Digital Simulator (RTDS) is used to model the converter parallel system in real time and to study circulating-current restraint. The equivalent model of two parallel converters and the zero-sequence circulating current (ZSCC) were established and analyzed, and then a strategy using variable zero-vector control was proposed to suppress the circulating current. For two parallel modular converters, a hardware-in-the-loop (HIL) study based on RTDS and a practical experiment were implemented; the results prove that the proposed control strategy is feasible and effective.
Oscillatory Dynamics of One-Dimensional Homogeneous Granular Chains
NASA Astrophysics Data System (ADS)
Starosvetsky, Yuli; Jayaprakash, K. R.; Hasan, Md. Arif; Vakakis, Alexander F.
The acoustics of homogeneous granular chains has been studied extensively, both numerically and experimentally, in the references cited in the previous chapters. This chapter focuses on the oscillatory behavior of finite-dimensional homogeneous granular chains. It is well known that normal vibration modes are the building blocks of the vibrations of linear systems due to the applicability of the principle of superposition. On the other hand, nonlinear theory is deprived of such a general superposition principle (although special cases of nonlinear superposition do exist), but nonlinear normal modes (NNMs) still play an important role in the forced and resonance dynamics of these systems. In their basic definition [1], NNMs were defined as time-periodic nonlinear oscillations of discrete or continuous dynamical systems where all coordinates (degrees of freedom) oscillate in unison with the same frequency; further extensions of this definition have been considered to account for NNMs of systems with internal resonances [2]...
Macke, Lars; Garritsen, Henk S P; Meyring, Wilhelm; Hannig, Horst; Pägelow, Ute; Wörmann, Bernhard; Piechaczek, Christoph; Geffers, Robert; Rohde, Manfred; Lindenmaier, Werner; Dittmar, Kurt E J
2010-04-01
Dendritic cells (DCs) are applied worldwide in several clinical studies of immune therapy of malignancies, autoimmune diseases, and transplantations. Most legislative bodies are demanding high standards for cultivation and transduction of cells. Closed cell-cultivation systems like cell culture bags would simplify and greatly improve the ability to reach these cultivation standards. We investigated whether a new polyolefin cell culture bag enables maturation and adenoviral modification of human DCs in a closed system and compared the results with standard polystyrene flasks. Mononuclear cells were isolated from HLA-A*0201-positive blood donors by leukapheresis. A commercially available separation system (CliniMACS, Miltenyi Biotec) was used to isolate monocytes by positive selection using CD14-specific immunomagnetic beads. The essentially homogeneous starting cell population was cultivated in the presence of granulocyte-macrophage colony-stimulating factor and interleukin-4 in a closed-bag system in parallel to the standard flask cultivation system. Genetic modification was performed on Day 4. After induction of maturation on Day 5, mature DCs could be harvested and cryopreserved on Day 7. During the cultivation period, comparative quality control was performed using flow cytometry, gene expression profiling, and functional assays. Both flasks and bags generated mature genetically modified DCs in similar yields. Surface membrane markers, expression profiles, and functional testing results were comparable. The use of a closed-bag system facilitated the clinical applicability of genetically modified DCs. The polyolefin bag-based culture system yields DCs qualitatively and quantitatively comparable to the standard flask preparation. All steps including cryopreservation can be performed in a closed system, facilitating standardized, safe, and reproducible preparation of therapeutic cells.
NASA Technical Reports Server (NTRS)
Shia, R.-L.; Yung, Y. L.
1986-01-01
The problem of multiple scattering of nonpolarized light in a planetary body of arbitrary shape illuminated by a parallel beam is formulated using the integral equation approach. There exists a simple functional whose stationarity condition is equivalent to solving the equation of radiative transfer and whose value at the stationary point is proportional to the differential cross section. The analysis reveals a direct relation between the microscopic symmetry of the phase function for each scattering event and the macroscopic symmetry of the differential cross section for the entire planetary body, and the interconnection of these symmetry relations and the variational principle. The case of a homogeneous sphere containing isotropic scatterers is investigated in detail. It is shown that the solution can be expanded in a multipole series such that the general spherical problem is reduced to solving a set of decoupled integral equations in one dimension. Computations have been performed for a range of parameters of interest, and illustrative examples of applications to planetary problems are provided.
Effect of Aerogel Anisotropy in Superfluid 3He-A
NASA Astrophysics Data System (ADS)
Zimmerman, A. M.; Li, J. I. A.; Pollanen, J.; Collett, C. A.; Gannon, W. J.; Halperin, W. P.
2014-03-01
Two theories have been advanced to describe the effects of anisotropic impurity introduced by stretched silica aerogel on the orientation of the orbital angular momentum l̂ in superfluid 3He-A. These theories disagree on whether the anisotropy will orient l̂ perpendicular [2] or parallel [3] to the strain axis. In order to examine this question we have produced and characterized a homogeneous aerogel sample with uniaxial anisotropy introduced during growth, corresponding to stretching of the aerogel. These samples have been shown to stabilize two new chiral states [4], the higher-temperature state being the subject of the present study. Using pulsed NMR we have performed experiments on 3He-A imbibed in this sample in two orientations: strain parallel and perpendicular to the applied magnetic field. From the NMR frequency shifts as a function of tip angle and temperature, we find that the angular momentum l̂ is oriented along the strain axis, providing evidence for the theory advanced by Sauls. This work was supported by the National Science Foundation, DMR-1103625.
Surface energy and surface stress on vicinals by revisiting the Shuttleworth relation
NASA Astrophysics Data System (ADS)
Hecquet, Pascal
2018-04-01
In 1998 [Surf. Sci. 412/413, 639 (1998)], we showed that the step stress on vicinals varies as 1/L, L being the distance between steps, while the inter-step interaction energy primarily follows a 1/L² law, from the well-known Marchenko-Parshin model. In this paper, we give a better understanding of the interaction term of the step stress. The step stress is calculated with respect to the nominal surface stress. Consequently, we calculate the diagonal surface stresses both in the vicinal system (x, y, z), where z is normal to the vicinal, and in the projected system (x, b, c), where b is normal to the nominal terrace. Moreover, we calculate the surface stresses by using two methods: the first, called the 'Zero' method, from the surface pressure forces, and the second, called the 'One' method, by homogeneously deforming the vicinal in a parallel direction, x or y, and calculating the surface energy excess proportional to the deformation. By using the 'One' method on the vicinal Cu(0 1 M), we find that the step deformations due to the applied deformation vary as 1/L by the same factor for the tensor directions bb and cb, and by twice that factor for the parallel direction yy. Due to the vanishing of the surface stress normal to the vicinal, the variation of the step stress in the direction yy is better described by using only the step deformation in the same direction. We revisit the Shuttleworth formula: while the variation of the step stress in the direction xx is the same between the two methods, the variation in the direction yy is higher by 76% for the 'Zero' method than for the 'One' method. In addition to the step energy, we confirm that the variation of the step stress must be taken into account for understanding the equilibrium of vicinals when they are not deformed.
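In compact form (the notation below is ours, not necessarily the paper's), the two scaling laws at issue are

    \[
        E_{\mathrm{int}}(L) \sim \frac{A}{L^{2}}, \qquad
        \tau_{\mathrm{step}}(L) = \tau_{0} + \frac{B}{L},
    \]

where L is the inter-step distance, E_int is the Marchenko-Parshin inter-step interaction energy, tau_step is the step stress relative to the nominal surface stress tau_0, and A and B are material-dependent constants.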
Rethinking key–value store for parallel I/O optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kougkas, Anthony; Eslami, Hassan; Sun, Xian-He
2015-01-26
Key-value stores are being widely used as the storage system for large-scale internet services and cloud storage systems. However, they are rarely used in HPC systems, where parallel file systems are the dominant storage solution. In this study, we examine the architecture differences and performance characteristics of parallel file systems and key-value stores. We propose using key-value stores to optimize overall Input/Output (I/O) performance, especially for workloads that parallel file systems cannot handle well, such as the cases with intense data synchronization or heavy metadata operations. We conducted experiments with several synthetic benchmarks, an I/O benchmark, and a real application. We modeled the performance of these two systems using collected data from our experiments, and we provide a predictive method to identify which system offers better I/O performance given a specific workload. The results show that we can optimize the I/O performance in HPC systems by utilizing key-value stores.
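As a toy illustration of why a key-value interface sidesteps the coordination a shared file requires (our sketch, not the system studied in the paper; the in-memory, mutex-protected map stands in for a real distributed store):

    // Sketch: N threads emit records as independent key-value puts,
    // avoiding the shared-offset coordination a single file would need.
    #include <cstdio>
    #include <mutex>
    #include <string>
    #include <thread>
    #include <unordered_map>
    #include <vector>

    struct KVStore {                  // stand-in for a distributed KV store
        std::unordered_map<std::string, std::string> data;
        std::mutex m;
        void put(const std::string& k, const std::string& v) {
            std::lock_guard<std::mutex> g(m);
            data[k] = v;
        }
    };

    int main() {
        KVStore store;
        const int ranks = 4, steps = 1000;
        std::vector<std::thread> workers;
        for (int r = 0; r < ranks; ++r)
            workers.emplace_back([&, r] {
                for (int s = 0; s < steps; ++s)
                    // key encodes (rank, step): no global file offset to agree on
                    store.put(std::to_string(r) + ":" + std::to_string(s),
                              "payload");
            });
        for (auto& w : workers) w.join();
        std::printf("stored %zu records\n", store.data.size());
        return 0;
    }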
Jiang, Junfeng; Liu, Tiegen; Zhang, Yimo; Liu, Lina; Zha, Ying; Zhang, Fan; Wang, Yunxin; Long, Pin
2006-01-20
A parallel demodulation system for extrinsic Fabry-Perot interferometer (EFPI) and fiber Bragg grating (FBG) sensors is presented, which is based on a Michelson interferometer and combines the methods of low-coherence interference and a Fourier-transform spectrum. The parallel demodulation theory is modeled with Fourier-transform spectrum technology, and a signal separation method with an EFPI and FBG is proposed. The design of an optical path difference scanning and sampling method without a reference light is described. Experiments show that the parallel demodulation system has good spectrum demodulation and low-coherence interference demodulation performance. It can realize simultaneous strain and temperature measurements while keeping the whole system configuration less complex.
A global database with parallel measurements to study non-climatic changes
NASA Astrophysics Data System (ADS)
Venema, Victor; Auchmann, Renate; Aguilar, Enric
2015-04-01
In this work we introduce the rationale behind the ongoing compilation of a parallel measurements database, under the umbrella of the International Surface Temperatures Initiative (ISTI) and with the support of the World Meteorological Organization. We intend this database to become instrumental for a better understanding of inhomogeneities affecting the evaluation of long-term changes in daily climate data. Long instrumental climate records are usually affected by non-climatic changes due to, e.g., relocations and changes in instrumentation, instrument height, or data collection and manipulation procedures. These so-called inhomogeneities distort the climate signal and can hamper the assessment of trends and variability. Thus, to study climatic changes we need to accurately distinguish non-climatic and climatic signals. The most direct way to study the influence of non-climatic changes on the distribution, and to understand the reasons for these biases, is the analysis of parallel measurements representing the old and new situation (in terms of, e.g., instruments or location). According to the limited number of available studies and our understanding of the causes of inhomogeneity, we expect that they will have a strong impact on the tails of the distribution of temperatures, and most likely of other climate elements. Our ability to statistically homogenize daily data will be increased by systematically studying different causes of inhomogeneity replicated through parallel measurements. Current studies of non-climatic changes using parallel data are limited to local and regional case studies. However, the effect of specific transitions depends on the local climate, and the most interesting climatic questions are about the systematic large-scale biases produced by transitions that occurred in many regions. Important potentially biasing transitions are the adoption of Stevenson screens, efforts to reduce undercatchment of precipitation, or the move to automatic weather stations. Thus a large global parallel dataset is highly desirable, as it allows for the study of systematic biases in the global record. In the ISTI Parallel Observations Science Team (POST), we will gather parallel data in their native format (to avoid undetectable conversion errors we will convert them to a standard format ourselves). We are interested in data from all climate variables at all time scales, from annual to sub-daily. High-resolution data are important for understanding the physical causes of the differences between the parallel measurements. For the same reason, we are also interested in other climate variables measured at the same station. For example, in the case of parallel temperature measurements, the influencing factors are expected to be insolation, wind and cloud cover; in the case of parallel precipitation measurements, wind and temperature are potentially important. Metadata that describe the parallel measurements, for example the types of the instruments, their siting, height, maintenance, etc., are as important as the data themselves and will be collected as well. Because they are widely used to study moderate extremes, we will compute the indices of the Expert Team on Climate Change Detection and Indices (ETCCDI). In case the daily data cannot be shared, we would appreciate these indices from parallel measurements. For more information: http://tinyurl.com/ISTI-Parallel
Some fast elliptic solvers on parallel architectures and their complexities
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Y.
1989-01-01
The discretization of separable elliptic partial differential equations leads to linear systems with special block tridiagonal matrices. Several methods are known to solve these systems, the most general of which is the Block Cyclic Reduction (BCR) algorithm, which handles equations with nonconstant coefficients. A method was recently proposed to parallelize and vectorize BCR. In this paper, the mapping of BCR on distributed memory architectures is discussed, and its complexity is compared with that of other approaches, including the Alternating-Direction method. A fast parallel solver is also described, based on an explicit formula for the solution, which has parallel computational complexity lower than that of parallel BCR.
Some fast elliptic solvers on parallel architectures and their complexities
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Youcef
1989-01-01
The discretization of separable elliptic partial differential equations leads to linear systems with special block tridiagonal matrices. Several methods are known to solve these systems, the most general of which is the Block Cyclic Reduction (BCR) algorithm, which handles equations with nonconstant coefficients. A method was recently proposed to parallelize and vectorize BCR. Here, the mapping of BCR on distributed memory architectures is discussed, and its complexity is compared with that of other approaches, including the Alternating-Direction method. A fast parallel solver is also described, based on an explicit formula for the solution, which has parallel computational complexity lower than that of parallel BCR.
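For concreteness, the scalar (point rather than block) form of cyclic reduction for one tridiagonal system is sketched below; BCR applies the same even/odd elimination at the block level. Every update inside a level is independent of the others, which is what makes the method attractive on parallel architectures. This is our illustrative code, not the papers':

    // Sketch: scalar cyclic reduction for a tridiagonal system
    //   a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i],  a[0] = c[n-1] = 0.
    #include <cstdio>
    #include <vector>

    std::vector<double> cyclic_reduction(std::vector<double> a, std::vector<double> b,
                                         std::vector<double> c, std::vector<double> d) {
        const int n = (int)b.size();
        int step;
        for (step = 1; step < n; step *= 2) {                // forward elimination
            for (int i = 2 * step - 1; i < n; i += 2 * step) {   // independent updates
                double alpha = -a[i] / b[i - step];
                double beta  = (i + step < n) ? -c[i] / b[i + step] : 0.0;
                b[i] += alpha * c[i - step] + ((i + step < n) ? beta * a[i + step] : 0.0);
                d[i] += alpha * d[i - step] + ((i + step < n) ? beta * d[i + step] : 0.0);
                a[i]  = alpha * a[i - step];
                c[i]  = (i + step < n) ? beta * c[i + step] : 0.0;
            }
        }
        std::vector<double> x(n, 0.0);
        for (; step >= 1; step /= 2)                         // back substitution
            for (int i = step - 1; i < n; i += 2 * step) {
                double xl = (i - step >= 0) ? x[i - step] : 0.0;
                double xr = (i + step <  n) ? x[i + step] : 0.0;
                x[i] = (d[i] - a[i] * xl - c[i] * xr) / b[i];
            }
        return x;
    }

    int main() {   // -x[i-1] + 2x[i] - x[i+1] = 1: the 1D Poisson model problem
        int n = 7;
        std::vector<double> a(n, -1.0), b(n, 2.0), c(n, -1.0), d(n, 1.0);
        a[0] = 0.0; c[n - 1] = 0.0;
        for (double v : cyclic_reduction(a, b, c, d)) std::printf("%g ", v);
        std::printf("\n");
        return 0;
    }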
NASA Technical Reports Server (NTRS)
Gryphon, Coranth D.; Miller, Mark D.
1991-01-01
PCLIPS (Parallel CLIPS) is a set of extensions to the C Language Integrated Production System (CLIPS) expert system language. PCLIPS is intended to provide an environment for the development of more complex, extensive expert systems. Multiple CLIPS expert systems are now capable of running simultaneously on separate processors, or separate machines, thus dramatically increasing the scope of solvable tasks within the expert systems. As a tool for parallel processing, PCLIPS allows for an expert system to add to its fact-base information generated by other expert systems, thus allowing systems to assist each other in solving a complex problem. This allows individual expert systems to be more compact and efficient, and thus run faster or on smaller machines.
Karasick, M.S.; Strip, D.R.
1996-01-30
A parallel computing system is described that comprises a plurality of uniquely labeled, parallel processors, each processor capable of modeling a three-dimensional object that includes a plurality of vertices, faces and edges. The system comprises a front-end processor for issuing a modeling command to the parallel processors, relating to a three-dimensional object. Each parallel processor, in response to the command and through the use of its own unique label, creates a directed-edge (d-edge) data structure that uniquely relates an edge of the three-dimensional object to one face of the object. Each d-edge data structure at least includes vertex descriptions of the edge and a description of the one face. As a result, each processor, in response to the modeling command, operates upon a small component of the model and generates results, in parallel with all other processors, without the need for processor-to-processor intercommunication. 8 figs.
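A rough C++ sketch of such a directed-edge record (the field names and layout are our guesses from the abstract, not the patent's actual structure):

    // Sketch: a directed-edge (d-edge) record relating one edge of a solid
    // to exactly one incident face, so each processor can own a small,
    // independent piece of the boundary representation.
    #include <cstdio>

    struct Point3 { double x, y, z; };

    struct DEdge {
        Point3 tail, head;   // vertex descriptions of the edge
        int    face_id;      // the one face this directed edge bounds
        int    owner_label;  // unique label of the processor owning it
    };

    int main() {
        DEdge e{{0, 0, 0}, {1, 0, 0}, /*face_id=*/42, /*owner_label=*/7};
        std::printf("edge (%g,%g,%g)->(%g,%g,%g) bounds face %d on processor %d\n",
                    e.tail.x, e.tail.y, e.tail.z, e.head.x, e.head.y, e.head.z,
                    e.face_id, e.owner_label);
        return 0;
    }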
Parallelization of the FLAPW method and comparison with the PPW method
NASA Astrophysics Data System (ADS)
Canning, Andrew; Mannstadt, Wolfgang; Freeman, Arthur
2000-03-01
The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. In the past the FLAPW method has been limited to systems of about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell running on up to 512 processors on a Cray T3E parallel supercomputer. Some results will also be presented on a comparison of the plane-wave pseudopotential method and the FLAPW method on large systems.
IOPA: I/O-aware parallelism adaption for parallel programs
Liu, Tao; Liu, Yi; Qian, Chen; Qian, Depei
2017-01-01
With the development of multi-/many-core processors, applications need to be written as parallel programs to improve execution efficiency. For data-intensive applications that use multiple threads to read/write files simultaneously, an I/O sub-system can easily become a bottleneck when too many of these types of threads exist; on the contrary, too few threads will cause insufficient resource utilization and hurt performance. Therefore, programmers must pay much attention to parallelism control to find the appropriate number of I/O threads for an application. This paper proposes a parallelism control mechanism named IOPA that can adjust the parallelism of applications to adapt to the I/O capability of a system and balance computing resources and I/O bandwidth. The programming interface of IOPA is also provided to programmers to simplify parallel programming. IOPA is evaluated using multiple applications with both solid state and hard disk drives. The results show that the parallel applications using IOPA can achieve higher efficiency than those with a fixed number of threads. PMID:28278236
IOPA: I/O-aware parallelism adaption for parallel programs.
Liu, Tao; Liu, Yi; Qian, Chen; Qian, Depei
2017-01-01
With the development of multi-/many-core processors, applications need to be written as parallel programs to improve execution efficiency. For data-intensive applications that use multiple threads to read/write files simultaneously, an I/O sub-system can easily become a bottleneck when too many of these types of threads exist; on the contrary, too few threads will cause insufficient resource utilization and hurt performance. Therefore, programmers must pay much attention to parallelism control to find the appropriate number of I/O threads for an application. This paper proposes a parallelism control mechanism named IOPA that can adjust the parallelism of applications to adapt to the I/O capability of a system and balance computing resources and I/O bandwidth. The programming interface of IOPA is also provided to programmers to simplify parallel programming. IOPA is evaluated using multiple applications with both solid state and hard disk drives. The results show that the parallel applications using IOPA can achieve higher efficiency than those with a fixed number of threads.
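The adaptation idea can be caricatured as a feedback loop that measures throughput and hill-climbs on the I/O thread count; everything below (the scratch-file workload, the adaptation rule, all names) is our minimal sketch, not IOPA's actual mechanism:

    // Sketch: adapt the I/O thread count by hill-climbing on measured throughput.
    #include <chrono>
    #include <cstdio>
    #include <fstream>
    #include <string>
    #include <thread>
    #include <vector>

    // Hypothetical workload: each thread writes 1 MiB to its own scratch file.
    static void do_io_batch(int nthreads) {
        std::vector<std::thread> ts;
        std::string chunk(1 << 20, 'x');
        for (int t = 0; t < nthreads; ++t)
            ts.emplace_back([t, &chunk] {
                std::ofstream f("scratch_" + std::to_string(t) + ".tmp");
                f << chunk;
            });
        for (auto& t : ts) t.join();
    }

    static double throughput(int nthreads) {        // MiB/s for one batch
        auto t0 = std::chrono::steady_clock::now();
        do_io_batch(nthreads);
        std::chrono::duration<double> dt = std::chrono::steady_clock::now() - t0;
        return nthreads / dt.count();
    }

    int main() {
        int n = 1, dir = +1;
        double best = throughput(n);
        for (int iter = 0; iter < 8; ++iter) {      // crude hill climbing
            int cand = n + dir;
            if (cand < 1) { dir = +1; continue; }
            double tp = throughput(cand);
            if (tp > best) { n = cand; best = tp; } // keep the improvement
            else           { dir = -dir; }          // otherwise reverse direction
            std::printf("iter %d: threads=%d, best=%.1f MiB/s\n", iter, n, best);
        }
        return 0;
    }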
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1995-01-01
This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
Crosetto, D.B.
1996-12-31
The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor to a plurality of slave processors to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer, a digital signal processor, a parallel transfer controller, and two three-port memory devices. A communication switch within each node connects it to a fast parallel hardware channel through which all high density data arrives or leaves the node. 6 figs.
Crosetto, Dario B.
1996-01-01
The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor (100) to a plurality of slave processors (200) to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer (104), a digital signal processor (114), a parallel transfer controller (106), and two three-port memory devices. A communication switch (108) within each node (100) connects it to a fast parallel hardware channel (70) through which all high density data arrives or leaves the node.
NASA Astrophysics Data System (ADS)
Singh, Santosh Kumar; Ghatak Choudhuri, Sumit
2018-05-01
Parallel connection of UPS inverters to enhance the power rating is a widely accepted practice. Inter-modular circulating currents appear when multiple inverter modules are connected in parallel to supply a variable critical load. Interfacing of modules therefore requires an intensive design using a proper control strategy. The potential of intuitive Fuzzy Logic (FL) control for systems with imprecise models is well known and can thus be utilised in parallel-connected UPS systems. A conventional FL controller is computationally intensive, especially with a higher number of input variables. This paper proposes the application of Hierarchical Fuzzy Logic control to a parallel-connected multi-modular inverter system to reduce the computational burden on the processor for a given switching frequency. Simulated results in the MATLAB environment and experimental verification using a Texas Instruments TMS320F2812 DSP are included to demonstrate the feasibility of the proposed control scheme.
Modularization of gradient-index optical design using wavefront matching enabled optimization.
Nagar, Jogender; Brocker, Donovan E; Campbell, Sawyer D; Easum, John A; Werner, Douglas H
2016-05-02
This paper proposes a new design paradigm which allows for a modular approach to replacing a homogeneous optical lens system with a higher-performance GRadient-INdex (GRIN) lens system using a WaveFront Matching (WFM) method. In multi-lens GRIN systems, a full-system-optimization approach can be challenging due to the large number of design variables. The proposed WFM design paradigm enables optimization of each component independently by explicitly matching the WaveFront Error (WFE) of the original homogeneous component at the exit pupil, resulting in an efficient design procedure for complex multi-lens systems.
A simple and low-cost permanent magnet system for NMR
NASA Astrophysics Data System (ADS)
Chonlathep, K.; Sakamoto, T.; Sugahara, K.; Kondo, Y.
2017-02-01
We have developed a simple, easy-to-build, and low-cost magnet system for NMR, whose homogeneity is about 4 × 10⁻⁴ at 57 mT, with a pair of commercially available ferrite magnets. This homogeneity corresponds to a spectral resolution of about 90 Hz at the hydrogen Larmor frequency of 2.45 MHz. The material cost of this NMR magnet system is little more than $100. The components can be printed by a 3D printer.
NASA Astrophysics Data System (ADS)
Tie, Lin
2017-08-01
In this paper, the controllability problem of two-dimensional discrete-time multi-input bilinear systems is completely solved. The homogeneous and the inhomogeneous cases are studied separately and necessary and sufficient conditions for controllability are established by using a linear algebraic method, which are easy to apply. Moreover, for the uncontrollable systems, near-controllability is considered and similar necessary and sufficient conditions are also obtained. Finally, examples are provided to demonstrate the results of this paper.
Boundary Korn Inequality and Neumann Problems in Homogenization of Systems of Elasticity
NASA Astrophysics Data System (ADS)
Geng, Jun; Shen, Zhongwei; Song, Liang
2017-06-01
This paper is concerned with a family of elliptic systems of linear elasticity with rapidly oscillating periodic coefficients, arising in the theory of homogenization. We establish uniform optimal regularity estimates for solutions of Neumann problems in a bounded Lipschitz domain with L² boundary data. The proof relies on a boundary Korn inequality for solutions of systems of linear elasticity and uses a large-scale Rellich estimate obtained in Shen (Anal PDE, arXiv:1505.00694v2).
Detecting opportunities for parallel observations on the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Lucks, Michael
1992-01-01
The presence of multiple scientific instruments aboard the Hubble Space Telescope provides opportunities for parallel science, i.e., the simultaneous use of different instruments for different observations. Determining whether candidate observations are suitable for parallel execution depends on numerous criteria (some involving quantitative tradeoffs) that may change frequently. A knowledge-based approach is presented for constructing a scoring function to rank candidate pairs of observations for parallel science. In the Parallel Observation Matching System (POMS), spacecraft knowledge and schedulers' preferences are represented using a uniform set of mappings, or knowledge functions. Assessment of parallel science opportunities is achieved via composition of the knowledge functions in a prescribed manner. The knowledge acquisition and explanation facilities of the system are presented. The methodology is applicable to many other multiple-criteria assessment problems.
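The abstract does not specify how the knowledge functions are composed, but a minimal sketch of the idea, with criteria as uniform mappings into [0, 1] combined here by an assumed weighted product (the criteria and names are ours), might look like:

    // Sketch: rank candidate observation pairs for parallel science by
    // composing per-criterion knowledge functions into a single score.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <functional>
    #include <utility>
    #include <vector>

    struct ObsPair { double pointing_sep_deg; double duration_ratio; };

    using Knowledge = std::function<double(const ObsPair&)>;  // pair -> [0,1]

    int main() {
        std::vector<std::pair<Knowledge, double>> criteria = {
            // small pointing separation is required for co-pointing instruments
            {[](const ObsPair& p) { return p.pointing_sep_deg < 0.1 ? 1.0 : 0.0; }, 1.0},
            // similar exposure durations waste less instrument time
            {[](const ObsPair& p) { return std::min(p.duration_ratio,
                                                    1.0 / p.duration_ratio); }, 0.5},
        };
        auto score = [&](const ObsPair& p) {      // weighted product of criteria
            double s = 1.0;
            for (auto& [k, w] : criteria) s *= std::pow(k(p), w);
            return s;
        };
        ObsPair cand{0.05, 2.0};
        std::printf("score = %.3f\n", score(cand));
        return 0;
    }

Representing every criterion as the same kind of mapping makes it easy to add, drop, or reweight criteria as schedulers' preferences change, which appears to be the point of the uniform representation.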
Parallel image reconstruction for 3D positron emission tomography from incomplete 2D projection data
NASA Astrophysics Data System (ADS)
Guerrero, Thomas M.; Ricci, Anthony R.; Dahlbom, Magnus; Cherry, Simon R.; Hoffman, Edward T.
1993-07-01
The problem of excessive computational time in 3D Positron Emission Tomography (3D PET) reconstruction is defined, and we present an approach to solving it through the construction of an inexpensive parallel processing system and the adoption of the FAVOR algorithm. Currently, the 3D reconstruction of the 610 images of a total-body procedure would require 80 hours, and the 3D reconstruction of the 620 images of a dynamic study would require 110 hours. An inexpensive parallel processing system for 3D PET reconstruction is constructed by integrating board-level products from multiple vendors. The system achieves its computational performance through the use of 6U VME boards carrying four i860 processors each; the processor boards from five manufacturers are discussed from our perspective. The new 3D PET reconstruction algorithm FAVOR (FAst VOlume Reconstructor), which promises a substantial speed improvement, is adopted. Preliminary results from parallelizing FAVOR are utilized in formulating architectural improvements for this problem. In summary, we are addressing the problem of excessive computational time in 3D PET image reconstruction through the construction of an inexpensive parallel processing system and the parallelization of a 3D reconstruction algorithm that uses the incomplete data set produced by current PET systems.
YAPPA: a Compiler-Based Parallelization Framework for Irregular Applications on MPSoCs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lovergine, Silvia; Tumeo, Antonino; Villa, Oreste
Modern embedded systems include hundreds of cores. Because of the difficulty in providing a fast, coherent memory architecture, these systems usually rely on non-coherent, non-uniform memory architectures with private memories for each core. However, programming these systems poses significant challenges. The developer must extract large amounts of parallelism, while orchestrating communication among cores to optimize application performance. These issues become even more significant with irregular applications, which present data sets that are difficult to partition, unpredictable memory accesses, unbalanced control flow and fine-grained communication. Hand-optimizing every single aspect is hard and time-consuming, and it often does not lead to the expected performance. There is a growing gap between such complex and highly-parallel architectures and the high-level languages used to describe the specification, which were designed for simpler systems and do not consider these new issues. In this paper we introduce YAPPA (Yet Another Parallel Programming Approach), a compilation framework based on LLVM for the automatic parallelization of irregular applications on modern MPSoCs. We start by considering an efficient parallel programming approach for irregular applications on distributed memory systems. We then propose a set of transformations that can reduce the development and optimization effort. The results of our initial prototype confirm the correctness of the proposed approach.
Parallel dynamics between non-Hermitian and Hermitian systems
NASA Astrophysics Data System (ADS)
Wang, P.; Lin, S.; Jin, L.; Song, Z.
2018-06-01
We reveal a connection between non-Hermitian and Hermitian systems by studying a family of non-Hermitian and Hermitian Hamiltonians based on exact solutions. In general, for a dynamic process in a non-Hermitian system H, there always exists a parallel dynamic process governed by the corresponding Hermitian-conjugate system H†. We show that a linear superposition of the two parallel dynamics is exactly equivalent to the time evolution of a state under a Hermitian Hamiltonian, and we present the relations among H, H†, and this equivalent Hermitian Hamiltonian.
A parallel solver for huge dense linear systems
NASA Astrophysics Data System (ADS)
Badia, J. M.; Movilla, J. L.; Climente, J. I.; Castillo, M.; Marqués, M.; Mayo, R.; Quintana-Ortí, E. S.; Planelles, J.
2011-11-01
HDSS (Huge Dense Linear System Solver) is a Fortran Application Programming Interface (API) to facilitate the parallel solution of very large dense systems to scientists and engineers. The API makes use of parallelism to yield an efficient solution of the systems on a wide range of parallel platforms, from clusters of processors to massively parallel multiprocessors. It exploits out-of-core strategies to leverage the secondary memory in order to solve huge linear systems (on the order of 100,000 unknowns). The API is based on the parallel linear algebra library PLAPACK, and on its Out-Of-Core (OOC) extension POOCLAPACK. Both PLAPACK and POOCLAPACK use the Message Passing Interface (MPI) as the communication layer and BLAS to perform the local matrix operations. The API provides a friendly interface to the users, hiding almost all the technical aspects related to the parallel execution of the code and the use of the secondary memory to solve the systems. In particular, the API can automatically select the best way to store and solve the systems, depending on the dimension of the system, the number of processes and the main memory of the platform. Experimental results on several parallel platforms report high performance, reaching more than 1 TFLOP with 64 cores to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors.
New version program summary
Program title: Huge Dense System Solver (HDSS)
Catalogue identifier: AEHU_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHU_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 87 062
No. of bytes in distributed program, including test data, etc.: 1 069 110
Distribution format: tar.gz
Programming language: Fortran90, C
Computer: Parallel architectures: multiprocessors, computer clusters
Operating system: Linux/Unix
Has the code been vectorized or parallelized?: Yes, includes MPI primitives.
RAM: Tested for up to 190 GB
Classification: 6.5
External routines: MPI (http://www.mpi-forum.org/), BLAS (http://www.netlib.org/blas/), PLAPACK (http://www.cs.utexas.edu/~plapack/), POOCLAPACK (ftp://ftp.cs.utexas.edu/pub/rvdg/PLAPACK/pooclapack.ps) (code for PLAPACK and POOCLAPACK is included in the distribution).
Catalogue identifier of previous version: AEHU_v1_0
Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 533
Does the new version supersede the previous version?: Yes
Nature of problem: Huge scale dense systems of linear equations, Ax = B, beyond standard LAPACK capabilities.
Solution method: The linear systems are solved by means of parallelized routines based on the LU factorization, using efficient secondary storage algorithms when the available main memory is insufficient.
Reasons for new version: In many applications we need to guarantee a high accuracy in the solution of very large linear systems, and we can do it by using double-precision arithmetic.
Summary of revisions: Version 1.1 can be used to solve linear systems using double-precision arithmetic. New version of the initialization routine: the user can choose the kind of arithmetic and the values of several parameters of the environment.
Running time: About 5 hours to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors using double-precision arithmetic on an eight-node commodity cluster with a total of 64 Intel cores.
Parallel Computing Using Web Servers and "Servlets".
ERIC Educational Resources Information Center
Lo, Alfred; Bloor, Chris; Choi, Y. K.
2000-01-01
Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…
File-access characteristics of parallel scientific workloads
NASA Technical Reports Server (NTRS)
Nieuwejaar, Nils; Kotz, David; Purakayastha, Apratim; Best, Michael; Ellis, Carla Schlatter
1995-01-01
Phenomenal improvements in the computational performance of multiprocessors have not been matched by comparable gains in I/O system performance. This imbalance has resulted in I/O becoming a significant bottleneck for many scientific applications. One key to overcoming this bottleneck is improving the performance of parallel file systems. The design of a high-performance parallel file system requires a comprehensive understanding of the expected workload. Unfortunately, until recently, no general workload studies of parallel file systems have been conducted. The goal of the CHARISMA project was to remedy this problem by characterizing the behavior of several production workloads, on different machines, at the level of individual reads and writes. The first set of results from the CHARISMA project describes the workloads observed on an Intel iPSC/860 and a Thinking Machines CM-5. This paper is intended to compare and contrast these two workloads for an understanding of their essential similarities and differences, isolating common trends and platform-dependent variances. Using this comparison, we are able to gain more insight into the general principles that should guide parallel file-system design.
NASA Technical Reports Server (NTRS)
Weed, Richard Allen; Sankar, L. N.
1994-01-01
An increasing amount of research activity in computational fluid dynamics has been devoted to the development of efficient algorithms for parallel computing systems. The increasing performance-to-price ratio of engineering workstations has led to research into procedures for implementing a parallel computing system composed of distributed workstations. This thesis proposal outlines an ongoing research program to develop efficient strategies for performing three-dimensional flow analysis on distributed computing systems. The PVM parallel programming interface was used to modify an existing three-dimensional flow solver, the TEAM code developed by Lockheed for the Air Force, to function as a parallel flow solver on clusters of workstations. Steady flow solutions were generated for three different wing and body geometries to validate the code and evaluate its performance. The proposed research will extend the parallel code development to determine the most efficient strategies for unsteady flow simulations.
Conversion of CO2 from Air into Methanol Using a Polyamine and a Homogeneous Ruthenium Catalyst.
Kothandaraman, Jotheeswari; Goeppert, Alain; Czaun, Miklos; Olah, George A; Prakash, G K Surya
2016-01-27
A highly efficient homogeneous catalyst system for the production of CH3OH from CO2 using pentaethylenehexamine and Ru-Macho-BH (1) at 125-165 °C in an ethereal solvent has been developed (initial turnover frequency = 70 h⁻¹ at 145 °C). Ease of separation of CH3OH is demonstrated by simple distillation from the reaction mixture. The robustness of the catalytic system was shown by recycling the catalyst over five runs without significant loss of activity (turnover number > 2000). Various sources of CO2 can be used for this reaction including air, despite its low CO2 concentration (400 ppm). For the first time, we have demonstrated that CO2 captured from air can be directly converted to CH3OH in 79% yield using a homogeneous catalytic system.
Chiang, Mao-Hsiung; Lin, Hao-Ting
2011-01-01
This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts for developing a 3D parallel mechanism robot. In the mechanical system, a 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first for realizing a 3D motion in the X-Y-Z coordinate system of the robot's end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated by using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, the Fourier series-based adaptive sliding-mode controller with H(∞) tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the position of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measuring position of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot consider the manufacturing and assembly tolerance of the joints and the parallel mechanism, so that errors between the actual position and the calculated 3D position of the end-effector exist. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system combining two CCDs serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end-effector. Furthermore, to verify the feasibility of the proposed parallel mechanism robot driven by three vertical pneumatic servo actuators, a full-scale test rig of the proposed parallel mechanism pneumatic robot is set up. Thus, simulations and experiments for different complex 3D motion profiles of the robot end-effector can be successfully achieved. The desired, the actual and the calculated 3D position of the end-effector can be compared in the complex 3D motion control.
Chiang, Mao-Hsiung; Lin, Hao-Ting
2011-01-01
This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts for developing a 3D parallel mechanism robot. In the mechanical system, a 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first for realizing a 3D motion in the X-Y-Z coordinate system of the robot's end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated by using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, the Fourier series-based adaptive sliding-mode controller with H∞ tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the position of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measuring position of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot consider the manufacturing and assembly tolerance of the joints and the parallel mechanism, so that errors between the actual position and the calculated 3D position of the end-effector exist. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system combining two CCDs serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end-effector. Furthermore, to verify the feasibility of the proposed parallel mechanism robot driven by three vertical pneumatic servo actuators, a full-scale test rig of the proposed parallel mechanism pneumatic robot is set up. Thus, simulations and experiments for different complex 3D motion profiles of the robot end-effector can be successfully achieved. The desired, the actual and the calculated 3D position of the end-effector can be compared in the complex 3D motion control. PMID:22247676
Optics Program Modified for Multithreaded Parallel Computing
NASA Technical Reports Server (NTRS)
Lou, John; Bedding, Dave; Basinger, Scott
2006-01-01
A powerful high-performance computer program for simulating and analyzing adaptive and controlled optical systems has been developed by modifying the serial version of the Modeling and Analysis for Controlled Optical Systems (MACOS) program to impart capabilities for multithreaded parallel processing on computing systems ranging from supercomputers down to Symmetric Multiprocessing (SMP) personal computers. The modifications included the incorporation of OpenMP, a portable and widely supported application programming interface that can be used to explicitly add multithreaded parallelism to an application program under a shared-memory programming model. OpenMP was applied to parallelize ray-tracing calculations, one of the major computing components in MACOS. Multithreading is also used in the diffraction propagation of light in MACOS, based on pthreads (POSIX threads, where POSIX denotes the Portable Operating System Interface). In tests of the parallelized version of MACOS, the speedup in ray-tracing calculations was found to be linear, or proportional to the number of processors, while the speedup in diffraction calculations ranged from 50 to 60 percent, depending on the type and number of processors. The parallelized version of MACOS is portable, and, to the user, its interface is basically the same as that of the original serial version of MACOS.
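A generic sketch of this kind of loop-level OpenMP parallelism over rays (not MACOS source code; the trace() function is a placeholder), using dynamic scheduling because per-ray work can vary:

    // Sketch: loop-level OpenMP parallelism over rays, with dynamic
    // scheduling since rays can cost unequal amounts of work.
    #include <omp.h>
    #include <cstdio>
    #include <vector>

    struct Ray { double ox, oy, dz; };

    static double trace(const Ray& r) {   // stand-in for per-ray computation
        double w = 0.0;
        for (int s = 0; s < 1000; ++s)
            w += r.ox * 1e-3 + r.oy * 1e-3 + r.dz * 1e-6;
        return w;
    }

    int main() {
        std::vector<Ray> rays(1 << 16, Ray{1.0, 2.0, 1.0});
        std::vector<double> opd(rays.size());
        #pragma omp parallel for schedule(dynamic, 256)  // balance uneven rays
        for (long i = 0; i < (long)rays.size(); ++i)
            opd[i] = trace(rays[i]);
        std::printf("traced %zu rays with up to %d threads\n",
                    rays.size(), omp_get_max_threads());
        return 0;
    }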
Evaluation of Job Queuing/Scheduling Software: Phase I Report
NASA Technical Reports Server (NTRS)
Jones, James Patton
1996-01-01
The recent proliferation of high-performance workstations and the increased reliability of parallel systems have illustrated the need for robust job management systems to support parallel applications. To address this issue, the Numerical Aerodynamic Simulation (NAS) supercomputer facility compiled a requirements checklist for job queuing/scheduling software. Next, NAS began an evaluation of the leading job management system (JMS) software packages against the checklist. This report describes the three-phase evaluation process and presents the results of Phase 1: Capabilities versus Requirements. We show that JMS support for running parallel applications on clusters of workstations and parallel systems is still insufficient, even in the leading JMSs. However, by ranking each JMS evaluated against the requirements, we provide data that will be useful to other sites in selecting a JMS.
NASA Astrophysics Data System (ADS)
Shao, Hongbing
Software testing of scientific software systems often suffers from the test oracle problem, i.e., the lack of test oracles. The Amsterdam discrete dipole approximation code (ADDA) is a scientific software system that can be used to simulate light scattering by scatterers of various types, and testing of ADDA suffers from the test oracle problem. In this thesis work, I established a framework for testing scientific software systems and evaluated it using ADDA as a case study. To test ADDA, I first used the CMMIE code as a pseudo oracle to test ADDA in simulating light scattering by a homogeneous sphere scatterer. Comparable results were obtained between ADDA and the CMMIE code, validating ADDA for use with homogeneous sphere scatterers. Then I used an experimental result obtained for light scattering by a homogeneous sphere to further validate the use of ADDA with sphere scatterers; ADDA produced a light-scattering simulation comparable to the experimentally measured result. Then I used metamorphic testing to generate test cases covering scatterers of various geometries, orientations, and homogeneity or non-homogeneity. ADDA was tested under each of these test cases and all tests passed. The use of statistical analysis together with metamorphic testing is discussed as a future direction. In short, using ADDA as a case study, I established a testing framework, including the use of pseudo oracles, experimental results, and metamorphic testing techniques, to test scientific software systems that suffer from the test oracle problem.
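One such metamorphic test can be sketched as follows; the simulate() stand-in and the specific relation used (rotational invariance of a sphere's scattering result) are illustrative assumptions, not the thesis's actual test cases:

    // Sketch: a metamorphic test. For a sphere, rotating the incident
    // direction must leave the scattering result unchanged, so two runs
    // check each other without a known-correct output (no test oracle).
    #include <cassert>
    #include <cmath>
    #include <cstdio>

    struct Direction { double x, y, z; };

    // Hypothetical stand-in for a light-scattering simulation: for a
    // sphere the result depends only on |k|, hence is rotation-invariant.
    static double simulate(const Direction& incident) {
        double k = std::sqrt(incident.x * incident.x +
                             incident.y * incident.y +
                             incident.z * incident.z);
        return 1.0 / (1.0 + k * k);              // toy cross-section
    }

    int main() {
        Direction d1{0.0, 0.0, 1.0};
        Direction d2{0.0, 1.0, 0.0};             // d1 rotated by 90 degrees
        double c1 = simulate(d1), c2 = simulate(d2);
        assert(std::fabs(c1 - c2) < 1e-12);      // metamorphic relation holds
        std::printf("metamorphic test passed: %g == %g\n", c1, c2);
        return 0;
    }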
Layout optimization using the homogenization method
NASA Technical Reports Server (NTRS)
Suzuki, Katsuyuki; Kikuchi, Noboru
1993-01-01
A generalized layout problem involving sizing, shape, and topology optimization is solved by using the homogenization method for three-dimensional linearly elastic shell structures, in order to explore the possibility of establishing an integrated design system for automotive car bodies, as an extension of the previous work by Bendsoe and Kikuchi. A formulation of a three-dimensional homogenized shell, a solution algorithm, and several examples of computing the optimum layout are presented in this first of two articles.
Organic, Organometallic and Bioorganic Catalysts for Electrochemical Reduction of CO2
Schlager, Stefanie; Portenkirchner, Engelbert; Sariciftci, Niyazi Serdar
2017-01-01
A broad review of homogeneous and heterogeneous catalytic approaches toward CO2 reduction using organic, organometallic, and bioorganic systems is provided. Electrochemical, bioelectrochemical and photoelectrochemical approaches are discussed in terms of their faradaic efficiencies, overpotentials and reaction mechanisms. Organometallic complexes as well as semiconductors and their homogeneous and heterogeneous catalytic activities are compared to enzymes. In both cases, their immobilization on electrodes is discussed and compared to homogeneous catalysts in solution. PMID:28383174
Lother, Steffen; Schiff, Steven J; Neuberger, Thomas; Jakob, Peter M; Fidler, Florian
2016-08-01
In this work, a prototype of an effective electromagnet with a field-of-view (FoV) of 140 mm for neonatal head imaging is presented. The efficient implementation succeeded by exploiting the use of steel plates as a housing system. We achieved a compromise between large sample volumes, high homogeneity, high B0 field, low power consumption, light weight, simple fabrication, and conserved mobility without the necessity of a dedicated water cooling system. The entire magnetic resonance imaging (MRI) system (electromagnet, gradient system, transmit/receive coil, control system) is introduced and its unique features discussed. Furthermore, simulations using a numerical optimization algorithm for the magnet and gradient system are presented. Functionality and quality of this low-field scanner operating at 23 mT (generated with 500 W) are illustrated using spin-echo imaging (in-plane resolution 1.6 mm × 1.6 mm, slice thickness 5 mm, and signal-to-noise ratio (SNR) of 23 with an acquisition time of 29 min). B0 field-mapping measurements are presented to characterize the homogeneity of the magnet, and the system's B0 field limitation of 80 mT is fully discussed. The cryogen-free system presented here demonstrates that this electromagnet with a ferromagnetic housing can be optimized for MRI with an enhanced and homogeneous magnetic field. It offers an alternative to prepolarized MRI designs in both readout field strength and power use. There are multiple indications for the clinical medical application of such low-field devices.
Schiff, Steven J.; Neuberger, Thomas; Jakob, Peter M.; Fidler, Florian
2017-01-01
Objective In this work, a prototype of an effective electromagnet with a field-of-view (FoV) of 140 mm for neonatal head imaging is presented. The efficient implementation succeeded by exploiting the use of steel plates as a housing system. We achieved a compromise between large sample volumes, high homogeneity, high B0 field, low power consumption, light weight, simple fabrication, and conserved mobility without the necessity of a dedicated water cooling system. Materials and methods The entire magnetic resonance imaging (MRI) system (electromagnet, gradient system, transmit/receive coil, control system) is introduced and its unique features discussed. Furthermore, simulations using a numerical optimization algorithm for the magnet and gradient system are presented. Results Functionality and quality of this low-field scanner operating at 23 mT (generated with 500 W) are illustrated using spin-echo imaging (in-plane resolution 1.6 mm × 1.6 mm, slice thickness 5 mm, and signal-to-noise ratio (SNR) of 23 with an acquisition time of 29 min). B0 field-mapping measurements are presented to characterize the homogeneity of the magnet, and the system's B0 field limitation of 80 mT is fully discussed. Conclusion The cryogen-free system presented here demonstrates that this electromagnet with a ferromagnetic housing can be optimized for MRI with an enhanced and homogeneous magnetic field. It offers an alternative to pre-polarized MRI designs in both readout field strength and power use. There are multiple indications for the clinical medical application of such low-field devices. PMID:26861046
A hybrid algorithm for parallel molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Mangiardi, Chris M.; Meyer, R.
2017-10-01
This article describes algorithms for the hybrid parallelization and SIMD vectorization of molecular dynamics simulations with short-range forces. The parallelization method combines domain decomposition with a thread-based parallelization approach. The goal of the work is to enable efficient simulations of very large (tens of millions of atoms) and inhomogeneous systems on many-core processors with hundreds or thousands of cores and SIMD units with large vector sizes. In order to test the efficiency of the method, simulations of a variety of configurations with up to 74 million atoms have been performed. Results are shown that were obtained on multi-core systems with Sandy Bridge and Haswell processors as well as systems with Xeon Phi many-core processors.
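The hybrid scheme above pairs a coarse spatial decomposition across processors with finer thread-level and SIMD parallelism inside each domain. As a rough illustration of the domain-decomposition layer only, the sketch below (Python with illustrative names; the paper's implementation targets compiled code with SIMD intrinsics, which this does not reproduce) splits particles into slabs along x, attaches a halo of foreign particles within the cutoff, and evaluates short-range Lennard-Jones forces per slab in a process pool:

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    CUTOFF = 2.5  # short-range cutoff in reduced units

    def slab_forces(task):
        # Forces on particles owned by one slab, from owned + halo neighbors.
        owned, halo = task
        pts = np.vstack([owned, halo])
        f = np.zeros_like(owned)
        for i in range(len(owned)):
            d = pts - owned[i]                    # vectors from particle i to all others
            r2 = (d * d).sum(axis=1)
            m = (r2 > 0) & (r2 < CUTOFF ** 2)
            fac = 24.0 * (2.0 / r2[m] ** 7 - 1.0 / r2[m] ** 4)  # LJ: 24(2/r^14 - 1/r^8)
            f[i] = -(fac[:, None] * d[m]).sum(axis=0)
        return f

    def forces_domain_decomposed(pos, nslabs=4):
        edges = np.linspace(pos[:, 0].min(), pos[:, 0].max() + 1e-9, nslabs + 1)
        owner = np.digitize(pos[:, 0], edges) - 1
        tasks, idx = [], []
        for s in range(nslabs):
            # halo: particles of other slabs within CUTOFF (in x) of this slab
            near = np.abs(pos[:, 0] - np.clip(pos[:, 0], edges[s], edges[s + 1]))
            tasks.append((pos[owner == s], pos[(owner != s) & (near < CUTOFF)]))
            idx.append(np.where(owner == s)[0])
        out = np.zeros_like(pos)
        with ProcessPoolExecutor(max_workers=nslabs) as ex:
            for ids, f in zip(idx, ex.map(slab_forces, tasks)):
                out[ids] = f
        return out

    if __name__ == "__main__":
        pos = np.random.default_rng(0).uniform(0.0, 20.0, size=(400, 3))
        print(forces_domain_decomposed(pos).sum(axis=0))  # net force ~ 0

Each pair is evaluated twice here (once per owner), so the assembled net force vanishes up to round-off; a production code would instead exploit Newton's third law and neighbor lists rather than the linear scan per particle.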
Event parallelism: Distributed memory parallel computing for high energy physics experiments
NASA Astrophysics Data System (ADS)
Nash, Thomas
1989-12-01
This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event-parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXes. These systems have proven very cost-effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point-to-point, rather than bussed, communication will be required. Developments in this direction are described.
The AIS-5000 parallel processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmitt, L.A.; Wilson, S.S.
1988-05-01
The AIS-5000 is a commercially available massively parallel processor which has been designed to operate in an industrial environment. It has fine-grained parallelism with up to 1024 processing elements arranged in a single-instruction multiple-data (SIMD) architecture. The processing elements are arranged in a one-dimensional chain that, for computer vision applications, can be as wide as the image itself. This architecture has superior cost/performance characteristics to those of two-dimensional mesh-connected systems. The design of the processing elements and their interconnections, as well as the software used to program the system, allows a wide variety of algorithms and applications to be implemented. In this paper, the overall architecture of the system is described. Various components of the system are discussed, including details of the processing elements, data I/O pathways and parallel memory organization. A virtual two-dimensional model for programming image-based algorithms for the system is presented. This model is supported by the AIS-5000 hardware and software and allows the system to be treated as a full-image-size, two-dimensional, mesh-connected parallel processor. Performance benchmarks are given for certain simple and complex functions.
Code Optimization and Parallelization on the Origins: Looking from Users' Perspective
NASA Technical Reports Server (NTRS)
Chang, Yan-Tyng Sherry; Thigpen, William W. (Technical Monitor)
2002-01-01
Parallel machines are becoming the main compute engines for high performance computing. Despite their increasing popularity, it is still a challenge for most users to learn the basic techniques to optimize/parallelize their codes on such platforms. In this paper, we present some experiences on learning these techniques for the Origin systems at the NASA Advanced Supercomputing Division. Emphasis of this paper will be on a few essential issues (with examples) that general users should master when they work with the Origins as well as other parallel systems.
Franciò, Giancarlo; Hintermair, Ulrich; Leitner, Walter
2015-01-01
Solution-phase catalysis using molecular transition metal complexes is an extremely powerful tool for chemical synthesis and a key technology for sustainable manufacturing. However, as the reaction complexity and thermal sensitivity of the catalytic system increase, engineering challenges associated with product separation and catalyst recovery can override the value of the product. This persistent downstream issue often renders industrial exploitation of homogeneous catalysis uneconomical despite impressive batch performance of the catalyst. In this regard, continuous-flow systems that allow steady-state homogeneous turnover in a stationary liquid phase while at the same time effecting integrated product separation at mild process temperatures represent a particularly attractive scenario. While continuous-flow processing is a standard procedure for large volume manufacturing, capitalizing on its potential in the realm of the molecular complexity of organic synthesis is still an emerging area that requires innovative solutions. Here we highlight some recent developments which have succeeded in realizing such systems by the combination of near- and supercritical fluids with homogeneous catalysts in supported liquid phases. The cases discussed exemplify how all three levels of continuous-flow homogeneous catalysis (catalyst system, separation strategy, process scheme) must be matched to locate viable process conditions. PMID:26574523
Franciò, Giancarlo; Hintermair, Ulrich; Leitner, Walter
2015-12-28
Solution-phase catalysis using molecular transition metal complexes is an extremely powerful tool for chemical synthesis and a key technology for sustainable manufacturing. However, as the reaction complexity and thermal sensitivity of the catalytic system increase, engineering challenges associated with product separation and catalyst recovery can override the value of the product. This persistent downstream issue often renders industrial exploitation of homogeneous catalysis uneconomical despite impressive batch performance of the catalyst. In this regard, continuous-flow systems that allow steady-state homogeneous turnover in a stationary liquid phase while at the same time effecting integrated product separation at mild process temperatures represent a particularly attractive scenario. While continuous-flow processing is a standard procedure for large volume manufacturing, capitalizing on its potential in the realm of the molecular complexity of organic synthesis is still an emerging area that requires innovative solutions. Here we highlight some recent developments which have succeeded in realizing such systems by the combination of near- and supercritical fluids with homogeneous catalysts in supported liquid phases. The cases discussed exemplify how all three levels of continuous-flow homogeneous catalysis (catalyst system, separation strategy, process scheme) must be matched to locate viable process conditions. © 2015 The Authors.
Ran, Yang; Su, Rongtao; Ma, Pengfei; Wang, Xiaolin; Zhou, Pu; Si, Lei
2016-05-10
We present a new quantitative index, the standard deviation, to measure the homogeneity of spectral lines in a fiber amplifier system, so as to find the relation between the stimulated Brillouin scattering (SBS) threshold and the homogeneity of the corresponding spectral lines. A theoretical model is built and a simulation framework has been established to estimate the SBS threshold when input spectra with different homogeneities are set. In our experiment, by setting the phase modulation voltage to a constant value and the modulation frequency to different values, spectral lines with different homogeneities can be obtained. The experimental results show that the SBS threshold decreases as the standard deviation of the modulated spectrum increases, which is in good agreement with the theoretical results. When the phase modulation voltage is confined to 10 V and the modulation frequency is set to 80 MHz, the standard deviation of the modulated spectrum equals 0.0051, the lowest value in our experiment; accordingly, the highest SBS threshold is achieved at this setting. This standard deviation can be a good quantitative index for evaluating the power scaling potential of a fiber amplifier system, and also a design guideline for better suppressing SBS.
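The homogeneity index itself is simple to compute once the modulated spectrum is sampled. A minimal sketch, assuming the line intensities are first normalized to sum to one (the abstract does not spell out its normalization, so this is one plausible reading):

    import numpy as np

    def spectral_homogeneity_std(intensities):
        # Standard deviation of the normalized spectral-line intensities.
        # Lower values = flatter, more homogeneous spectrum, which the
        # paper correlates with a higher SBS threshold.
        p = np.asarray(intensities, dtype=float)
        p = p / p.sum()
        return float(p.std())

    print(spectral_homogeneity_std([1, 1, 1, 1, 1]))  # 0.0 for a perfectly flat comb
    print(spectral_homogeneity_std([5, 1, 1, 1, 1]))  # > 0 for an uneven spectrum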
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quettier, Lionel
A neuroscience research center with very high field MRI equipment was opened in November 2006 by the CEA life science division. One of the imaging systems will require an 11.75 T magnet with a 900 mm warm bore, the so-called Iseult/Inumac magnet. Given its large aperture and field strength, this magnet is a challenge compared with the largest MRI systems ever built, and it is therefore being developed within an ambitious R&D program. With the objective of demonstrating the possibility of achieving field homogeneity better than 1 ppm using double pancake windings, a 24 double-pancake model coil working at 1.5 T has been designed. This model magnet has been manufactured by Alstom MSA and tested at CEA. It has been measured with very high precision, in order to fully characterize the field homogeneity and then to investigate and discriminate the parameters that influence the field map. This magnet has reached the bare-magnet field homogeneity specification expected for Iseult and has thus successfully demonstrated the feasibility of building a homogeneous magnet with the double pancake winding technique.
Parallel computation using boundary elements in solid mechanics
NASA Technical Reports Server (NTRS)
Chien, L. S.; Sun, C. T.
1990-01-01
The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming linear variation of displacements and tractions within a line element. Moreover, the MACSYMA symbolic program is employed to obtain analytical results for the influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. The global coefficient matrix is first formed concurrently. Then, a parallel Gaussian elimination solution scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Linear speedups and high efficiencies are shown for a demonstration problem solved on the Sequent Symmetry S81 parallel computing system.
Costa-Font, Joan; Kanavos, Panos
2007-01-01
To examine the effects of parallel simvastatin importation on drug price in three of the main parallel importing countries in the European Union, namely the United Kingdom, Germany, and the Netherlands. To estimate the market share of parallel imported simvastatin and the unit price (both locally produced and parallel imported), adjusted by defined daily dose, in the importing countries and in the exporting country (Spain). Ordinary least squares regression was used to examine the potential price competition resulting from parallel drug trade between 1997 and 2002. The market share of parallel imported simvastatin progressively expanded (especially in the United Kingdom and Germany) in the period examined, although the price difference between parallel imported and locally sourced simvastatin was not significant. Prices tended to rise in the United Kingdom and Germany and declined in the Netherlands. We found no evidence of pro-competitive effects resulting from the expansion of parallel trade. The development of parallel drug importation in the European Union produced unexpected effects (limited competition) on prices that differ from those expected from the introduction of a new competitor. This is partially the result of the scant incentives to competition under drug price regulation and of the lack of transparency in the drug reimbursement system, especially due to the effect of informal discounts (not observable to researchers). The case of simvastatin reveals that savings to the health system from parallel trade are trivial. Finally, of the three countries examined, the only country that shows a moderate downward pattern in simvastatin prices is the Netherlands. This effect can be attributed to the existence of a system that claws back informal discounts.
Distributed parallel computing in stochastic modeling of groundwater systems.
Dong, Yanhui; Li, Guomin; Xu, Haizhen
2013-03-01
Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution time of 500 realizations is reduced to 3% of that of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
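The pattern described, many independent realizations dispatched in batches to a worker pool, is straightforward to sketch. The snippet below is a generic stand-in, not the Java Parallel Processing Framework setup used in the study; run_realization is a hypothetical placeholder for generating one stochastic field and running the MODFLOW-type forward model on it:

    from concurrent.futures import ProcessPoolExecutor
    import random

    def run_realization(seed):
        # Placeholder for one stochastic groundwater model run.
        rng = random.Random(seed)
        return sum(rng.gauss(0.0, 1.0) for _ in range(10_000))

    def run_batch(n_realizations=500, workers=50):
        # Each realization is independent, so the batch is embarrassingly parallel.
        with ProcessPoolExecutor(max_workers=workers) as ex:
            return list(ex.map(run_realization, range(n_realizations)))

    if __name__ == "__main__":
        print(len(run_batch()), "realizations completed")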
Architectures for reasoning in parallel
NASA Technical Reports Server (NTRS)
Hall, Lawrence O.
1989-01-01
The research conducted has dealt with rule-based expert systems. Algorithms that may lead to their effective parallelization were investigated. Both the forward- and backward-chained control paradigms were investigated in the course of this work. The best computer architecture for the developed and investigated algorithms has also been researched. Two experimental vehicles were developed to facilitate this research: Backpac, a parallel backward-chained rule-based reasoning system, and Datapac, a parallel forward-chained rule-based reasoning system. Both systems have been written in Multilisp, a version of Lisp which contains the parallel construct future. Applying future to a function causes that function to be evaluated as a task running in parallel with the spawning task. Additionally, Backpac and Datapac have been run on several disparate parallel processors. The machines are an Encore Multimax with 10 processors, the Concert Multiprocessor with 64 processors, and a 32-processor BBN GP1000. Both the Concert and the GP1000 are switch-based machines; the Multimax has all its processors hung off a common bus. All are shared memory machines, but they have different schemes for sharing the memory and different locales for the shared memory. The main results of the investigations come from experiments on the 10-processor Encore and on the Concert with partitions of 32 or fewer processors. Additionally, experiments have been run with a stripped-down version of EMYCIN.
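Multilisp's future has close analogues in most modern languages, which makes the spawn-then-touch pattern above easy to illustrate. A minimal Python sketch (illustrative only; Backpac and Datapac themselves are Multilisp programs, and CPython threads serve here to show the semantics rather than true CPU parallelism):

    from concurrent.futures import ThreadPoolExecutor

    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    with ThreadPoolExecutor() as pool:
        task = pool.submit(fib, 25)    # spawn: runs concurrently with the caller
        other = fib(20)                # the spawning task keeps working meanwhile
        total = other + task.result()  # 'touching' the future blocks until it is ready
    print(total)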
Reliability models for dataflow computer systems
NASA Technical Reports Server (NTRS)
Kavi, K. M.; Buckles, B. P.
1985-01-01
The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.
Method for resource control in parallel environments using program organization and run-time support
NASA Technical Reports Server (NTRS)
Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)
2001-01-01
A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.
Method for resource control in parallel environments using program organization and run-time support
NASA Technical Reports Server (NTRS)
Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)
1999-01-01
A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.
Adaptive parallel logic networks
NASA Technical Reports Server (NTRS)
Martinez, Tony R.; Vidal, Jacques J.
1988-01-01
Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.
Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture
NASA Technical Reports Server (NTRS)
Jones, W. H.
1983-01-01
The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vieira, M.; Fantoni, A.; Martins, R.
1994-12-31
Using the Flying Spot Technique (FST), the authors have studied minority carrier transport parallel and perpendicular to the surface of amorphous silicon films (a-Si:H). To reduce slow transients due to charge redistribution in low-resistivity regions during the measurement, they applied a strong, homogeneously absorbed bias light. The defect density was estimated from Constant Photocurrent Method (CPM) measurements. The steady-state photocarrier grating technique (SSPG) is a one-dimensional approach; however, the modulation depth of the carrier profile also depends on film surface properties, such as the surface recombination velocity. Both methods yield comparable diffusion lengths when applied to a-Si:H.
Transient response of a laminated composite plate
NASA Technical Reports Server (NTRS)
Datta, S. K.; Ju, T. H.; Bratton, R. L.; Shah, A. H.
1992-01-01
Results are presented from an investigation of the effect of layering on transient wave propagation in a laminated cross-ply plate. Attention is given to the case of 2D plane strain in which a vertical line force is applied on a free surface of the plate; the line may be either parallel or perpendicular to the fibers in a ply. The results are given in both the time and frequency domains for the normal stress component in the x direction at a point on the surface of the plate on which the force is applied. Comparative results are also presented for a homogeneous plate whose properties are the static effective ones, when the number of plies is large.
Another new species of Phyllodytes (Anura: Hylidae) from the Atlantic Forest of northeastern Brazil.
Orrico, Victor G D; Dias, Iuri R; Marciano, Euvaldo
2018-04-09
A new species of the genus Phyllodytes is described from the State of Bahia, in the Atlantic Rain Forest of Northeastern Brazil. Phyllodytes praeceptor sp. nov. can be differentiated from other species of Phyllodytes by its medium size (SVL 20.7-25.8 mm in males); odontoids moderately developed; vocal sac externally visible; eyes large and prominent; dorsum homogeneously cream, except for a few scattered spots and blotches; venter areolate with two parallel, paramedial lines of larger tubercles; few tubercles on the ventral surface of the thighs, the largest being the medial one; a large tubercle on the skin around the tibio-tarsal articulation; nuptial pad rounded and moderately expanded.
Geometry induced phase transitions in magnetic spherical shell
NASA Astrophysics Data System (ADS)
Sloika, Mykola I.; Sheka, Denis D.; Kravchuk, Volodymyr P.; Pylypovskyi, Oleksandr V.; Gaididei, Yuri
2017-12-01
Equilibrium magnetization states in spherical shells of a magnetically soft ferromagnet form two out-of-surface vortices with codirectionally magnetized vortex cores at the sphere poles: (i) a whirligig state with the in-surface magnetization oriented along parallels, typical for thick shells; and (ii) a three-dimensional onion state with the in-surface magnetization in the meridional direction, realized in thin shells. The geometry of a spherical shell prohibits the existence of a spatially homogeneous magnetization distribution, even in the case of small sample radii. By varying the geometrical parameters, a continuous phase transition between the whirligig and onion states takes place. The detailed analytical description of the phase diagram is well confirmed by micromagnetic simulations.
A law of the wall for turbulent boundary layers with suction: Stevenson's formula revisited
NASA Astrophysics Data System (ADS)
Vigdorovich, Igor
2016-08-01
The turbulent velocity field in the viscous sublayer of a boundary layer with suction is, to a first approximation, homogeneous in any direction parallel to the wall and is determined by only three constant quantities: the wall shear stress, the suction velocity, and the fluid viscosity. This means that there exists a finite algebraic relation between the turbulent shear stress and the longitudinal mean-velocity gradient; using this relation as a closure condition for the equations of motion, we establish an exact asymptotic behavior of the velocity profile at the outer edge of the viscous sublayer. The obtained relationship provides a generalization of the logarithmic law to the case of wall suction.
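For reference, the commonly quoted form of Stevenson's law of the wall with suction, written in wall units with v0+ the suction velocity scaled by the friction velocity (a sketch of the classical 1963 result, not necessarily the exact expression revisited in the paper), is

    \frac{2}{v_0^+}\left[\sqrt{1 + v_0^+ u^+} - 1\right] = \frac{1}{\kappa}\ln y^+ + C.

Expanding the square root for v0+ -> 0 recovers the standard logarithmic law u+ = (1/kappa) ln y+ + C.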
Some aeroacoustic and aerodynamic applications of the theory of nonequilibrium thermodynamics
NASA Technical Reports Server (NTRS)
Horne, W. Clifton; Smith, Charles A.; Karamcheti, Krishnamurty
1990-01-01
An exact equation is derived for the dissipation function of a homogeneous, isotropic, Newtonian fluid, with terms associated with irreversible compression or expansion, wave radiation, and the square of the vorticity. This and other forms of the dissipation function are used to identify simple flows, such as incompressible channel flow, the potential vortex with rotational core, and incompressible, irrotational flow as minimally dissipative distributions. A comparison of the hydrodynamic and thermodynamic stability characteristics of a parallel shear flow suggests that an association exists between flow stability and the variation of net dissipation with disturbance amplitude, and that nonlinear effects, such as bounded disturbance amplitude, may be examined from a thermodynamic basis.
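A standard identity consistent with the decomposition described is, for a homogeneous Newtonian fluid with shear viscosity mu, bulk viscosity zeta, and vorticity omega (a sketch; the paper's exact grouping of terms may differ),

    \Phi = \mu\,|\boldsymbol{\omega}|^2 + \left(\zeta + \tfrac{4}{3}\mu\right)(\nabla\cdot\mathbf{u})^2 + 2\mu\,\nabla\cdot\left[(\mathbf{u}\cdot\nabla)\mathbf{u} - \mathbf{u}\,(\nabla\cdot\mathbf{u})\right],

where the first term is the square of the vorticity, the second is associated with irreversible compression or expansion, and the divergence term integrates to boundary contributions of the kind associated with wave radiation. For incompressible, irrotational flow the two volumetric terms vanish, consistent with the minimally dissipative distributions named above.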
Developing Information Power Grid Based Algorithms and Software
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.
Parallel checksumming of data chunks of a shared data object using a log-structured file system
Bent, John M.; Faibish, Sorin; Grider, Gary
2016-09-06
Checksum values are generated and used to verify the data integrity. A client executing in a parallel computing system stores a data chunk to a shared data object on a storage node in the parallel computing system. The client determines a checksum value for the data chunk; and provides the checksum value with the data chunk to the storage node that stores the shared object. The data chunk can be stored on the storage node with the corresponding checksum value as part of the shared object. The storage node may be part of a Parallel Log-Structured File System (PLFS), and the client may comprise, for example, a Log-Structured File System client on a compute node or burst buffer. The checksum value can be evaluated when the data chunk is read from the storage node to verify the integrity of the data that is read.
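The per-chunk scheme is easy to picture apart from the PLFS machinery. A minimal sketch, using CRC32 as the checksum and a process pool for the parallel part (illustrative only, not the actual PLFS client API):

    import zlib
    from concurrent.futures import ProcessPoolExecutor

    CHUNK = 1 << 20  # 1 MiB chunks

    def chunk_checksums(data: bytes):
        # One CRC32 per chunk, computed in parallel; each checksum is
        # stored alongside its chunk as part of the shared object.
        chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
        with ProcessPoolExecutor() as ex:
            return list(zip(chunks, ex.map(zlib.crc32, chunks)))

    def verify(chunk: bytes, stored: int) -> bool:
        # Re-checked on read to detect corruption at rest or in flight.
        return zlib.crc32(chunk) == stored

    if __name__ == "__main__":
        stored = chunk_checksums(b"shared data object contents" * 100_000)
        print(all(verify(c, s) for c, s in stored))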
NASA Astrophysics Data System (ADS)
Cornaggia, Flaminia; Jovane, Luigi; Alessandretti, Luciano; Alves de Lima Ferreira, Paulo; Lopes Figueira, Rubens C.; Rodelli, Daniel; Bueno Benedetti Berbel, Gláucia; Braga, Elisabete S.
2018-04-01
The Cananéia-Iguape system is a combined estuarine-lagoonal sedimentary system, located along the SE coast of Brazil. It consists of a network of channels and islands oriented mainly parallel to the coast. About 165 years ago, an artificial channel, the Valo Grande, was opened in the northern part of this system to connect a major river of the region, the Ribeira River, to the estuarine-lagoon complex. The Valo Grande was closed with a dam and re-opened twice between 1978 and 1995, when it was finally left open. These openings and closures of the Valo Grande had a significant influence on the Cananéia-Iguape system. In this study we present mineralogical, chemical, palaeomagnetic, and geochronological data from a sediment core collected at the southern end of the 50-km long lagoonal system, showing how the phases of the opening and closure of the channel through time are expressed in the sedimentary record. Despite the homogeneity of the grain size and magnetic properties throughout the core, significant variations in the mineralogical composition showed the influence of the opening of the channel on the sediment supply. Less mature sediment, with lower quartz and halite and higher kaolinite, brucite, and franklinite, corresponded to periods when the Valo Grande was open. On the other hand, higher abundance of quartz and halite, as well as the disappearance of other detrital minerals, corresponded with periods of absence or closure of the channel, indicating a more sea-influenced depositional setting. This work represents an example of anthropogenic influence on a lagoonal-estuarine sedimentary system, which is a common context along the coast of Brazil.
Carbonized Micro- and Nanostructures: Can Downsizing Really Help?
Naraghi, Mohammad; Chawla, Sneha
2014-01-01
In this manuscript, we discuss relationships between morphology and mechanical strength of carbonized structures, obtained via pyrolysis of polymeric precursors, across multiple length scales, from carbon fibers (CFs) with diameters of 5–10 μm to submicron thick carbon nanofibers (CNFs). Our research points to radial inhomogeneity, skin–core structure, as a size-dependent feature of polyacrylonitrile-based CFs. This inhomogeneity is a surface effect, caused by suppressed diffusion of oxygen and stabilization byproducts during stabilization through skin. Hence, reducing the precursor diameters from tens of microns to submicron appears as an effective strategy to develop homogeneous carbonized structures. Our research establishes the significance of this downsizing in developing lightweight structural materials by comparing intrinsic strength of radially inhomogeneous CFs with that of radially homogeneous CNF. While experimental studies on the strength of CNFs have targeted randomly oriented turbostratic domains, via continuum modeling, we have estimated that strength of CNFs can reach 14 GPa, when the basal planes of graphitic domains are parallel to nanofiber axis. The CNFs in our model are treated as composites of amorphous carbon (matrix), reinforced with turbostratic domains, and their strength is predicted using Tsai–Hill criterion. The model was calibrated with existing experimental data. PMID:28788651
Turbulent Mixing in Gravity Currents with Transverse Shear
NASA Astrophysics Data System (ADS)
White, Brian; Helfrich, Karl; Scotti, Alberto
2010-11-01
A parallel flow with horizontal shear and horizontal density gradient undergoes an intensification of the shear by gravitational tilting and stretching, rapidly breaking down into turbulence. Such flows have the potential for substantial mixing in estuaries and the coastal ocean. We present high-resolution numerical results for the mixing efficiency of these flows, which can be viewed as gravity currents with transverse shear, and contrast them with the well-studied case of stably stratified, homogeneous turbulence (uniform vertical density and velocity gradients). For a sheared gravity current, the buoyancy flux, turbulent Reynolds stress, and dissipation are well out of equilibrium. The total kinetic energy first increases as potential energy is transferred to the gravity current, but rapidly decays once turbulence sets in. Despite the non-equilibrium character, mixing efficiencies are slightly higher but qualitatively similar to homogeneous stratified turbulence. Efficiency decreases in the highly energetic regime where the dissipation rate is large compared with viscosity and stratification, ɛ/(νN^2)>100, further declining as turbulence decays and kinetic energy dissipation dominates the buoyancy flux. In general, the mixing rate, parameterized by a turbulent eddy diffusivity, increases with the strength of the transverse shear.
Brink, Wyger M; Versluis, Maarten J; Peeters, Johannes M; Börnert, Peter; Webb, Andrew G
2016-12-01
To explore the effects of high permittivity dielectric pads on the transmit and receive characteristics of a 3 Tesla body coil centered at the thighs, and their implications on image uniformity in receive array applications. Transmit and receive profiles of the body coil with and without dielectric pads were simulated and measured in healthy volunteers. Parallel imaging was performed using sensitivity encoding (SENSE) with and without pads. An intensity correction filter was constructed from the measured receive profile of the body coil. Measured and simulated data show that the dielectric pads improve the transmit homogeneity of the body coil in the thighs, but decrease its receive homogeneity, which propagates into reconstruction algorithms in which the body coil is used as a reference. However, by correcting for the body coil reception profile this effect can be mitigated. Combining high permittivity dielectric pads with an appropriate body coil receive sensitivity filter improves the image uniformity substantially compared with the situation without pads. Magn Reson Med 76:1951-1956, 2016. © 2015 International Society for Magnetic Resonance in Medicine. © 2015 International Society for Magnetic Resonance in Medicine.
Raman Spectroscopy of 3-D Printed Polymers
NASA Astrophysics Data System (ADS)
Espinoza, Vanessa; Wood, Erin; Hight Walker, Angela; Seppala, Jonathan; Kotula, Anthony
Additive manufacturing (AM) techniques, such as 3-D printing, are becoming an innovative and efficient way to produce highly customized parts for applications ranging from automotive to biomedical. Polymer-based AM parts can be produced from a myriad of materials and processing conditions to enable application-specific products. However, bringing 3-D printing from prototype to production relies on understanding the effect of processing conditions on the final product. Raman spectroscopy is a powerful and non-destructive characterization technique that can assist in determining the chemical homogeneity and physical alignment of polymer chains in 3-D printed materials. Two polymers commonly used in 3-D printing, acrylonitrile butadiene styrene (ABS) and polycarbonate (PC), were investigated using 1- and 2-D hyperspectral Raman imaging. In the case of ABS, a complex thermoplastic, the homogeneity of the material through the weld zone was investigated by comparing Raman peaks from each of the three components. In order to investigate the effect of processing conditions on polymer chain alignment, polarized Raman spectroscopy was used. In particular, the effects of print speed (shear rate) and strain on PC filaments were investigated with perpendicular and parallel polarizations. National Institute of Standards and Technology, Gaithersburg, MD; Society of Physics Students.
Human collagen produced in plants: more than just another molecule.
Shoseyov, Oded; Posen, Yehudit; Grynspan, Frida
2014-01-01
Consequential to its essential role as a mechanical support and affinity regulator in extracellular matrices, collagen constitutes a highly sought after scaffolding material for regeneration and healing applications. However, substantiated concerns have been raised with regard to the quality and safety of animal tissue-extracted collagen, particularly in relation to its immunogenicity, risk of disease transmission and overall quality and consistency. In parallel, contamination with undesirable cellular factors can significantly impair its bioactivity, vis-a-vis its impact on cell recruitment, proliferation and differentiation. Large-scale production of recombinant human collagen Type I (rhCOL1) in the tobacco plant provides a source of a homogeneous, heterotrimeric, thermally stable "virgin" collagen which self-assembles into fine homogeneous fibrils displaying intact binding sites, and it has been applied to form numerous functional scaffolds for tissue engineering and regenerative medicine. In addition, rhCOL1 can form liquid crystal structures, yielding a well-organized and mechanically strong membrane, two properties indispensable to extracellular matrix (ECM) mimicry. Overall, the shortcomings of animal- and cadaver-derived collagens arising from their source diversity and recycled nature are fully overcome in the plant setting, constituting a collagen source ideal for tissue engineering and regenerative medicine applications.
Apollo: AN Automatic Procedure to Forecast Transport and Deposition of Tephra
NASA Astrophysics Data System (ADS)
Folch, A.; Costa, A.; Macedonio, G.
2007-05-01
Volcanic ash fallout represents a serious threat to communities around active volcanoes. Reliable short-term predictions constitute valuable support for mitigating the effects of fallout on the surrounding area during an episode of crisis. We present a platform-independent automatic procedure aimed at daily forecasting of volcanic ash dispersal. The procedure builds on a series of programs and interfaces that allow an automatic data/results flow. First, the procedure downloads mesoscale meteorological forecasts for the region and period of interest, filters and converts data from their native format (typically GRIB files), and sets up the CALMET diagnostic meteorological model to obtain hourly wind fields and micro-meteorological variables on a finer mesh. Second, a 1-D version of the buoyant plume equations assesses the distribution of mass along the eruptive column depending on the obtained wind field and on the conditions at the vent (granulometry, mass flow rate, etc.). All these data are used as input for the ash dispersion model(s). Any model able to handle the physical complexity and coupled processes within adequate solution times can be plugged into the system by means of an interface. Currently, the procedure contains the models HAZMAP, TEPHRA and FALL3D, the latter in both serial and parallel versions. Parallelization of FALL3D is done at two levels: one over particle classes and one over the spatial domain. The last step is to post-process the model outcomes to end up with homogeneous maps written to portable-format files. The maps plot relevant quantities such as predicted ground load, expected deposit thickness, or visual- and flight-safety concentration thresholds. Several applications are shown as examples.
Biocellion: accelerating computer simulation of multicellular biological system models
Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya
2014-01-01
Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572
A parallel expert system for the control of a robotic air vehicle
NASA Technical Reports Server (NTRS)
Shakley, Donald; Lamont, Gary B.
1988-01-01
Expert systems can be used to govern the intelligent control of vehicles, for example the Robotic Air Vehicle (RAV). Due to the nature of the RAV system, the associated expert system needs to perform in a demanding real-time environment. The use of a parallel processing capability to support the associated expert system's computational requirement is critical in this application. Thus, algorithms for parallel real-time expert systems must be designed, analyzed, and synthesized. The design process incorporates consideration of the rule-set/fact-set size along with representation issues. These issues are examined with reference to information movement and various inference mechanisms. Also examined is the process of porting the RAV expert system functions from the TI Explorer, where they are implemented in the Automated Reasoning Tool (ART), to the iPSC Hypercube, where the system is synthesized using Concurrent Common LISP (CCLISP). The transformation process for the ART-to-CCLISP conversion is described. The performance characteristics of the parallel implementation of these expert systems on the iPSC Hypercube are compared to the TI Explorer implementation.
Multibus-based parallel processor for simulation
NASA Technical Reports Server (NTRS)
Ogrady, E. P.; Wang, C.-H.
1983-01-01
A Multibus-based parallel processor simulation system is described. The system is intended to serve as a vehicle for gaining hands-on experience, testing system and application software, and evaluating parallel processor performance during development of a larger system based on the horizontal/vertical-bus interprocessor communication mechanism. The prototype system consists of up to seven Intel iSBC 86/12A single-board computers which serve as processing elements, a multiple transmission controller (MTC) designed to support system operation, and an Intel Model 225 Microcomputer Development System which serves as the user interface and input/output processor. All components are interconnected by a Multibus/IEEE 796 bus. An important characteristic of the system is that it provides a mechanism for a processing element to broadcast data to other selected processing elements. This parallel transfer capability is provided through the design of the MTC and a minor modification to the iSBC 86/12A board. The operation of the MTC, the basic hardware-level operation of the system, and pertinent details about the iSBC 86/12A and the Multibus are described.
A Tutorial on Parallel and Concurrent Programming in Haskell
NASA Astrophysics Data System (ADS)
Peyton Jones, Simon; Singh, Satnam
This practical tutorial introduces the features available in Haskell for writing parallel and concurrent programs. We first describe how to write semi-explicit parallel programs by using annotations to express opportunities for parallelism and to help control the granularity of parallelism for effective execution on modern operating systems and processors. We then describe the mechanisms provided by Haskell for writing explicitly parallel programs with a focus on the use of software transactional memory to help share information between threads. Finally, we show how nested data parallelism can be used to write deterministically parallel programs which allows programmers to use rich data types in data parallel programs which are automatically transformed into flat data parallel versions for efficient execution on multi-core processors.
An iterative method for systems of nonlinear hyperbolic equations
NASA Technical Reports Server (NTRS)
Scroggs, Jeffrey S.
1989-01-01
An iterative algorithm for the efficient solution of systems of nonlinear hyperbolic equations is presented. Parallelism is evident at several levels. In the formation of the iteration, the equations are decoupled, thereby providing large-grain parallelism. Parallelism may also be exploited within the solves for each equation. Convergence of the iteration is established via a bounding function argument. Experimental results in two dimensions are presented.
RELIABLE COMPUTATION OF HOMOGENEOUS AZEOTROPES. (R824731)
It is important to determine the existence and composition of homogeneous azeotropes in the analysis of phase behavior and in the synthesis and design of separation systems, from both theoretical and practical standpoints. A new method for reliably locating an...
Contagion on complex networks with persuasion
NASA Astrophysics Data System (ADS)
Huang, Wei-Min; Zhang, Li-Jie; Xu, Xin-Jian; Fu, Xinchu
2016-03-01
The threshold model has been widely adopted as a classic model for studying contagion processes on social networks. We consider asymmetric individual interactions in social networks and introduce a persuasion mechanism into the threshold model. Specifically, we study a combination of adoption and persuasion in cascading processes on complex networks. It is found that with the introduction of the persuasion mechanism, the system may become more vulnerable to global cascades, and the effects of persuasion tend to be more significant in heterogeneous networks than those in homogeneous networks: a comparison between heterogeneous and homogeneous networks shows that under weak persuasion, heterogeneous networks tend to be more robust against random shocks than homogeneous networks; whereas under strong persuasion, homogeneous networks are more stable. Finally, we study the effects of adoption and persuasion threshold heterogeneity on systemic stability. Though both heterogeneities give rise to global cascades, the adoption heterogeneity has an overwhelmingly stronger impact than the persuasion heterogeneity when the network connectivity is sufficiently dense.
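To make the two-threshold idea concrete, here is a toy cascade on an adjacency list. It is loudly approximate: the published model's exact update rules and the asymmetry of interactions are not reproduced; this only illustrates an adoption threshold phi combined with a weaker persuasion threshold psi, with persuaded nodes exerting half the influence of adopters:

    def cascade(adj, phi=0.4, psi=0.2, seeds=(0,)):
        # adj: {node: [neighbors]}; states: 0 idle, 0.5 persuaded, 1 adopted.
        state = {v: 0.0 for v in adj}
        for s in seeds:
            state[s] = 1.0
        changed = True
        while changed:
            changed = False
            for v in adj:
                if state[v] == 1.0:
                    continue
                influence = sum(state[u] for u in adj[v]) / (len(adj[v]) or 1)
                new = 1.0 if influence >= phi else 0.5 if influence >= psi else state[v]
                if new > state[v]:
                    state[v], changed = new, True
        return state

    ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
    print(cascade(ring, seeds=(0, 1)))  # the cascade sweeps the whole ring

On this ring each node needs half of its two neighbors active, so two adjacent seeds trigger a global cascade; raising phi above 0.5 while keeping psi low shows how the persuaded intermediate state can still let activity creep through the network.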
Contagion on complex networks with persuasion
Huang, Wei-Min; Zhang, Li-Jie; Xu, Xin-Jian; Fu, Xinchu
2016-01-01
The threshold model has been widely adopted as a classic model for studying contagion processes on social networks. We consider asymmetric individual interactions in social networks and introduce a persuasion mechanism into the threshold model. Specifically, we study a combination of adoption and persuasion in cascading processes on complex networks. It is found that with the introduction of the persuasion mechanism, the system may become more vulnerable to global cascades, and the effects of persuasion tend to be more significant in heterogeneous networks than those in homogeneous networks: a comparison between heterogeneous and homogeneous networks shows that under weak persuasion, heterogeneous networks tend to be more robust against random shocks than homogeneous networks; whereas under strong persuasion, homogeneous networks are more stable. Finally, we study the effects of adoption and persuasion threshold heterogeneity on systemic stability. Though both heterogeneities give rise to global cascades, the adoption heterogeneity has an overwhelmingly stronger impact than the persuasion heterogeneity when the network connectivity is sufficiently dense. PMID:27029498
Contagion on complex networks with persuasion.
Huang, Wei-Min; Zhang, Li-Jie; Xu, Xin-Jian; Fu, Xinchu
2016-03-31
The threshold model has been widely adopted as a classic model for studying contagion processes on social networks. We consider asymmetric individual interactions in social networks and introduce a persuasion mechanism into the threshold model. Specifically, we study a combination of adoption and persuasion in cascading processes on complex networks. It is found that with the introduction of the persuasion mechanism, the system may become more vulnerable to global cascades, and the effects of persuasion tend to be more significant in heterogeneous networks than those in homogeneous networks: a comparison between heterogeneous and homogeneous networks shows that under weak persuasion, heterogeneous networks tend to be more robust against random shocks than homogeneous networks; whereas under strong persuasion, homogeneous networks are more stable. Finally, we study the effects of adoption and persuasion threshold heterogeneity on systemic stability. Though both heterogeneities give rise to global cascades, the adoption heterogeneity has an overwhelmingly stronger impact than the persuasion heterogeneity when the network connectivity is sufficiently dense.
Yue, Jun; Rebrov, Evgeny V; Schouten, Jaap C
2014-05-07
We report a three-phase slug flow and a parallel-slug flow as two major flow patterns found for nitrogen-decane-water flow through a glass microfluidic chip which features a long microchannel with a hydraulic diameter of 98 μm connected to a cross-flow mixer. The three-phase slug flow pattern is characterized by a flow of decane droplets containing single elongated nitrogen bubbles, which are separated by water slugs. This flow pattern was observed at a superficial velocity of decane (in the range of about 0.6 to 10 mm s(-1)) typically lower than that of water for a given superficial gas velocity in the range of 30 to 91 mm s(-1). The parallel-slug flow pattern is characterized by a continuous water flow in one part of the channel cross section and a parallel flow of decane with dispersed nitrogen bubbles in the adjacent part of the channel cross section; it was observed at a superficial velocity of decane (in the range of about 2.5 to 40 mm s(-1)) typically higher than that of water for each given superficial gas velocity. The three-phase slug flow can be seen as a superimposition of the decane-water and nitrogen-decane slug flows observed in the chip when the flow of the third phase (viz. nitrogen or water, respectively) was set to zero. The parallel-slug flow can be seen as a superimposition of the decane-water parallel flow and the nitrogen-decane slug flow observed in the chip under the corresponding two-phase flow conditions. In the case of small capillary numbers (Ca ≪ 0.1) and Weber numbers (We ≪ 1), the two-phase slug flow pressure drop model has been extended to a three-phase slug flow model in which the 'nitrogen-in-decane' droplet is treated as a pseudo-homogeneous droplet with an effective viscosity. The parallel flow and slug flow pressure drop models have been combined to obtain a parallel-slug flow model. The obtained models describe the experimental pressure drop with standard deviations of 8% and 12% for the three-phase slug flow and parallel-slug flow, respectively. An example is given to illustrate the use of the models in designing bifurcated microchannels that split the three-phase slug flow for high-throughput processing.
A nonrecursive order N preconditioned conjugate gradient: Range space formulation of MDOF dynamics
NASA Technical Reports Server (NTRS)
Kurdila, Andrew J.
1990-01-01
While excellent progress has been made in deriving algorithms that are efficient for certain combinations of system topologies and concurrent multiprocessing hardware, several issues must be resolved to incorporate transient simulation in the control design process for large space structures. Specifically, strategies must be developed that are applicable to systems with numerous degrees of freedom. In addition, the algorithms must have a growth potential in that they must also be amenable to implementation on forthcoming parallel system architectures. For mechanical system simulation, this fact implies that algorithms are required that induce parallelism on a fine scale, suitable for the emerging class of highly parallel processors; and transient simulation methods must be automatically load balancing for a wider collection of system topologies and hardware configurations. These problems are addressed by employing a combination range space/preconditioned conjugate gradient formulation of multi-degree-of-freedom dynamics. The method described has several advantages. In a sequential computing environment, the method has the features that: by employing regular ordering of the system connectivity graph, an extremely efficient preconditioner can be derived from the 'range space metric', as opposed to the system coefficient matrix; because of the effectiveness of the preconditioner, preliminary studies indicate that the method can achieve performance rates that depend linearly upon the number of substructures, hence the title 'Order N'; and the method is non-assembling. Furthermore, the approach is promising as a potential parallel processing algorithm in that the method exhibits a fine parallel granularity suitable for a wide collection of combinations of physical system topologies/computer architectures; and the method is easily load balanced among processors, and does not rely upon system topology to induce parallelism.
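The conjugate gradient skeleton underneath is standard; what the formulation changes is where the preconditioner comes from (the 'range space metric' rather than the coefficient matrix). A generic sketch with a Jacobi preconditioner standing in for the range-space one, which is not reproduced here:

    import numpy as np

    def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
        # Preconditioned conjugate gradient for symmetric positive definite A.
        # M_inv applies the inverse of the preconditioner to a residual.
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv(r)
            rz, rz_old = r @ z, rz
            p = z + (rz / rz_old) * p
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    x = pcg(A, b, lambda r: r / np.diag(A))  # Jacobi stand-in preconditioner
    print(x, np.allclose(A @ x, b))

Because each iteration needs only matrix-vector products and the preconditioner application, the fine-grain parallelism claimed above falls out naturally: both operations decompose over substructures without assembling a global matrix.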
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quandt, Norman; Roth, Robert; Syrowatka, Frank
2016-01-15
Bilayer films of MFe2O4 (M=Co, Ni) and BaTiO3 were prepared by spin coating of N,N-dimethylformamide/acetic acid solutions on platinum-coated silicon wafers. Five coating steps were applied to reach the desired thickness of 150 nm for both the ferrite and the perovskite layer. XRD, IR and Raman spectroscopy revealed the formation of phase-pure ferrite spinels and BaTiO3. Smooth surfaces with roughnesses on the order of 3 to 5 nm were found in AFM investigations. Saturation magnetizations of 347 emu cm^-3 for the CoFe2O4/BaTiO3 and 188 emu cm^-3 for the NiFe2O4/BaTiO3 bilayer, respectively, were found. For the CoFe2O4/BaTiO3 bilayer a strong magnetic anisotropy was observed, with coercivity fields of 5.1 kOe and 3.3 kOe (applied magnetic field perpendicular and parallel to the film surface), while for the NiFe2O4/BaTiO3 bilayer this effect is less pronounced. Saturated polarization hysteresis loops prove the presence of ferroelectricity in both systems. - Graphical abstract: The SEM image of the CoFe2O4/BaTiO3 bilayer on a Pt-Si substrate (left), magnetization as a function of the magnetic field perpendicular and parallel to the film plane (right top), and P-E and I-V hysteresis loops of the bilayer at room temperature. - Highlights: • Ferrite and perovskite oxides grown on platinum using the spin coating technique. • Columnar growth of cobalt ferrite particles on the substrate. • Surface investigation showed a homogeneous and smooth surface. • Perpendicular and parallel applied magnetic fields revealed a magnetic anisotropy. • Switching peaks and saturated P-E hysteresis loops show ferroelectricity.
MCNPX simulation of proton dose distribution in homogeneous and CT phantoms
NASA Astrophysics Data System (ADS)
Lee, C. C.; Lee, Y. J.; Tung, C. J.; Cheng, H. W.; Chao, T. C.
2014-02-01
A dose simulation system was constructed based on the MCNPX Monte Carlo package to simulate proton dose distributions in homogeneous and CT phantoms. Conversion from the Hounsfield units of a patient CT image set to the material information necessary for Monte Carlo simulation is based on Schneider's approach. In order to validate this simulation system, an inter-comparison of depth dose distributions obtained from the MCNPX, GEANT4 and FLUKA codes for a 160 MeV monoenergetic proton beam incident normally on the surface of a homogeneous water phantom was performed. For dose validation within the CT phantom, direct comparison with measurement is infeasible. Instead, this study took the approach of indirectly comparing the 50% ranges (R50%) along the central axis obtained by our system to the NIST CSDA ranges for beams with 160 and 115 MeV energies. Comparison results within the homogeneous phantom show good agreement. Differences in the simulated R50% among the three codes are less than 1 mm. For results within the CT phantom, the MCNPX-simulated water-equivalent Req,50% are compatible with the CSDA water-equivalent ranges from the NIST database, with differences of 0.7 and 4.1 mm for the 160 and 115 MeV beams, respectively.
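Schneider-type conversion reduces to a piecewise mapping from Hounsfield units to material composition and mass density. A toy sketch of that lookup (bin edges, material names, and densities are illustrative placeholders, not Schneider's published calibration):

    def hu_to_material(hu):
        # Piecewise HU -> (material, density in g/cm^3); placeholder values.
        bins = [
            (-1000, -950, "air",         0.001),
            (-950,  -120, "lung",        0.40),
            (-120,    20, "soft tissue", 1.00),
            (20,     120, "muscle",      1.06),
        ]
        for lo, hi, name, rho in bins:
            if lo <= hu < hi:
                return name, rho
        if hu >= 120:
            # Within bone, density is commonly modeled as linear in HU.
            return "bone", 1.10 + 5.0e-4 * (hu - 120)
        return "undefined", 0.0

    for hu in (-1000, 0, 800):
        print(hu, hu_to_material(hu))

Each voxel of the CT set is converted this way before being handed to the Monte Carlo transport code as a material-plus-density map.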
NASA Astrophysics Data System (ADS)
Sheynin, Yuriy; Shutenko, Felix; Suvorova, Elena; Yablokov, Evgenej
2008-04-01
High-rate interconnections are important subsystems in modern data processing and control systems of many classes. They are especially important in prospective embedded and on-board systems, which are typically multicomponent systems with parallel or distributed architectures [1]. Modular-architecture systems of previous generations were based on parallel busses that were widely used and standardised: VME, PCI, CompactPCI, etc. Bus evolution proceeded by improving bus protocol efficiency (burst transactions, split transactions, etc.) and increasing operating frequencies. However, due to the multi-drop nature of busses and multi-wire skew problems, the speedup of parallel bussing became more and more limited. For embedded and on-board systems an additional reason for this trend was the weight, size and power constraints of an interconnection and its components. Parallel interfaces have become technologically more challenging as their respective clock frequencies have increased to keep pace with the bandwidth requirements of their attached storage devices. Since each interface uses a data clock to gate and validate the parallel data (which is normally 8 bits or 16 bits wide), the clock frequency need only be equivalent to the byte rate or word rate being transmitted. In other words, for a given transmission frequency, the wider the data bus, the slower the clock. As the clock frequency increases, more high-frequency energy is available in each of the data lines, and a portion of this energy is dissipated in radiation. Each data line not only transmits this energy but also receives some from its neighbours. This form of mutual interference is commonly called "cross-talk," and the signal distortion it produces can become another major contributor to loss of data integrity unless compensated by appropriate cable designs. Other transmission problems such as frequency-dependent attenuation and signal reflections, while also applicable to serial interfaces, are more troublesome in parallel interfaces due to the number of additional cable conductors involved. In order to compensate for these drawbacks, higher-quality cables, shorter cable runs and fewer devices on the bus have been the norm. Finally, the physical bulk of the parallel cables makes them more difficult to route inside an enclosure, hinders cooling airflow and is incompatible with the trend toward smaller form-factor devices. Parallel busses have worked in systems for the past 20 years, but the accumulated problems dictate the need for change, and the technology is available to spur the transition. The general trend in high-rate interconnections has turned from parallel bussing to scalable interconnections with a network architecture and high-rate point-to-point links. Analysis showed that data links with serial information transfer can achieve higher throughput and efficiency, and this has been confirmed in various research and practical designs. Serial interfaces offer an improvement over older parallel interfaces: better performance, better scalability, and also better reliability, as parallel interfaces are at the limits of the speed at which they can transfer data reliably. The trend is reflected in the evolution of the major standards families: e.g., from PCI/PCI-X parallel bussing to the PCI Express interconnection architecture with serial lines, and from the CompactPCI parallel bus to the ATCA (Advanced Telecom Computing Architecture) specification with serial links and network topologies of the interconnection, etc.
In this article we consider the general characteristics and features of serial interconnections and give a brief overview of serial interconnection specifications. In more detail, we present the SpaceWire interconnection technology. Having been developed for space on-board system applications, SpaceWire has important features and characteristics that make it a promising interconnection for a wide range of embedded systems.
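To make the clock-versus-width arithmetic in the abstract concrete, the following is a hedged sketch in Python; the numbers, function names and 8b/10b overhead figure are illustrative assumptions of our own, not taken from the article. It compares the per-line clock a parallel bus needs against the line rate of a single serial lane delivering the same payload throughput.

def parallel_clock_hz(throughput_bytes_s, bus_width_bits):
    """Clock needed when bus_width_bits are latched per clock edge."""
    return throughput_bytes_s * 8 / bus_width_bits

def serial_line_rate_hz(throughput_bytes_s, coding_overhead=1.25):
    """Single-lane line rate; 8b/10b coding implies a 1.25x overhead (assumed)."""
    return throughput_bytes_s * 8 * coding_overhead

if __name__ == "__main__":
    target = 100e6  # 100 MB/s payload throughput (illustrative figure)
    # A 16-bit parallel bus needs only a modest clock per line ...
    print(f"16-bit parallel bus clock: {parallel_clock_hz(target, 16) / 1e6:.0f} MHz")
    # ... while a single serial lane must run far faster, but avoids
    # multi-wire skew and inter-line cross-talk entirely.
    print(f"single serial lane rate:   {serial_line_rate_hz(target) / 1e6:.0f} MHz")

The comparison captures the trade the abstract describes: the serial lane shifts the engineering burden from skew and cross-talk across many conductors to the signal integrity of one fast point-to-point link.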
Aerodynamic simulation on massively parallel systems
NASA Technical Reports Server (NTRS)
Haeuser, Jochem; Simon, Horst D.
1992-01-01
This paper briefly addresses the computational requirements for the analysis of complete configurations of aircraft and spacecraft currently under design for advanced transportation in commercial applications as well as in space flight. The discussion clearly shows that massively parallel systems are the only alternative that is both cost effective and able to provide the TeraFlops of performance needed to satisfy the narrow design margins of modern vehicles. It is assumed that the solution of the governing physical equations, i.e., the Navier-Stokes equations, which may be complemented by chemistry and turbulence models, is done on multiblock grids. This technique is situated between the fully structured approach of classical boundary-fitted grids and fully unstructured tetrahedral grids. A fully structured grid best represents the flow physics, while an unstructured grid gives the best geometrical flexibility. The multiblock grid employed is structured within a block, but completely unstructured at the block level. While a completely unstructured grid is not straightforward to parallelize, the above-mentioned multiblock grid is inherently parallel, in particular for multiple instruction multiple data (MIMD) machines. In this paper guidelines are provided for setting up or modifying an existing sequential code so that a direct parallelization on a massively parallel system is possible. Results are presented for three parallel systems, namely the Intel hypercube, the Ncube hypercube, and the FPS 500 system. Some preliminary results for an 8K CM2 machine are also mentioned. The code run is the two-dimensional grid generation module of Grid, a general two- and three-dimensional grid generation code for complex geometries. A system of nonlinear Poisson equations is solved. This code is also a good test case for complex fluid dynamics codes, since the same data structures are used. All systems provided good speedups, but message-passing MIMD systems seem to be best suited for large multiblock applications.
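The abstract states that the test code solves a system of nonlinear Poisson equations for grid generation, block by block. As a hedged sketch only, the Python/NumPy fragment below relaxes a linearized (Laplace) stand-in for those equations on a single block; the names, sizes and the linearization are our own simplification, not the Grid code itself.

import numpy as np

def smooth_block(x, y, iters=500, tol=1e-8):
    """Jacobi smoothing of one block's interior grid coordinates.

    A linearized stand-in for the nonlinear Poisson grid-generation
    equations; boundary rows and columns hold the block's fixed (or
    neighbour-supplied) coordinates and are never modified.
    """
    for _ in range(iters):
        x_new = 0.25 * (x[:-2, 1:-1] + x[2:, 1:-1] + x[1:-1, :-2] + x[1:-1, 2:])
        y_new = 0.25 * (y[:-2, 1:-1] + y[2:, 1:-1] + y[1:-1, :-2] + y[1:-1, 2:])
        diff = max(np.abs(x_new - x[1:-1, 1:-1]).max(),
                   np.abs(y_new - y[1:-1, 1:-1]).max())
        x[1:-1, 1:-1] = x_new
        y[1:-1, 1:-1] = y_new
        if diff < tol:  # converged: interior satisfies the discrete equations
            break
    return x, y

# One block of an (assumed) multiblock mesh, 33x33 points.
n = 33
xb, yb = np.meshgrid(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, n))
smooth_block(xb.copy(), yb.copy())

In a multiblock run, one such kernel executes per block, and only block-edge coordinates are exchanged with neighbouring blocks between sweeps, which is what makes the scheme naturally suited to message-passing MIMD machines.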
Schmideder, Andreas; Severin, Timm Steffen; Cremer, Johannes Heinrich; Weuster-Botz, Dirk
2015-09-20
A pH-controlled parallel stirred-tank bioreactor system was modified for parallel continuous cultivation on a 10 mL scale by connecting multichannel peristaltic pumps for feeding and medium removal with micro-pipes (250 μm inner diameter). Parallel chemostat processes with Escherichia coli as an example showed high reproducibility with regard to culture volume and flow rates, as well as dry cell weight, dissolved oxygen concentration and pH control at steady state (n=8, coefficient of variation <5%). Reliable estimation of the kinetic growth parameters of E. coli was easily achieved within one parallel experiment by preselecting ten different steady states. Scalability of the milliliter-scale steady-state results was demonstrated by chemostat studies with a stirred-tank bioreactor on a liter scale. Thus, parallel and continuously operated stirred-tank bioreactors on a milliliter scale facilitate time-saving and cost-reducing steady-state studies with microorganisms. The applied continuous bioreactor system overcomes the drawbacks of existing miniaturized bioreactors, such as poor mass transfer and insufficient process control. Copyright © 2015 Elsevier B.V. All rights reserved.
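The abstract does not state which growth model was fitted. Assuming the textbook chemostat relation that at steady state the specific growth rate equals the dilution rate (μ = D), together with Monod kinetics μ(S) = μ_max·S/(K_s + S), a sketch like the following (Python/SciPy, with synthetic data invented purely to exercise the fit, not results from the paper) estimates μ_max and K_s from a set of preselected steady states.

import numpy as np
from scipy.optimize import curve_fit

def monod(S, mu_max, Ks):
    """Monod kinetics: specific growth rate as a function of substrate S."""
    return mu_max * S / (Ks + S)

# At steady state in a chemostat, mu = D, so each preselected steady
# state yields one (residual substrate S, dilution rate D) data point.
# Synthetic example points (g/L, 1/h) for illustration only.
S = np.array([0.01, 0.02, 0.05, 0.1, 0.2, 0.4, 0.8, 1.5, 3.0, 6.0])
rng = np.random.default_rng(0)
D = monod(S, 0.7, 0.15) * (1 + 0.03 * rng.standard_normal(S.size))

(mu_max_fit, Ks_fit), _ = curve_fit(monod, S, D, p0=(0.5, 0.1))
print(f"mu_max = {mu_max_fit:.3f} 1/h, Ks = {Ks_fit:.3f} g/L")

Each steady state contributes one (S, D) point, which is why preselecting ten dilution rates within one parallel run suffices for a well-conditioned two-parameter fit.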
The implementation of an aeronautical CFD flow code onto distributed memory parallel systems
NASA Astrophysics Data System (ADS)
Ierotheou, C. S.; Forsey, C. R.; Leatham, M.
2000-04-01
The parallelization of an industrially important in-house computational fluid dynamics (CFD) code for calculating the airflow over complex aircraft configurations using the Euler or Navier-Stokes equations is presented. The code discussed is the flow solver module of the SAUNA CFD suite. This suite uses a novel grid system that may include block-structured hexahedral or pyramidal grids, unstructured tetrahedral grids, or a hybrid combination of both. To assist rapid convergence to a solution, a number of convergence acceleration techniques are employed, including implicit residual smoothing and a multigrid full approximation storage (FAS) scheme. Key features of the parallelization approach are the use of domain decomposition and encapsulated message passing to enable execution in parallel using a single programme multiple data (SPMD) paradigm. In the case where a hybrid grid is used, a unified grid partitioning scheme is employed to define the decomposition of the mesh. The parallel code has been tested using both structured and hybrid grids on a number of different distributed memory parallel systems and is now routinely used to perform industrial-scale aeronautical simulations.
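The abstract names domain decomposition with encapsulated message passing under an SPMD paradigm but gives no code. The fragment below is a minimal sketch of that pattern under assumed details of our own (mpi4py, a 1-D strip decomposition, the file name halo_demo.py); it is not the SAUNA implementation.

# Minimal SPMD halo exchange for a 1-D strip domain decomposition.
# Run with, e.g.: mpiexec -n 4 python halo_demo.py  (file name assumed)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
up = rank - 1 if rank > 0 else MPI.PROC_NULL           # neighbour above
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL  # neighbour below

# Local strip: interior rows plus one halo row at top and bottom.
ny, nx = 8, 16
u = np.full((ny + 2, nx), float(rank))

def exchange_halos(u):
    """Encapsulated message passing: swap boundary rows with neighbours."""
    # Send my top interior row up; receive the upper neighbour's
    # bottom row into my top halo. Sendrecv avoids deadlock, and
    # MPI.PROC_NULL makes edge ranks no-ops automatically.
    comm.Sendrecv(u[1].copy(), dest=up, recvbuf=u[0], source=up)
    # Symmetric exchange with the lower neighbour.
    comm.Sendrecv(u[-2].copy(), dest=down, recvbuf=u[-1], source=down)

exchange_halos(u)
# After the exchange, each process can update its interior rows
# independently, e.g. one sweep of a smoothing or residual step.

Encapsulating the exchange in one routine, as here, is what lets the same solver source run unchanged on every process: only the neighbour ranks and halo contents differ from rank to rank.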