Scalable problems and memory bounded speedup
NASA Technical Reports Server (NTRS)
Sun, Xian-He; Ni, Lionel M.
1992-01-01
In this paper three models of parallel speedup are studied: fixed-size speedup, fixed-time speedup, and memory-bounded speedup. The latter two consider the relationship between speedup and problem scalability. Two sets of speedup formulations are derived for these three models. One set considers uneven workload allocation and communication overhead and gives a more accurate estimation. The other set considers a simplified case and provides a clear picture of the impact of the sequential portion of an application on the possible performance gain from parallel processing. The simplified fixed-size speedup is Amdahl's law. The simplified fixed-time speedup is Gustafson's scaled speedup. The simplified memory-bounded speedup contains both Amdahl's law and Gustafson's scaled speedup as special cases. This study leads to a better understanding of parallel processing.
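A minimal sketch of the simplified formulations above, assuming the usual statement in terms of a serial fraction f, p processors, and a workload-scaling factor G(p) for the memory-bounded case (G(p) = 1 recovers Amdahl's law, G(p) = p recovers Gustafson's scaled speedup); the function names are illustrative, not taken from the paper.

```python
# Hedged sketch of the three simplified speedup models.
def amdahl(f, p):
    """Fixed-size speedup: serial fraction f, p processors."""
    return 1.0 / (f + (1.0 - f) / p)

def gustafson(f, p):
    """Fixed-time (scaled) speedup."""
    return f + (1.0 - f) * p

def memory_bounded(f, p, G):
    """Memory-bounded (Sun-Ni) speedup with workload-scaling factor G(p)."""
    g = G(p)
    return (f + (1.0 - f) * g) / (f + (1.0 - f) * g / p)

f, p = 0.05, 64
print(amdahl(f, p))                       # fixed-size speedup
print(gustafson(f, p))                    # fixed-time (scaled) speedup
print(memory_bounded(f, p, lambda q: q))  # reduces to Gustafson when G(p) = p
```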
Multiprocessor speed-up, Amdahl's Law, and the Activity Set Model of parallel program behavior
NASA Technical Reports Server (NTRS)
Gelenbe, Erol
1988-01-01
An important issue in the effective use of parallel processing is the estimation of the speed-up one may expect as a function of the number of processors used. Amdahl's Law has traditionally provided a guideline to this issue, although it appears excessively pessimistic in the light of recent experimental results. In this note, Amdahl's Law is amended by giving a greater importance to the capacity of a program to make effective use of parallel processing, but also recognizing the fact that imbalance of the workload of each processor is bound to occur. An activity set model of parallel program behavior is then introduced along with the corresponding parallelism index of a program, leading to upper and lower bounds to the speed-up.
A tool for simulating parallel branch-and-bound methods
NASA Astrophysics Data System (ADS)
Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail
2016-01-01
The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution; the design and study of load balancing algorithms is therefore a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolving the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
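A hedged sketch of the idea of replacing actual B&B resolution by a stochastic branching process: each synthetic node spawns children at random up to a depth limit, producing a search tree whose size can be varied for load-balancing studies. The binary branching, branching probability, and depth limit here are illustrative assumptions, not details of the simulator described above.

```python
import random

# Hedged sketch: generate a synthetic B&B search tree via a stochastic
# branching process and report its size.
def synthetic_bb_tree(p_branch=0.45, max_depth=20, seed=1):
    rng = random.Random(seed)
    nodes, stack = 0, [0]          # stack holds node depths
    while stack:
        depth = stack.pop()
        nodes += 1
        if depth < max_depth:
            for _ in range(2):     # binary branching
                if rng.random() < p_branch:
                    stack.append(depth + 1)
    return nodes

print(synthetic_bb_tree())
```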
Performance bounds on parallel self-initiating discrete-event
NASA Technical Reports Server (NTRS)
Nicol, David M.
1990-01-01
The use of massively parallel architectures to execute discrete-event simulations of what are termed self-initiating models is considered. A logical process in a self-initiating model schedules its own state re-evaluation times, independently of any other logical process, and sends its new state to other logical processes following the re-evaluation. The interest is in the effects of that communication on synchronization. The performance of various synchronization protocols is considered by deriving upper and lower bounds on optimal performance, upper bounds on Time Warp's performance, and lower bounds on the performance of a new conservative protocol. The analysis of Time Warp includes the overhead costs of state-saving and rollback. The analysis points out sufficient conditions for the conservative protocol to outperform Time Warp. The analysis also quantifies the sensitivity of performance to message fan-out, lookahead ability, and the probability distributions underlying the simulation.
On the impact of communication complexity in the design of parallel numerical algorithms
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1984-01-01
This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.
Computational efficiency of parallel combinatorial OR-tree searches
NASA Technical Reports Server (NTRS)
Li, Guo-Jie; Wah, Benjamin W.
1990-01-01
The performance of parallel combinatorial OR-tree searches is analytically evaluated. This performance depends on the complexity of the problem to be solved, the error allowance function, the dominance relation, and the search strategies. The exact performance may be difficult to predict due to the nondeterminism and anomalies of parallelism. The authors derive the performance bounds of parallel OR-tree searches with respect to the best-first, depth-first, and breadth-first strategies, and verify these bounds by simulation. They show that a near-linear speedup can be achieved with respect to a large number of processors for parallel OR-tree searches. Using the bounds developed, the authors derive sufficient conditions for assuring that parallelism will not degrade performance and necessary conditions for allowing parallelism to have a speedup greater than the ratio of the numbers of processors. These bounds and conditions provide the theoretical foundation for determining the number of processors required to assure a near-linear speedup.
Boundedness and exponential convergence in a chemotaxis model for tumor invasion
NASA Astrophysics Data System (ADS)
Jin, Hai-Yang; Xiang, Tian
2016-12-01
We revisit the following chemotaxis system modeling tumor invasion: u_t = Δu − ∇·(u∇v), v_t = Δv + wz, w_t = −wz, z_t = Δz − z + u, for x ∈ Ω, t > 0, in a smooth bounded domain Ω ⊂ R^n (n ≥ 1) with homogeneous Neumann boundary and initial conditions. This model was recently proposed by Fujie et al (2014 Adv. Math. Sci. Appl. 24 67-84) as a model for tumor invasion with the role of extracellular matrix incorporated, and was analyzed later by Fujie et al (2016 Discrete Contin. Dyn. Syst. 36 151-69), showing the uniform boundedness and convergence for n ≤ 3. In this work, we first show that the L^∞-boundedness of the system can be reduced to the boundedness of ‖u(·,t)‖_{L^{n/4+ε}(Ω)} for some ε > 0 alone, and then, for n ≥ 4, if the initial data ‖u_0‖_{L^{n/4}}, ‖z_0‖_{L^{n/2}} and ‖…
Methodology of modeling and measuring computer architectures for plasma simulations
NASA Technical Reports Server (NTRS)
Wang, L. P. T.
1977-01-01
A brief introduction to plasma simulation using computers and the difficulties encountered on currently available computers is given. Through the use of an analyzing and measuring methodology, SARA, the control flow and data flow of a particle simulation model, REM2-1/2D, are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential-type simulation model, an array/pipeline-type simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.
Multirate parallel distributed compensation of a cluster in wireless sensor and actor networks
NASA Astrophysics Data System (ADS)
Yang, Chun-xi; Huang, Ling-yun; Zhang, Hao; Hua, Wang
2016-01-01
The stabilisation problem for one of the clusters with bounded multiple random time delays and packet dropouts in wireless sensor and actor networks is investigated in this paper. A new multirate switching model is constructed to describe the feature of this single-input multiple-output linear system. Because controller design under the multiple constraints of the multirate switching model is difficult, this model is converted to a Takagi-Sugeno fuzzy model. By designing a multirate parallel distributed compensation, a sufficient condition is established to ensure that the closed-loop fuzzy control system is globally exponentially stable. The multirate parallel distributed compensation gains can be obtained by solving an auxiliary convex optimisation problem. Finally, two numerical examples are given to show that, compared with solving for a switching controller, the multirate parallel distributed compensation can be obtained easily. Furthermore, it has stronger robust stability than an arbitrary switching controller and a single-rate parallel distributed compensation under the same conditions.
Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian
The THOR neutral particle transport code enables simulation of complex geometries for various problems from reactor simulations to nuclear non-proliferation. It is undergoing a thorough V&V requiring computational efficiency. This has motivated various improvements including angular parallelization, outer iteration acceleration, and development of peripheral tools. For guiding future improvements to the code's efficiency, better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL's Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this resulted in a communication model and a parallel portion model. The former's accuracy is bounded by the variability of communication on Falcon while the latter has an error on the order of 1%.
Computational experience with a parallel algorithm for tetrangle inequality bound smoothing.
Rajan, K; Deo, N
1999-09-01
Determining molecular structure from interatomic distances is an important and challenging problem. Given a molecule with n atoms, lower and upper bounds on interatomic distances can usually be obtained only for a small subset of the n(n-1)/2 atom pairs, using NMR. Given the bounds so obtained on the distances between some of the atom pairs, it is often useful to compute tighter bounds on all the n(n-1)/2 pairwise distances. This process is referred to as bound smoothing. The initial lower and upper bounds for the pairwise distances not measured are usually assumed to be 0 and infinity. One method for bound smoothing is to use the limits imposed by the triangle inequality. The distance bounds so obtained can often be tightened further by applying the tetrangle inequality--the limits imposed on the six pairwise distances among a set of four atoms (instead of three for the triangle inequalities). The tetrangle inequality is expressed by the Cayley-Menger determinants. For every quadruple of atoms, each pass of the tetrangle inequality bound smoothing procedure finds upper and lower limits on each of the six distances in the quadruple. Applying the tetrangle inequalities to each of the (n choose 4) quadruples requires O(n^4) time. Here, we propose a parallel algorithm for bound smoothing employing the tetrangle inequality. Each pass of our algorithm requires O(n^3 log n) time on a CREW PRAM (Concurrent Read Exclusive Write Parallel Random Access Machine) with O(n/log n) processors. An implementation of this parallel algorithm on the Intel Paragon XP/S and its performance are also discussed.
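For illustration, a hedged sketch of the simpler triangle-inequality bound smoothing mentioned above (not the tetrangle/Cayley-Menger procedure itself), assuming symmetric matrices of lower and upper bounds with 0 and infinity as defaults for unmeasured pairs.

```python
import numpy as np

# Hedged sketch of triangle-inequality bound smoothing.
# lower/upper are symmetric n x n arrays; unmeasured pairs start at 0 and np.inf.
def triangle_smooth(lower, upper):
    L, U = lower.copy(), upper.copy()
    n = len(U)
    for k in range(n):   # shortest-path closure on uppers: u_ij <= u_ik + u_kj
        U = np.minimum(U, U[:, [k]] + U[[k], :])
    for k in range(n):   # lowers: l_ij >= l_ik - u_kj and l_ij >= l_jk - u_ik
        L = np.maximum(L, L[:, [k]] - U[[k], :])
        L = np.maximum(L, L[[k], :] - U[:, [k]])
    return np.maximum(L, 0.0), U

# Toy example: three atoms, the unmeasured pair (0,2) gets a finite upper bound.
U0 = np.array([[0.0, 1.5, np.inf], [1.5, 0.0, 2.0], [np.inf, 2.0, 0.0]])
L0 = np.zeros((3, 3))
print(triangle_smooth(L0, U0))
```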
Hard tissue as a composite material. I - Bounds on the elastic behavior.
NASA Technical Reports Server (NTRS)
Katz, J. L.
1971-01-01
Recent determination of the elastic moduli of hydroxyapatite by ultrasonic methods permits a re-examination of the Voigt or parallel model of the elastic behavior of bone, as a two phase composite material. It is shown that such a model alone cannot be used to describe the behavior of bone. Correlative data on the elastic moduli of dentin, enamel and various bone samples indicate the existence of a nonlinear dependence of elastic moduli on composition of hard tissue. Several composite models are used to calculate the bounds on the elastic behavior of these tissues. The limitations of these models are described, and experiments to obtain additional critical data are discussed.
Parallel algorithms for the molecular conformation problem
NASA Astrophysics Data System (ADS)
Rajan, Kumar
Given a set of objects, and some of the pairwise distances between them, the problem of identifying the positions of the objects in the Euclidean space is referred to as the molecular conformation problem. This problem is known to be computationally difficult. One of the most important applications of this problem is the determination of the structure of molecules. In the case of molecular structure determination, usually only the lower and upper bounds on some of the interatomic distances are available. The process of obtaining a tighter set of bounds between all pairs of atoms, using the available interatomic distance bounds, is referred to as bound-smoothing. One method for bound-smoothing is to use the limits imposed by the triangle inequality. The distance bounds so obtained can often be tightened further by applying the tetrangle inequality---the limits imposed on the six pairwise distances among a set of four atoms (instead of three for the triangle inequalities). The tetrangle inequality is expressed by the Cayley-Menger determinants. The sequential tetrangle-inequality bound-smoothing algorithm considers a quadruple of atoms at a time, and tightens the bounds on each of its six distances. The sequential algorithm is computationally expensive, and its application is limited to molecules with up to a few hundred atoms. Here, we conduct an experimental study of tetrangle-inequality bound-smoothing and reduce the sequential time by identifying the most computationally expensive portions of the process. We also present a simple criterion to determine which of the quadruples of atoms are likely to be tightened the most by tetrangle-inequality bound-smoothing. This test could be used to enhance the applicability of this process to large molecules. We map the problem of parallelizing tetrangle-inequality bound-smoothing to that of generating disjoint packing designs of a certain kind. We map this, in turn, to a regular-graph coloring problem, and present a simple, parallel algorithm for tetrangle-inequality bound-smoothing. We implement the parallel algorithm on the Intel Paragon XP/S, and apply it to real-life molecules. Our results show that with this parallel algorithm, the tetrangle inequality can be applied to large molecules in a reasonable amount of time. We extend the regular graph to represent more general packing designs, and present a coloring algorithm for this graph. This can be used to generate constant-weight binary codes in parallel. Once a tighter set of distance bounds is obtained, the molecular conformation problem is usually formulated as a non-linear optimization problem, and a global optimization algorithm is then used to solve the problem. Here we present a parallel, deterministic algorithm for the optimization problem based on Interval Analysis. We implement our algorithm, using dynamic load balancing, on a network of Sun Ultra-Sparc workstations. Our experience with this algorithm shows that its application is limited to small instances of the molecular conformation problem, where the number of measured, pairwise distances is close to the maximum value. However, since the interval method eliminates a substantial portion of the initial search space very quickly, it can be used to prune the search space before any of the more efficient, nondeterministic methods can be applied.
Townsend, James T; Eidels, Ami
2011-08-01
Increasing the number of available sources of information may impair or facilitate performance, depending on the capacity of the processing system. Tests performed on response time distributions are proving to be useful tools in determining the workload capacity (as well as other properties) of cognitive systems. In this article, we develop a framework and relevant mathematical formulae that represent different capacity assays (Miller's race model bound, Grice's bound, and Townsend's capacity coefficient) in the same space. The new space allows a direct comparison between the distinct bounds and the capacity coefficient values and helps explicate the relationships among the different measures. An analogous common space is proposed for the AND paradigm, relating the capacity index to the Colonius-Vorberg bounds. We illustrate the effectiveness of the unified spaces by presenting data from two simulated models (standard parallel, coactive) and a prototypical visual detection experiment. A conversion table for the unified spaces is provided.
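A hedged sketch of the OR-paradigm quantities named above (Miller's race model bound, Grice's bound, and the capacity coefficient), computed from empirical response-time samples; the estimator choices and simulated data are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def ecdf(samples, t):
    """Empirical CDF of response-time samples evaluated at times t."""
    return np.searchsorted(np.sort(samples), t, side='right') / len(samples)

def capacity_or(rt_ab, rt_a, rt_b, t):
    """Miller (race model) bound, Grice bound, and capacity coefficient C_OR(t)."""
    Fa, Fb, Fab = ecdf(rt_a, t), ecdf(rt_b, t), ecdf(rt_ab, t)
    miller = np.minimum(Fa + Fb, 1.0)   # race model (upper) bound on F_AB
    grice = np.maximum(Fa, Fb)          # Grice (lower) bound on F_AB
    with np.errstate(divide='ignore', invalid='ignore'):
        c_or = np.log(1 - Fab) / (np.log(1 - Fa) + np.log(1 - Fb))
    return miller, grice, c_or

t = np.linspace(200, 800, 7)            # illustrative time points (ms)
rng = np.random.default_rng(0)
rt_a, rt_b = rng.gamma(9, 50, 500), rng.gamma(10, 50, 500)
rt_ab = np.minimum(rng.gamma(9, 50, 500), rng.gamma(10, 50, 500))  # race-like double-target RTs
print(capacity_or(rt_ab, rt_a, rt_b, t))
```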
Enhancing PC Cluster-Based Parallel Branch-and-Bound Algorithms for the Graph Coloring Problem
NASA Astrophysics Data System (ADS)
Taoka, Satoshi; Takafuji, Daisuke; Watanabe, Toshimasa
A branch-and-bound algorithm (BB for short) is the most general technique for dealing with various combinatorial optimization problems, but even when it is used, computation time is likely to increase exponentially, so we consider parallelization to reduce it. It has been reported that the computation time of a parallel BB heavily depends upon node-variable selection strategies. In a parallel BB, it is also necessary to prevent an increase in communication time, so it is important to pay attention to how many and what kind of nodes are to be transferred (called the sending-node selection strategy). In this paper, for the graph coloring problem, we propose several sending-node selection strategies for a parallel BB algorithm, adopting MPI for parallelization, and experimentally evaluate how these strategies affect the computation time of a parallel BB on a PC cluster network.
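For context, a hedged and purely sequential sketch of the branch-and-bound skeleton that such a parallel BB distributes across processors: branch on the feasible colors of the next vertex and prune against the best complete coloring found so far. The pruning rule and symmetry breaking are generic textbook choices, not the node-selection strategies proposed in the paper.

```python
# Hedged sketch: sequential branch-and-bound for the chromatic number.
def chromatic_number_bb(adj):
    n = len(adj)
    best = {'k': n, 'colors': list(range(n))}

    def extend(v, colors, used):
        if used >= best['k']:                 # bound: cannot beat the incumbent
            return
        if v == n:                            # complete, improved coloring
            best['k'], best['colors'] = used, colors[:]
            return
        for c in range(min(used + 1, best['k'])):   # branch (with symmetry breaking)
            if all(colors[u] != c for u in adj[v] if u < v):
                colors[v] = c
                extend(v + 1, colors, max(used, c + 1))
        colors[v] = -1

    extend(0, [-1] * n, 0)
    return best['k'], best['colors']

# A 5-cycle needs 3 colors.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(chromatic_number_bb(adj))
```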
Optimizing the Four-Index Integral Transform Using Data Movement Lower Bounds Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajbhandari, Samyam; Rastello, Fabrice; Kowalski, Karol
The four-index integral transform is a fundamental and computationally demanding calculation used in many computational chemistry suites such as NWChem. It transforms a four-dimensional tensor from an atomic basis to a molecular basis. This transformation is most efficiently implemented as a sequence of four tensor contractions that each contract a four-dimensional tensor with a two-dimensional transformation matrix. Differing degrees of permutation symmetry in the intermediate and final tensors in the sequence of contractions cause intermediate tensors to be much larger than the final tensor and limit the number of electronic states in the modeled systems. Loop fusion, in conjunction with tiling, can be very effective in reducing the total space requirement, as well as data movement. However, the large number of possible choices for loop fusion and tiling, and data/computation distribution across a parallel system, make it challenging to develop an optimized parallel implementation for the four-index integral transform. We develop a novel approach to address this problem, using lower bounds modeling of data movement complexity. We establish relationships between available aggregate physical memory in a parallel computer system and ineffective fusion configurations, enabling their pruning and consequent identification of effective choices and a characterization of optimality criteria. This work has resulted in the development of a significantly improved implementation of the four-index transform that enables higher performance and the ability to model larger electronic systems than the current implementation in the NWChem quantum chemistry software suite.
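A hedged sketch of the four-index transform written as four successive contractions (each O(N^5)) rather than a single O(N^8) quadruple sum; the permutation symmetry, loop fusion, tiling, and parallel distribution discussed above are deliberately omitted, and the array names are illustrative.

```python
import numpy as np

# Hedged sketch: atomic-basis -> molecular-basis four-index transform as four
# sequential contractions with a two-dimensional transformation matrix C.
def ao_to_mo(eri_ao, C):
    t1 = np.einsum('pqrs,pi->iqrs', eri_ao, C)   # contract first index
    t2 = np.einsum('iqrs,qj->ijrs', t1, C)       # second index
    t3 = np.einsum('ijrs,rk->ijks', t2, C)       # third index
    return np.einsum('ijks,sl->ijkl', t3, C)     # fourth index

N = 8
eri_ao = np.random.rand(N, N, N, N)   # atomic-basis tensor (illustrative data)
C = np.random.rand(N, N)              # transformation matrix
print(ao_to_mo(eri_ao, C).shape)
```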
The role of bed-parallel slip in the development of complex normal fault zones
NASA Astrophysics Data System (ADS)
Delogkos, Efstratios; Childs, Conrad; Manzocchi, Tom; Walsh, John J.; Pavlides, Spyros
2017-04-01
Normal faults exposed in Kardia lignite mine, Ptolemais Basin, NW Greece formed at the same time as bed-parallel slip-surfaces, so that while the normal faults grew they were intermittently offset by bed-parallel slip. Following offset by a bed-parallel slip-surface, further fault growth is accommodated by reactivation on one or both of the offset fault segments. Where one fault is reactivated the site of bed-parallel slip is a bypassed asperity. Where both faults are reactivated, they propagate past each other to form a volume between overlapping fault segments that displays many of the characteristics of relay zones, including elevated strains and transfer of displacement between segments. Unlike conventional relay zones, however, these structures contain either a repeated or a missing section of stratigraphy which has a thickness equal to the throw of the fault at the time of the bed-parallel slip event, and the displacement profiles along the relay-bounding fault segments have discrete steps at their intersections with bed-parallel slip-surfaces. With further increase in displacement, the overlapping fault segments connect to form a fault-bound lens. Conventional relay zones form during initial fault propagation, but with coeval bed-parallel slip, relay-like structures can form later in the growth of a fault. Geometrical restoration of cross-sections through selected faults shows that repeated bed-parallel slip events during fault growth can lead to complex internal fault zone structure that masks its origin. Bed-parallel slip, in this case, is attributed to flexural-slip arising from hanging-wall rollover associated with a basin-bounding fault outside the study area.
Performance analysis of parallel branch and bound search with the hypercube architecture
NASA Technical Reports Server (NTRS)
Mraz, Richard T.
1987-01-01
With the availability of commercial parallel computers, researchers are examining new classes of problems which might benefit from parallel computing. This paper presents results of an investigation of the class of search intensive problems. The specific problem discussed is the Least-Cost Branch and Bound search method of deadline job scheduling. The object-oriented design methodology was used to map the problem into a parallel solution. While the initial design was good for a prototype, the best performance resulted from fine-tuning the algorithm for a specific computer. The experiments analyze the computation time, the speedup over a VAX 11/785, and the load balance of the problem when using a loosely coupled multiprocessor system based on the hypercube architecture.
Data parallel sorting for particle simulation
NASA Technical Reports Server (NTRS)
Dagum, Leonardo
1992-01-01
Sorting on a parallel architecture is a communications intensive event which can incur a high penalty in applications where it is required. In the case of particle simulation, only integer sorting is necessary, and sequential implementations easily attain the minimum performance bound of O(N) for N particles. Parallel implementations, however, have to cope with the parallel sorting problem which, in addition to incurring a heavy communications cost, can make the minimum performance bound difficult to attain. This paper demonstrates how the sorting problem in a particle simulation can be reduced to a merging problem, and describes an efficient data parallel algorithm to solve this merging problem in a particle simulation. The new algorithm is shown to be optimal under conditions usual for particle simulation, and its fieldwise implementation on the Connection Machine is analyzed in detail. The new algorithm is about four times faster than a fieldwise implementation of radix sort on the Connection Machine.
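A hedged sketch of the O(N) sequential integer (counting) sort alluded to above, keyed on particle cell indices; the data layout is an illustrative assumption, and the paper's data-parallel merging formulation is not shown.

```python
import numpy as np

# Hedged sketch: counting sort of particles by cell index in O(N + n_cells).
def sort_particles_by_cell(cell_ids, n_cells):
    counts = np.bincount(cell_ids, minlength=n_cells)
    offsets = np.concatenate(([0], np.cumsum(counts)[:-1]))
    order = np.empty_like(cell_ids)
    cursor = offsets.copy()
    for i, c in enumerate(cell_ids):
        order[cursor[c]] = i
        cursor[c] += 1
    return order            # permutation placing particles in cell order

cells = np.array([2, 0, 1, 0, 2])
print(sort_particles_by_cell(cells, 3))   # -> [1 3 2 0 4]
```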
Lai, Victor K.; Lake, Spencer P.; Frey, Christina R.; Tranquillo, Robert T.; Barocas, Victor H.
2012-01-01
Fibrin and collagen, biopolymers occurring naturally in the body, are biomaterials commonly used as scaffolds for tissue engineering. How collagen and fibrin interact to confer macroscopic mechanical properties in collagen-fibrin composite systems remains poorly understood. In this study, we formulated collagen-fibrin co-gels at different collagen-to-fibrin ratios to observe changes in the overall mechanical behavior and microstructure. A modeling framework of a two-network system was developed by modifying our micro-scale model, considering two forms of interaction between the networks: (a) two interpenetrating but noninteracting networks (“parallel”), and (b) a single network consisting of randomly alternating collagen and fibrin fibrils (“series”). Mechanical testing of our gels shows that collagen-fibrin co-gels exhibit intermediate properties (UTS, strain at failure, tangent modulus) compared to those of pure collagen and fibrin. The comparison with model predictions shows that the parallel and series model cases provide upper and lower bounds, respectively, for the experimental data, suggesting that a combination of such interactions exists between the collagen and fibrin in co-gels. A transition from the series model to the parallel model occurs with increasing collagen content, with the series model best describing predominantly fibrin co-gels, and the parallel model best describing predominantly collagen co-gels. PMID:22482659
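A hedged sketch of the classical parallel (Voigt, iso-strain) and series (Reuss, iso-stress) mixture rules, which give the kind of upper and lower bounds described above; the moduli and volume fraction in the example are illustrative values, not measurements from the study.

```python
# Hedged sketch: Voigt ("parallel") upper bound and Reuss ("series") lower bound
# on the effective modulus of a two-component composite.
def voigt(f1, E1, E2):
    return f1 * E1 + (1 - f1) * E2          # iso-strain (parallel) bound

def reuss(f1, E1, E2):
    return 1.0 / (f1 / E1 + (1 - f1) / E2)  # iso-stress (series) bound

E_collagen, E_fibrin, phi = 5.0, 0.05, 0.3  # illustrative moduli and collagen fraction
print(reuss(phi, E_collagen, E_fibrin), voigt(phi, E_collagen, E_fibrin))
```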
NASA Astrophysics Data System (ADS)
Park, George Ilhwan; Moin, Parviz
2016-01-01
This paper focuses on numerical and practical aspects associated with a parallel implementation of a two-layer zonal wall model for large-eddy simulation (LES) of compressible wall-bounded turbulent flows on unstructured meshes. A zonal wall model based on the solution of unsteady three-dimensional Reynolds-averaged Navier-Stokes (RANS) equations on a separate near-wall grid is implemented in an unstructured, cell-centered finite-volume LES solver. The main challenge in its implementation is to couple two parallel, unstructured flow solvers for efficient boundary data communication and simultaneous time integrations. A coupling strategy with good load balancing and low processor underutilization is identified. Face mapping and interpolation procedures at the coupling interface are explained in detail. The method of manufactured solution is used for verifying the correct implementation of solver coupling, and parallel performance of the combined wall-modeled LES (WMLES) solver is investigated. The method has successfully been applied to several attached and separated flows, including a transitional flow over a flat plate and a separated flow over an airfoil at an angle of attack.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel
Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from being compute-bound DGEMM's to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load-imbalance, tasking and bulk synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
Gyroscope precession along bound equatorial plane orbits around a Kerr black hole
NASA Astrophysics Data System (ADS)
Bini, Donato; Geralico, Andrea; Jantzen, Robert T.
2016-09-01
The precession of a test gyroscope along stable bound equatorial plane orbits around a Kerr black hole is analyzed, and the precession angular velocity of the gyro's parallel transported spin vector and the increment in the precession angle after one orbital period are evaluated. The parallel transported Marck frame which enters this discussion is shown to have an elegant geometrical explanation in terms of the electric and magnetic parts of the Killing-Yano 2-form and a Wigner rotation effect.
Dark/visible parallel universes and Big Bang nucleosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertulani, C. A.; Frederico, T.; Fuqua, J.
We develop a model for visible matter-dark matter interaction based on the exchange of a massive gray boson called herein the Mulato. Our model hinges on the assumption that all known particles in the visible matter have their counterparts in the dark matter. We postulate six families of particles, five of which are dark. This leads to the unavoidable postulation of six parallel worlds, the visible one and five invisible worlds. A close study of big bang nucleosynthesis (BBN), baryon asymmetries, cosmic microwave background (CMB) bounds, and galaxy dynamics, together with the Standard Model assumptions, helps us to set a limit on the mass and width of the new gauge boson. Modification of the statistics underlying the kinetic energy distribution of particles during the BBN is also discussed. The changes in reaction rates during the BBN due to a departure from the Debye-Hueckel electron screening model are also investigated.
Framework for analysis of guaranteed QOS systems
NASA Astrophysics Data System (ADS)
Chaudhry, Shailender; Choudhary, Alok
1997-01-01
Multimedia data is isochronous in nature and entails managing and delivering high volumes of data. Multiprocessors, with their large processing power, vast memory, and fast interconnects, are an ideal candidate for the implementation of multimedia applications. Initially, multiprocessors were designed to execute scientific programs and thus their architecture was optimized to provide low message latency and efficiently support regular communication patterns. Hence, they have a regular network topology and most use wormhole routing. The design offers the benefits of a simple router, small buffer size, and network latency that is almost independent of path length. Among the various multimedia applications, a video on demand (VOD) server is well-suited for implementation using parallel multiprocessors. Logical models for VOD servers are presently mapped onto multiprocessors. Our paper provides a framework for calculating bounds on utilization of system resources with which QoS parameters for each isochronous stream can be guaranteed. The effects of the multiprocessor architecture, and the efficiency of various logical models and mappings on particular architectures, can be investigated within our framework. Our framework is based on rigorous proofs and provides tight bounds. The results obtained may be used as the basis for admission control tests. To illustrate the versatility of our framework, we provide bounds on utilization for various logical models applied to mesh-connected architectures for a video on demand server. Our results show that wormhole routing can lead to packets waiting for transmission of other packets that apparently share no common resources. This situation is analogous to head-of-the-line blocking. We find that the provision of multiple VCs per link and multiple flit buffers improves utilization (even under guaranteed QoS parameters). This is analogous to parallel iterative matching.
Making almost commuting matrices commute
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hastings, Matthew B
Suppose two Hermitian matrices A, B almost commute (‖[A,B]‖ ≤ δ). Are they close to a commuting pair of Hermitian matrices, A', B', with ‖A−A'‖, ‖B−B'‖ ≤ ε? A theorem of H. Lin shows that this is uniformly true, in that for every ε > 0 there exists a δ > 0, independent of the size N of the matrices, for which almost commuting implies being close to a commuting pair. However, this theorem does not specify how δ depends on ε. We give uniform bounds relating δ and ε. The proof is constructive, giving an explicit algorithm to construct A' and B'. We provide tighter bounds in the case of block tridiagonal and tridiagonal matrices. Within the context of quantum measurement, this implies an algorithm to construct a basis in which we can make a projective measurement that approximately measures two approximately commuting operators simultaneously. Finally, we comment briefly on the case of approximately measuring three or more approximately commuting operators using POVMs (positive operator-valued measures) instead of projective measurements.
Munguia, Lluis-Miquel; Oxberry, Geoffrey; Rajan, Deepak
2016-05-01
Stochastic mixed-integer programs (SMIPs) deal with optimization under uncertainty at many levels of the decision-making process. When solved as extensive formulation mixed-integer programs, problem instances can exceed available memory on a single workstation. In order to overcome this limitation, we present PIPS-SBB: a distributed-memory parallel stochastic MIP solver that takes advantage of parallelism at multiple levels of the optimization process. We also show promising results on the SIPLIB benchmark by combining methods known for accelerating Branch and Bound (B&B) methods with new ideas that leverage the structure of SMIPs. Finally, we expect the performance of PIPS-SBB to improve further as more functionality is added in the future.
Temporal Precedence Checking for Switched Models and its Application to a Parallel Landing Protocol
NASA Technical Reports Server (NTRS)
Duggirala, Parasara Sridhar; Wang, Le; Mitra, Sayan; Viswanathan, Mahesh; Munoz, Cesar A.
2014-01-01
This paper presents an algorithm for checking temporal precedence properties of nonlinear switched systems. This class of properties subsumes bounded safety and captures requirements about visiting a sequence of predicates within given time intervals. The algorithm handles nonlinear predicates that arise from dynamics-based predictions used in alerting protocols for state-of-the-art transportation systems. It is sound and complete for nonlinear switched systems that robustly satisfy the given property. The algorithm is implemented in the Compare Execute Check Engine (C2E2) using validated simulations. As a case study, a simplified model of an alerting system for closely spaced parallel runways is considered. The proposed approach is applied to this model to check safety properties of the alerting logic for different operating conditions such as initial velocities, bank angles, aircraft longitudinal separation, and runway separation.
Low, R; Pothérat, A
2015-05-01
We investigate aspects of low-magnetic-Reynolds-number flow between two parallel, perfectly insulating walls in the presence of an imposed magnetic field parallel to the bounding walls. We find a functional basis to describe the flow, well adapted to the problem of finding the attractor dimension and which is also used in subsequent direct numerical simulation of these flows. For given Reynolds and Hartmann numbers, we obtain an upper bound for the dimension of the attractor by means of known bounds on the nonlinear inertial term and this functional basis for the flow. Three distinct flow regimes emerge: a quasi-isotropic three-dimensional (3D) flow, a nonisotropic 3D flow, and a 2D flow. We find the transition curves between these regimes in the space parametrized by Hartmann number Ha and attractor dimension d(att). We find how the attractor dimension scales as a function of Reynolds and Hartmann numbers (Re and Ha) in each regime. We also investigate the thickness of the boundary layer along the bounding wall and find that in all regimes this scales as 1/Re, independently of the value of Ha, unlike Hartmann boundary layers found when the field is normal to the channel. The structure of the set of least dissipative modes is indeed quite different between these two cases but the properties of turbulence far from the walls (smallest scales and number of degrees of freedom) are found to be very similar.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel W.
Coupled-cluster methods provide highly accurate models of molecular structure by explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix-matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy efficient manner. We achieve up to 240× speedup compared with the best optimized shared memory implementation. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 & XC40, BlueGene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from being compute-bound DGEMM's to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load-imbalance. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
Two-dimensional global hybrid simulation of pressure evolution and waves in the magnetosheath
NASA Astrophysics Data System (ADS)
Lin, Y.; Denton, R. E.; Lee, L. C.; Chao, J. K.
2001-06-01
A two-dimensional hybrid simulation is carried out for the global structure of the magnetosheath. Quasi-perpendicular magnetosonic/fast mode waves with large-amplitude in-phase oscillations of the magnetic field and the ion density are seen near the bow shock transition. Alfvén/ion-cyclotron waves are observed along the streamlines in the magnetosheath, and the wave power peaks in the middle magnetosheath. Antiphase oscillations in the magnetic field and density are present away from the shock transition. Transport ratio analysis suggests that these oscillations result from mirror mode waves. Since fluid simulations are currently best able to model the global magnetosphere and the pressure in the magnetosphere is inherently anisotropic (parallel pressure p∥ ≠ perpendicular pressure p⊥), it is of some interest to see if a fluid model can be used to predict the anisotropic pressure evolution of a plasma. Here the predictions of double adiabatic theory, the bounded anisotropy model, and the double polytropic model are tested using the two-dimensional hybrid simulation of the magnetosheath. Inputs to the models from the hybrid simulation are the initial post bow shock pressures and the time-dependent density and magnetic field strength along streamlines of the plasma. The success of the models is evaluated on the basis of how well they predict the subsequent evolution of p∥ and p⊥. The bounded anisotropy model, which incorporates a bound on p⊥/p∥ due to the effect of ion cyclotron pitch angle scattering, does a very good job of predicting the evolution of p⊥; this is evidence that local transfer of energy due to waves is occurring. Further evidence is the positive identification of ion-cyclotron waves in the simulation. The lack of such a good prediction for the evolution of p∥ appears to be due to the model's lack of time dependence for the wave-particle interaction and its neglect of the parallel heat flux. Estimates indicate that these effects will be less significant in the real magnetosheath, though perhaps not negligible.
Modeling interface shear behavior of granular materials using micro-polar continuum approach
NASA Astrophysics Data System (ADS)
Ebrahimian, Babak; Noorzad, Ali; Alsaleh, Mustafa I.
2018-01-01
Recently, the authors have focused on the shear behavior of the interface between a granular soil body and the very rough surface of a moving bounding structure. For this purpose, they have used the finite element method and a micro-polar elasto-plastic continuum model. They have shown that the boundary conditions assumed along the interface have a strong influence on the soil behavior. While in the previous studies only very rough bounding interfaces have been taken into account, the present investigation focuses on rough, medium rough and relatively smooth interfaces. In this regard, plane monotonic shearing of an infinitely extended narrow granular soil layer is simulated under constant vertical pressure and free dilatancy. The soil layer is located between two parallel rigid boundaries with different surface roughness values. Particular attention is paid to the effect of the surface roughness of the top and bottom boundaries on the shear behavior of the granular soil layer. It is shown that the interaction between the roughness of the bounding structure surface and the rotation resistance of the bounding grains can be modeled in a reasonable manner through the considered Cosserat boundary conditions. The influence of surface roughness on the soil shear strength mobilized along the interface, as well as on the location and evolution of the shear localization formed within the layer, is investigated. The obtained numerical results have been qualitatively compared with experimental observations as well as DEM simulations, and acceptable agreement is shown.
NASA Astrophysics Data System (ADS)
Mapakshi, N. K.; Chang, J.; Nakshatrala, K. B.
2018-04-01
Mathematical models for flow through porous media typically enjoy the so-called maximum principles, which place bounds on the pressure field. It is highly desirable to preserve these bounds on the pressure field in predictive numerical simulations, that is, one needs to satisfy discrete maximum principles (DMP). Unfortunately, many of the existing formulations for flow through porous media models do not satisfy DMP. This paper presents a robust, scalable numerical formulation based on variational inequalities (VI), to model non-linear flows through heterogeneous, anisotropic porous media without violating DMP. VI is an optimization technique that places bounds on the numerical solutions of partial differential equations. To crystallize the ideas, a modification to Darcy equations by taking into account pressure-dependent viscosity will be discretized using the lowest-order Raviart-Thomas (RT0) and Variational Multi-scale (VMS) finite element formulations. It will be shown that these formulations violate DMP, and, in fact, these violations increase with an increase in anisotropy. It will be shown that the proposed VI-based formulation provides a viable route to enforce DMP. Moreover, it will be shown that the proposed formulation is scalable, and can work with any numerical discretization and weak form. A series of numerical benchmark problems are solved to demonstrate the effects of heterogeneity, anisotropy and non-linearity on DMP violations under the two chosen formulations (RT0 and VMS), and that of non-linearity on solver convergence for the proposed VI-based formulation. Parallel scalability on modern computational platforms will be illustrated through strong-scaling studies, which will prove the efficiency of the proposed formulation in a parallel setting. Algorithmic scalability as the problem size is scaled up will be demonstrated through novel static-scaling studies. The performed static-scaling studies can serve as a guide for users to be able to select an appropriate discretization for a given problem size.
Force Generation by Membrane-Associated Myosin-I
Pyrpassopoulos, Serapion; Arpağ, Göker; Feeser, Elizabeth A.; Shuman, Henry; Tüzel, Erkan; Ostap, E. Michael
2016-01-01
Vertebrate myosin-IC (Myo1c) is a type-1 myosin that links cell membranes to the cytoskeleton via its actin-binding motor domain and its phosphatidylinositol 4,5-bisphosphate (PtdIns(4,5)P2)-binding tail domain. While it is known that Myo1c bound to PtdIns(4,5)P2 in fluid-lipid bilayers can propel actin filaments in an unloaded motility assay, its ability to develop forces against external load on actin while bound to fluid bilayers has not been explored. Using optical tweezers, we measured the diffusion coefficient of single membrane-bound Myo1c molecules by force-relaxation experiments, and the ability of ensembles of membrane-bound Myo1c molecules to develop and sustain forces. To interpret our results, we developed a computational model that recapitulates the basic features of our experimental ensemble data and suggests that Myo1c ensembles can generate forces parallel to lipid bilayers, with larger forces achieved when the myosin works away from the plane of the membrane or when anchored to slowly diffusing regions. PMID:27156719
Blow-up of weak solutions to a chemotaxis system under influence of an external chemoattractant
NASA Astrophysics Data System (ADS)
Black, Tobias
2016-06-01
We study nonnegative radially symmetric solutions of the parabolic-elliptic Keller-Segel whole-space system u_t = Δu − ∇·(u∇v), x ∈ R^n, t > 0; 0 = Δv + u + f(x), x ∈ R^n, t > 0; u(x,0) = u_0(x), x ∈ R^n, with prototypical external signal production f(x) := f_0|x|^(−α) if |x| ≤ R − ρ and f(x) := 0 if |x| ≥ R + ρ, for R ∈ (0,1) and ρ ∈ (0, R/2), which is still integrable but not of class L^(n/2+δ_0)(R^n) for some δ_0 ∈ [0,1). For corresponding parabolic-parabolic Neumann-type boundary-value problems in bounded domains Ω, where f ∈ L^(n/2+δ_0)(Ω) ∩ C^α(Ω) for some δ_0 ∈ (0,1) and α ∈ (0,1), it is known that the system does not emit blow-up solutions if the quantities ‖u_0‖_{L^(n/2+δ_0)(Ω)}, ‖f‖_{L^(n/2+δ_0)(Ω)} and ‖v_0‖_{L^θ(Ω)}, for some θ > n, are all bounded by some ε > 0 small enough. We will show that whenever f_0 > (2n/α)(n−2)(n−α) and u_0 ≡ c_0 > 0 in the closure of B_1(0), a measure-valued global-in-time weak solution to the system above can be constructed which blows up immediately. Since these conditions are independent of R ∈ (0,1) and c_0 > 0, we obtain a strong indication that in fact δ_0 = 0 is critical for the existence of global bounded solutions under a smallness condition as described above.
Exploiting Multiple Levels of Parallelism in Sparse Matrix-Matrix Multiplication
Azad, Ariful; Ballard, Grey; Buluc, Aydin; ...
2016-11-08
Sparse matrix-matrix multiplication (or SpGEMM) is a key primitive for many high-performance graph algorithms as well as for some linear solvers, such as algebraic multigrid. The scaling of existing parallel implementations of SpGEMM is heavily bound by communication. Even though 3D (or 2.5D) algorithms have been proposed and theoretically analyzed in the flat MPI model on Erdös-Rényi matrices, those algorithms had not been implemented in practice and their complexities had not been analyzed for the general case. In this work, we present the first implementation of the 3D SpGEMM formulation that exploits multiple (intranode and internode) levels of parallelism, achieving significant speedups over the state-of-the-art publicly available codes at all levels of concurrencies. We extensively evaluate our implementation and identify bottlenecks that should be subject to further research.
Hines, Michael L; Eichner, Hubert; Schürmann, Felix
2008-08-01
Neuron tree topology equations can be split into two subtrees and solved on different processors with no change in accuracy, stability, or computational effort; communication costs involve only sending and receiving two double precision values by each subtree at each time step. Splitting cells is useful in attaining load balance in neural network simulations, especially when there is a wide range of cell sizes and the number of cells is about the same as the number of processors. For compute-bound simulations load balance results in almost ideal runtime scaling. Application of the cell splitting method to two published network models exhibits good runtime scaling on twice as many processors as could be effectively used with whole-cell balancing.
A class of parallel algorithms for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms with topological variation on a two-dimensional processor array, with nearest-neighbor connection, and with cardinality variation on a linear processor array. An efficient parallel/pipeline algorithm for the linear array was also developed, but at significantly higher efficiency.
MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data
NASA Astrophysics Data System (ADS)
Key, Kerry
2016-10-01
This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data balancing normalization weights for the joint inversion of two or more data sets encourages the inversion to fit each data type equally well. A synthetic joint inversion of marine CSEM and MT data illustrates the algorithm's performance and parallel scaling on up to 480 processing cores. CSEM inversion of data from the Middle America Trench offshore Nicaragua demonstrates a real world application. The source code and MATLAB interface tools are freely available at http://mare2dem.ucsd.edu.
An iterative method for systems of nonlinear hyperbolic equations
NASA Technical Reports Server (NTRS)
Scroggs, Jeffrey S.
1989-01-01
An iterative algorithm for the efficient solution of systems of nonlinear hyperbolic equations is presented. Parallelism is evident at several levels. In the formation of the iteration, the equations are decoupled, thereby providing large-grain parallelism. Parallelism may also be exploited within the solves for each equation. Convergence of the iteration is established via a bounding function argument. Experimental results in two dimensions are presented.
High-Frequency Replanning Under Uncertainty Using Parallel Sampling-Based Motion Planning
Sun, Wen; Patil, Sachin; Alterovitz, Ron
2015-01-01
As sampling-based motion planners become faster, they can be re-executed more frequently by a robot during task execution to react to uncertainty in robot motion, obstacle motion, sensing noise, and uncertainty in the robot’s kinematic model. We investigate and analyze high-frequency replanning (HFR), where, during each period, fast sampling-based motion planners are executed in parallel as the robot simultaneously executes the first action of the best motion plan from the previous period. We consider discrete-time systems with stochastic nonlinear (but linearizable) dynamics and observation models with noise drawn from zero mean Gaussian distributions. The objective is to maximize the probability of success (i.e., avoid collision with obstacles and reach the goal) or to minimize path length subject to a lower bound on the probability of success. We show that, as parallel computation power increases, HFR offers asymptotic optimality for these objectives during each period for goal-oriented problems. We then demonstrate the effectiveness of HFR for holonomic and nonholonomic robots including car-like vehicles and steerable medical needles. PMID:26279645
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vydyanathan, Naga; Krishnamoorthy, Sriram; Sabin, Gerald M.
2009-08-01
Complex parallel applications can often be modeled as directed acyclic graphs of coarse-grained application-tasks with dependences. These applications exhibit both task- and data-parallelism, and combining these two (also called mixed parallelism) has been shown to be an effective model for their execution. In this paper, we present an algorithm to compute the appropriate mix of task- and data-parallelism required to minimize the parallel completion time (makespan) of these applications. In other words, our algorithm determines the set of tasks that should be run concurrently and the number of processors to be allocated to each task. The processor allocation and scheduling decisions are made in an integrated manner and are based on several factors such as the structure of the task graph, the runtime estimates and scalability characteristics of the tasks, and the inter-task data communication volumes. A locality conscious scheduling strategy is used to improve inter-task data reuse. Evaluation through simulations and actual executions of task graphs derived from real applications as well as synthetic graphs shows that our algorithm consistently generates schedules with lower makespan as compared to CPR and CPA, two previously proposed scheduling algorithms. Our algorithm also produces schedules that have lower makespan than pure task- and data-parallel schedules. For task graphs with known optimal schedules or lower bounds on the makespan, our algorithm generates schedules that are closer to the optima than other scheduling approaches.
Stem thrust prediction model for W-K-M double wedge parallel expanding gate valves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldiwany, B.; Alvarez, P.D.; Wolfe, K.
1996-12-01
An analytical model for determining the required valve stem thrust during opening and closing strokes of W-K-M parallel expanding gate valves was developed as part of the EPRI Motor-Operated Valve Performance Prediction Methodology (EPRI MOV PPM) Program. The model was validated against measured stem thrust data obtained from in-situ testing of three W-K-M valves. Model predictions show favorable, bounding agreement with the measured data for valves with Stellite 6 hardfacing on the disks and seat rings for water flow in the preferred flow direction (gate downstream). The maximum required thrust to open and to close the valve (excluding wedging and unwedging forces) occurs at a slightly open position and not at the fully closed position. In the nonpreferred flow direction, the model shows that premature wedging can occur during ΔP closure strokes even when the coefficients of friction at different sliding surfaces are within the typical range. This paper summarizes the model description and comparison against test data.
View looking SW at brick retaining wall running parallel to ...
View looking SW at brick retaining wall running parallel to Jones Street showing bricked up storage vaults - Central of Georgia Railway, Savannah Repair Shops & Terminal Facilities, Brick Storage Vaults under Jones Street, Bounded by West Broad, Jones, West Boundary & Hull Streets, Savannah, Chatham County, GA
Comments on Samal and Henderson: Parallel consistent labeling algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swain, M.J.
Samal and Henderson claim that any parallel algorithm for enforcing arc consistency in the worst case must have Ω(na) sequential steps, where n is the number of nodes and a is the number of labels per node. The authors argue that Samal and Henderson's argument makes assumptions about how processors are used and give a counterexample that enforces arc consistency in a constant number of steps using O(n^2 a^2 2^(na)) processors. It is possible that the lower bound holds for a polynomial number of processors; if such a lower bound were to be proven it would answer an important open question in theoretical computer science concerning the relation between the complexity classes P and NC. The strongest existing lower bound for the arc consistency problem states that it cannot be solved in polynomial log time unless P = NC.
Wald, Ingo; Ize, Santiago
2015-07-28
Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
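The two-phase partitioning described above maps naturally onto a small script. The sketch below is a minimal illustration, not the patented method itself: it assumes a hypothetical 1-D grid split into equal portions and objects given as axis-aligned extents, and it runs both phases (tagging objects with the portions that bound them, then populating each portion) as independent per-processor tasks.

```python
from concurrent.futures import ProcessPoolExecutor

# Hypothetical toy setting: the "grid" is the interval [0, 100) split into
# N_PORTIONS equal portions, and each "object" is an interval (lo, hi) that
# may overlap several portions.  Both phases below are embarrassingly parallel.
N_PORTIONS = 4
GRID_MIN, GRID_MAX = 0.0, 100.0
PORTION_WIDTH = (GRID_MAX - GRID_MIN) / N_PORTIONS


def portions_bounding(obj):
    """Phase 1 task: which grid portions at least partially bound this object?"""
    lo, hi = obj
    first = max(0, int((lo - GRID_MIN) // PORTION_WIDTH))
    last = min(N_PORTIONS - 1, int((hi - GRID_MIN) // PORTION_WIDTH))
    return list(range(first, last + 1))


def populate_portion(args):
    """Phase 2 task: collect every object bounded by the given portion."""
    portion, tagged_objects = args
    return portion, [obj for obj, ports in tagged_objects if portion in ports]


if __name__ == "__main__":
    objects = [(3.0, 7.5), (24.0, 26.0), (49.0, 81.0), (90.0, 99.0)]
    with ProcessPoolExecutor(max_workers=N_PORTIONS) as pool:
        # Phase 1: each worker handles a distinct set of objects.
        tagged = list(zip(objects, pool.map(portions_bounding, objects)))
        # Phase 2: each worker populates a distinct grid portion.
        work = [(p, tagged) for p in range(N_PORTIONS)]
        grid = dict(pool.map(populate_portion, work))
    print(grid)
```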
A three-dimensional spectral algorithm for simulations of transition and turbulence
NASA Technical Reports Server (NTRS)
Zang, T. A.; Hussaini, M. Y.
1985-01-01
A spectral algorithm for simulating three-dimensional, incompressible, parallel shear flows is described. It applies to the channel, to the parallel boundary layer, and to other shear flows with one wall-bounded and two periodic directions. Representative applications to the channel and to the heated boundary layer are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gadomsky, O. N., E-mail: gadomsky@mail.ru; Shchukarev, I. A., E-mail: blacxpress@gmail.com
2016-08-15
It is shown that external optical radiation in the 450–1200 nm range can be efficiently transformed under the action of bounded light beams to a surface wave that propagates along the external and internal boundaries of a plane-parallel layer with a quasi-zero refractive index. Reflection regimes with complex and real angles of refraction in the layer are considered. The layer with a quasi-zero refractive index in this boundary problem is located on a highly reflective metal substrate; it is shown that the uniform low reflection of light is achieved in the wavelength range under study.
A unifying framework for rigid multibody dynamics and serial and parallel computational issues
NASA Technical Reports Server (NTRS)
Fijany, Amir; Jain, Abhinandan
1989-01-01
A unifying framework for various formulations of the dynamics of open-chain rigid multibody systems is discussed. Their suitability for serial and parallel processing is assessed. The framework is based on the derivation of intrinsic, i.e., coordinate-free, equations of the algorithms, which provides a suitable abstraction and permits a distinction to be made between the computational redundancy in the intrinsic and extrinsic equations. A set of spatial notation is used which allows the derivation of the various algorithms in a common setting and thus clarifies the relationships among them. The three classes of algorithms, viz. O(n), O(n^2), and O(n^3), for the solution of the dynamics problem are investigated. Researchers begin with the derivation of the O(n^3) algorithms based on the explicit computation of the mass matrix, which provides insight into the underlying basis of the O(n) algorithms. From a computational perspective, the optimal choice of a coordinate frame for the projection of the intrinsic equations is discussed and the serial computational complexity of the different algorithms is evaluated. The three classes of algorithms are also analyzed for suitability for parallel processing. It is shown that the problem belongs to the class NC and that the time and processor bounds are O(log^2(n)) and O(n^4), respectively. However, the algorithm that achieves these bounds is not stable. Researchers show that the fastest stable parallel algorithm achieves a computational complexity of O(n) with O(n^2) processors, and results from the parallelization of the O(n^3) serial algorithm.
NASA Astrophysics Data System (ADS)
Huang, J. D.; Liu, J. J.; Chen, Q. X.; Mao, N.
2017-06-01
Against a background of heat-treatment operations in mould manufacturing, a two-stage flow-shop scheduling problem is described for minimizing makespan with parallel batch-processing machines and re-entrant jobs. The weights and release dates of jobs are non-identical, but job processing times are equal. A mixed-integer linear programming model is developed and tested with small-scale scenarios. Given that the problem is NP hard, three heuristic construction methods with polynomial complexity are proposed. The worst case of the new constructive heuristic is analysed in detail. A method for computing lower bounds is proposed to test heuristic performance. Heuristic efficiency is tested with sets of scenarios. Compared with the two improved heuristics, the performance of the new constructive heuristic is superior.
Sutherland, John C
2017-04-15
Linear dichroism provides information on the orientation of chromophores that are part of, or bound to, an orientable molecule such as DNA. For molecular alignment induced by hydrodynamic shear, the principal axes orthogonal to the direction of alignment are not equivalent. Thus, the magnitude of the flow-induced change in absorption for light polarized parallel to the direction of flow can be more than a factor of two greater than the corresponding change for light polarized perpendicular to both that direction and the shear axis. The ratio of the two flow-induced changes in absorption, the dichroic increment ratio, is characterized using the orthogonal orientation model, which assumes that each absorbing unit is aligned parallel to one of the principal axes of the apparatus. The absorption of the alignable molecules is characterized by components parallel and perpendicular to the orientable axis of the molecule. The dichroic increment ratio indicates that for the alignment of DNA in rectangular flow cells, average alignment is not uniaxial, but for higher shear, as produced in a Couette cell, it can be. The results from the simple model are identical to tensor models for typical experimental configurations. Approaches for measuring the dichroic increment ratio with modern dichrometers are discussed. Copyright © 2017. Published by Elsevier Inc.
Collisionless slow shocks in magnetotail reconnection
NASA Astrophysics Data System (ADS)
Cremer, Michael; Scholer, Manfred
The kinetic structure of collisionless slow shocks in the magnetotail is studied by solving the Riemann problem of the collapse of a current sheet with a normal magnetic field component using 2-D hybrid simulations. The collapse results in a current layer with a hot isotropic distribution and backstreaming ions in a boundary layer. The lobe plasma outside and within the boundary layer exhibits a large perpendicular to parallel temperature anisotropy. Waves in both regions propagate parallel to the magnetic field. In a second experiment a spatially limited high density beam is injected into a low beta background plasma and the subsequent wave excitation is studied. A model for slow shocks bounding the reconnection layer in the magnetotail is proposed where backstreaming ions first excite obliquely propagating waves by the electromagnetic ion/ion cyclotron instability, which lead to perpendicular heating. The T⊥/T∥ temperature anisotropy subsequently excites parallel propagating Alfvén ion cyclotron waves, which are convected into the slow shock and are refracted in the downstream region.
NASA Astrophysics Data System (ADS)
Wang, Ting; Plecháč, Petr
2017-12-01
Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.
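As a point of reference for the bistability being sampled, the sketch below runs independent Gillespie (SSA) replicas of the Schlögl model and reports their end states. This is plain independent-replica sampling, not the parallel replica method with its dephasing and decorrelation steps, and the rate constants are values commonly quoted in the literature for the bistable regime, not necessarily those used in the paper.

```python
import random

# Schlögl model with buffered species B1, B2 held constant (commonly used
# bistable parameter set; an assumption, not taken from the paper).
C1, C2, C3, C4 = 3e-7, 1e-4, 1e-3, 3.5
B1, B2 = 1e5, 2e5


def propensities(x):
    return [
        C1 * B1 * x * (x - 1) / 2.0,       # B1 + 2X -> 3X
        C2 * x * (x - 1) * (x - 2) / 6.0,  # 3X -> B1 + 2X
        C3 * B2,                           # B2 -> X
        C4 * x,                            # X -> B2
    ]


def ssa_replica(x0, t_end, rng):
    """One Gillespie trajectory; returns the copy number of X at t_end."""
    x, t = x0, 0.0
    while True:
        a = propensities(x)
        a0 = sum(a)
        if a0 == 0.0:
            return x
        t += rng.expovariate(a0)
        if t > t_end:
            return x
        r, acc, j = rng.random() * a0, 0.0, 0
        while acc + a[j] < r:
            acc += a[j]
            j += 1
        x += (+1, -1, +1, -1)[j]


if __name__ == "__main__":
    rng = random.Random(0)
    finals = [ssa_replica(x0=250, t_end=5.0, rng=rng) for _ in range(10)]
    # Samples tend to gather near the two metastable branches of the model.
    print(sorted(finals))
```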
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erdmann, Thorsten; Albert, Philipp J.; Schwarz, Ulrich S.
2013-11-07
Non-processive molecular motors have to work together in ensembles in order to generate appreciable levels of force or movement. In skeletal muscle, for example, hundreds of myosin II molecules cooperate in thick filaments. In non-muscle cells, by contrast, small groups with few tens of non-muscle myosin II motors contribute to essential cellular processes such as transport, shape changes, or mechanosensing. Here we introduce a detailed and analytically tractable model for this important situation. Using a three-state crossbridge model for the myosin II motor cycle and exploiting the assumptions of fast power stroke kinetics and equal load sharing between motors in equivalent states, we reduce the stochastic reaction network to a one-step master equation for the binding and unbinding dynamics (parallel cluster model) and derive the rules for ensemble movement. We find that for constant external load, ensemble dynamics is strongly shaped by the catch bond character of myosin II, which leads to an increase of the fraction of bound motors under load and thus to firm attachment even for small ensembles. This adaptation to load results in a concave force-velocity relation described by a Hill relation. For external load provided by a linear spring, myosin II ensembles dynamically adjust themselves towards an isometric state with constant average position and load. The dynamics of the ensembles is now determined mainly by the distribution of motors over the different kinds of bound states. For increasing stiffness of the external spring, there is a sharp transition beyond which myosin II can no longer perform the power stroke. Slow unbinding from the pre-power-stroke state protects the ensembles against detachment.
Analysis and Modeling of Parallel Photovoltaic Systems under Partial Shading Conditions
NASA Astrophysics Data System (ADS)
Buddala, Santhoshi Snigdha
Since the industrial revolution, fossil fuels such as petroleum, coal, oil, and natural gas, along with other non-renewable energy sources, have been used as the primary energy source. The consumption of fossil fuels releases harmful byproduct gases into the atmosphere that deplete protective atmospheric layers and affect the overall environmental balance. Fossil fuels are also finite resources, and their rapid depletion has prompted the need to investigate alternative sources of energy, called renewable energy. One such promising source of renewable energy is solar/photovoltaic energy. This work focuses on investigating a new solar array architecture with solar cells connected in a parallel configuration. By retaining the structural simplicity of the parallel architecture, a theoretical small-signal model of the solar cell is proposed and used to analyze the variations in the module parameters when subjected to partial shading conditions. Simulations were run in SPICE to validate the model implemented in Matlab. The voltage limitations of the proposed architecture are addressed by adopting a simple dc-dc boost converter and evaluating the performance of the architecture in terms of efficiency in comparison with traditional architectures. SPICE simulations are used to compare the architectures and identify the best one in terms of power conversion efficiency under partial shading conditions.
NASA Astrophysics Data System (ADS)
Zuza, A. V.; Yin, A.; Lin, J. C.
2015-12-01
Parallel evenly-spaced strike-slip faults are prominent in the southern San Andreas fault system, as well as other settings along plate boundaries (e.g., the Alpine fault) and within continental interiors (e.g., the North Anatolian, central Asian, and northern Tibetan faults). In southern California, the parallel San Jacinto, Elsinore, Rose Canyon, and San Clemente faults to the west of the San Andreas are regularly spaced at ~40 km. In the Eastern California Shear Zone, east of the San Andreas, faults are spaced at ~15 km. These characteristic spacings provide unique mechanical constraints on how the faults interact. Despite the common occurrence of parallel strike-slip faults, the fundamental questions of how and why these fault systems form remain unanswered. We address this issue by using the stress shadow concept of Lachenbruch (1961)—developed to explain extensional joints by using the stress-free condition on the crack surface—to present a mechanical analysis of the formation of parallel strike-slip faults that relates fault spacing and brittle-crust thickness to fault strength, crustal strength, and the crustal stress state. We discuss three independent models: (1) a fracture mechanics model, (2) an empirical stress-rise function model embedded in a plastic medium, and (3) an elastic-plate model. The assumptions and predictions of these models are quantitatively tested using scaled analogue sandbox experiments that show that strike-slip fault spacing is linearly related to the brittle-crust thickness. We derive constraints on the mechanical properties of the southern San Andreas strike-slip faults and fault-bounded crust (e.g., local fault strength and crustal/regional stress) given the observed fault spacing and brittle-crust thickness, which is obtained by defining the base of the seismogenic zone with high-resolution earthquake data. Our models allow direct comparison of the parallel faults in the southern San Andreas system with other similar strike-slip fault systems, both on Earth and throughout the solar system (e.g., the Tiger Stripe Fractures on Enceladus).
ERIC Educational Resources Information Center
Goldhaber, Dan; Long, Mark C.; Person, Ann E.; Rooklyn, Jordan
2017-01-01
We investigate factors influencing student sign-ups for Washington State's College Bound Scholarship (CBS) program. We find a substantial share of eligible middle school students fail to sign the CBS, forgoing college financial aid. Student characteristics associated with signing the scholarship parallel characteristics of low-income students who…
A novel, bounding gait in swimming turtles: implications for aquatic locomotor diversity.
Mayerl, Christopher J; Blob, Richard W
2017-10-15
Turtles are an iconic lineage in studies of animal locomotion, typifying the use of slow, alternating footfalls during walking. Alternating movements of contralateral limbs are also typical during swimming gaits for most freshwater turtles. Here, we report a novel gait in turtles, in which the pleurodire Emydura subglobosa swims using a bounding gait that coordinates bilateral protraction of both forelimbs with bilateral retraction of both hindlimbs. Use of this bounding gait is correlated with increased limb excursion and decreased stride frequency, but not increased velocity when compared with standard swimming strokes. Bounding by E. subglobosa provides a second example of a non-mammalian lineage that can use bounding gaits, and may give insight into the evolution of aquatic flapping. Parallels in limb muscle fascicle properties between bounding turtles and crocodylids suggest a possible musculoskeletal mechanism underlying the use of bounding gaits in particular lineages. © 2017. Published by The Company of Biologists Ltd.
Design of a bounded wave EMP (Electromagnetic Pulse) simulator
NASA Astrophysics Data System (ADS)
Sevat, P. A. A.
1989-06-01
Electromagnetic Pulse (EMP) simulators are used to simulate the EMP generated by a nuclear weapon and to harden equipment against the effects of EMP. At present, DREO has a 1 m EMP simulator for testing computer-terminal-size equipment. To develop the R and D capability for testing larger objects, such as a helicopter, a much bigger threat-level facility is required. This report concerns the design of a bounded wave EMP simulator suitable for testing large equipment. Different types of simulators are described and their pros and cons are discussed. A bounded wave parallel-plate type simulator is chosen for its efficiency and minimal environmental impact. Detailed designs are given for 6 m and 10 m parallel-plate wire-grid simulators. Electromagnetic fields inside and outside the simulators are computed. Preliminary specifications for a pulse generator required for the simulator are also given. Finally, the electromagnetic fields radiated from the simulator are computed and discussed.
High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away
NASA Astrophysics Data System (ADS)
Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.
2012-09-01
By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. These collaborations investigated two exemplar large-scale science-driver workflow applications: 1) Calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths by placing data from multiple surveys on a common plate scale and co-registering all the pixels; 2) Calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons-learned for continuing development. Applicability of Cloud Computing: Commercial Cloud providers generally charge for all operations, including processing, transfer of input and output data, and for storage of data, and so the costs of running applications vary widely according to how they use resources. The cloud is well suited to processing CPU-bound (and memory bound) workflows such as the periodogram code, given the relatively low cost of processing in comparison with I/O operations. I/O-bound applications such as Montage perform best on high-performance clusters with fast networks and parallel file-systems. Science-driven Cyberinfrastructure: Montage has been widely used as a driver application to develop workflow management services, such as task scheduling in distributed environments, designing fault tolerance techniques for job schedulers, and developing workflow orchestration techniques. Running Parallel Applications Across Distributed Cloud Environments: Data processing will eventually take place in parallel distributed across cyber infrastructure environments having different architectures. We have used the Pegasus Work Management System (WMS) to successfully run applications across three very different environments: TeraGrid, OSG (Open Science Grid), and FutureGrid. Provisioning resources across different grids and clouds (also referred to as Sky Computing), involves establishing a distributed environment, where issues of, e.g, remote job submission, data management, and security need to be addressed. This environment also requires building virtual machine images that can run in different environments. Usually, each cloud provides basic images that can be customized with additional software and services. 
In most of our work, we provisioned compute resources using a custom application, called Wrangler. Pegasus WMS abstracts the architectures of the compute environments away from the end-user, and can be considered a first-generation tool suitable for scientists to run their applications on disparate environments.
Volumes and intrinsic diameters of hypersurfaces
NASA Astrophysics Data System (ADS)
Paeng, Seong-Hun
2015-09-01
We estimate the volume and the intrinsic diameter of a hypersurface M using geometric information about a hypersurface parallel to M at distance T. This can be applied to the Riemannian Penrose inequality to obtain a lower bound on the total mass of a spacetime. It can also be used to obtain upper bounds on the volume and the intrinsic diameter of the celestial r-sphere without a lower bound on the sectional curvature. We extend our results to metric-measure spaces by using the Bakry-Émery Ricci tensor.
Parallel constraint satisfaction in memory-based decisions.
Glöckner, Andreas; Hodges, Sara D
2011-01-01
Three studies sought to investigate decision strategies in memory-based decisions and to test the predictions of the parallel constraint satisfaction (PCS) model for decision making (Glöckner & Betsch, 2008). Time pressure was manipulated and the model was compared against simple heuristics (take the best and equal weight) and a weighted additive strategy. From PCS we predicted that fast intuitive decision making is based on compensatory information integration and that decision time increases and confidence decreases with increasing inconsistency in the decision task. In line with these predictions we observed a predominant usage of compensatory strategies under all time-pressure conditions and even with decision times as short as 1.7 s. For a substantial number of participants, choices and decision times were best explained by PCS, but there was also evidence for use of simple heuristics. The time-pressure manipulation did not significantly affect decision strategies. Overall, the results highlight intuitive, automatic processes in decision making and support the idea that human information-processing capabilities are less severely bounded than often assumed.
Memory-Scalable GPU Spatial Hierarchy Construction.
Qiming Hou; Xin Sun; Kun Zhou; Lauterbach, C; Manocha, D
2011-04-01
Recent GPU algorithms for constructing spatial hierarchies have achieved promising performance for moderately complex models by using the breadth-first search (BFS) construction order. While being able to exploit the massive parallelism on the GPU, the BFS order also consumes excessive GPU memory, which becomes a serious issue for interactive applications involving very complex models with more than a few million triangles. In this paper, we propose to use the partial breadth-first search (PBFS) construction order to control memory consumption while maximizing performance. We apply the PBFS order to two hierarchy construction algorithms. The first algorithm is for kd-trees that automatically balances between the level of parallelism and intermediate memory usage. With PBFS, peak memory consumption during construction can be efficiently controlled without costly CPU-GPU data transfer. We also develop memory allocation strategies to effectively limit memory fragmentation. The resulting algorithm scales well with GPU memory and constructs kd-trees of models with millions of triangles at interactive rates on GPUs with 1 GB memory. Compared with existing algorithms, our algorithm is an order of magnitude more scalable for a given GPU memory bound. The second algorithm is for out-of-core bounding volume hierarchy (BVH) construction for very large scenes based on the PBFS construction order. At each iteration, all constructed nodes are dumped to the CPU memory, and the GPU memory is freed for the next iteration's use. In this way, the algorithm is able to build trees that are too large to be stored in the GPU memory. Experiments show that our algorithm can construct BVHs for scenes with up to 20 M triangles, several times larger than previous GPU algorithms.
A model for cytoplasmic rheology consistent with magnetic twisting cytometry.
Butler, J P; Kelly, S M
1998-01-01
Magnetic twisting cytometry is gaining wide applicability as a tool for the investigation of the rheological properties of cells and the mechanical properties of receptor-cytoskeletal interactions. Current technology involves the application and release of magnetically induced torques on small magnetic particles bound to or inside cells, with measurements of the resulting angular rotation of the particles. The properties of purely elastic or purely viscous materials can be determined by the angular strain and strain rate, respectively. However, the cytoskeleton and its linkage to cell surface receptors display elastic, viscous, and even plastic deformation, and the simultaneous characterization of these properties using only elastic or viscous models is internally inconsistent. Data interpretation is complicated by the fact that in current technology, the applied torques are not constant in time, but decrease as the particles rotate. This paper describes an internally consistent model consisting of a parallel viscoelastic element in series with a parallel viscoelastic element, and one approach to quantitative parameter evaluation. The unified model reproduces all essential features seen in data obtained from a wide variety of cell populations, and contains the pure elastic, viscoelastic, and viscous cases as subsets.
CUDA Optimization Strategies for Compute- and Memory-Bound Neuroimaging Algorithms
Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W.
2011-01-01
As neuroimaging algorithms and technology continue to grow faster than CPU performance in complexity and image resolution, data-parallel computing methods will be increasingly important. The high performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms for the CUDA architecture. For compute-bound algorithms, the registers are reduced through variable reuse via shared memory and the data throughput is increased through heavier thread workloads and maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved through reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance are optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm. PMID:21159404
Efficient Bounding Schemes for the Two-Center Hybrid Flow Shop Scheduling Problem with Removal Times
Hidri, Lotfi; Gharbi, Anis; Louly, Mohamed Aly
2014-01-01
We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. The job removal time is the required duration to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-Hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases. The first one is a constructive phase in which an initial feasible solution is provided, while the second phase is an improvement one. Intensive computational experiments have been conducted to confirm the good performance of the proposed procedures. PMID:25610911
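To illustrate the flavor of the bounding building block invoked here (the parallel-machine relaxation with release and delivery times), the sketch below computes two classical valid lower bounds and a simple list-scheduling upper bound for identical parallel machines with release dates and delivery times. These are generic textbook bounds under assumed job data, not the specific procedures of the paper.

```python
import heapq


def lower_bound(jobs, m):
    """jobs: list of (release r, processing p, delivery q).  Two classical
    valid lower bounds on the makespan with m identical parallel machines."""
    lb_single = max(r + p + q for r, p, q in jobs)
    lb_load = (min(r for r, _, _ in jobs)
               + sum(p for _, p, _ in jobs) / m
               + min(q for _, _, q in jobs))
    return max(lb_single, lb_load)


def list_schedule(jobs, m):
    """Simple upper bound: whenever a machine becomes free, start the
    available job with the largest delivery time (a common priority rule)."""
    machines = [0.0] * m          # next free time per machine
    heapq.heapify(machines)
    pending = sorted(jobs)        # sorted by release date
    makespan = 0.0
    while pending:
        clock = max(heapq.heappop(machines), pending[0][0])
        available = [j for j in pending if j[0] <= clock]
        job = max(available, key=lambda j: j[2])   # largest delivery time first
        pending.remove(job)
        finish = clock + job[1]
        heapq.heappush(machines, finish)
        makespan = max(makespan, finish + job[2])
    return makespan


jobs = [(0, 5, 7), (2, 3, 4), (1, 8, 2), (4, 6, 5), (0, 2, 9)]
print(lower_bound(jobs, m=2), list_schedule(jobs, m=2))
```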
Soliton interactions and complexes for coupled nonlinear Schrödinger equations.
Jiang, Yan; Tian, Bo; Liu, Wen-Jun; Sun, Kun; Li, Min; Wang, Pan
2012-03-01
Under investigation in this paper are the coupled nonlinear Schrödinger (CNLS) equations, which can be used to govern the optical-soliton propagation and interaction in such optical media as the multimode fibers, fiber arrays, and birefringent fibers. By taking the 3-CNLS equations as an example for the N-CNLS ones (N≥3), we derive the analytic mixed-type two- and three-soliton solutions in more general forms than those obtained in the previous studies with the Hirota method and symbolic computation. With the choice of parameters for those soliton solutions, soliton interactions and complexes are investigated through the asymptotic and graphic analysis. Soliton interactions and complexes with the bound dark solitons in a mode or two modes are observed, including that (i) the two bright solitons display the breatherlike structures while the two dark ones stay parallel, (ii) the two bright and dark solitons all stay parallel, and (iii) the states of the bound solitons change from the breatherlike structures to the parallel one even with the distance between those solitons smaller than that before the interaction with the regular one soliton. Asymptotic analysis is also used to investigate the elastic and inelastic interactions between the bound solitons and the regular one soliton. Furthermore, some discussions are extended to the N-CNLS equations (N>3). Our results might be helpful in such applications as the soliton switch, optical computing, and soliton amplification in the nonlinear optics.
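For orientation, a commonly used integrable (Manakov-type) form of the N-coupled nonlinear Schrödinger equations is reproduced below; the normalization and coefficients in the paper may differ from this generic form.

```latex
i\,\frac{\partial q_j}{\partial z}
  + \frac{1}{2}\,\frac{\partial^2 q_j}{\partial t^2}
  + \left(\sum_{k=1}^{N} |q_k|^2\right) q_j = 0,
  \qquad j = 1,\dots,N,
```

where q_j is the complex envelope in the j-th mode, z is the propagation distance, and t is the retarded time.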
Applications and accuracy of the parallel diagonal dominant algorithm
NASA Technical Reports Server (NTRS)
Sun, Xian-He
1993-01-01
The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems, the symmetric, and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and the algorithm is a good candidate for the emerging massively parallel machines.
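As background for what each partition of such a solver must do locally, the sketch below is the standard serial Thomas algorithm for a diagonally dominant tridiagonal system. It is not the PDD algorithm itself, which additionally solves a small reduced interface system to couple the partitions and distributes the work across processors.

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a (a[0] unused),
    diagonal b, super-diagonal c (c[-1] unused) and right-hand side d.
    Assumes diagonal dominance, as in the PDD setting."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x


# Example: a small symmetric Toeplitz system of the kind analyzed in the paper.
n = 6
a = [0.0] + [1.0] * (n - 1)   # sub-diagonal
b = [4.0] * n                 # main diagonal
c = [1.0] * (n - 1) + [0.0]   # super-diagonal
d = [1.0] * n
print(thomas_solve(a, b, c, d))
```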
NASA Astrophysics Data System (ADS)
Cruz Jiménez, Miriam Guadalupe; Meyer Baese, Uwe; Jovanovic Dolecek, Gordana
2017-12-01
New theoretical lower bounds for the number of operators needed in fixed-point constant multiplication blocks are presented. The multipliers are constructed with the shift-and-add approach, where every arithmetic operation is pipelined, and with the generalization that n-input pipelined additions/subtractions are allowed, along with pure pipelining registers. These lower bounds, tighter than the state-of-the-art theoretical limits, are particularly useful in early design stages for a quick assessment in the hardware utilization of low-cost constant multiplication blocks implemented in the newest families of field programmable gate array (FPGA) integrated circuits.
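A much looser, classical counting argument illustrates where such lower bounds come from (this is not the tighter pipelined-operator bound of the paper): each two-input add/subtract can at most double the number of signed-power-of-two terms in a partial result, so a constant whose minimal signed-digit (CSD/NAF) representation has N nonzero digits needs at least ceil(log2 N) operations.

```python
import math


def naf_nonzero_digits(n):
    """Number of nonzero digits in the non-adjacent form (canonical signed
    digit representation) of a positive integer n."""
    count = 0
    while n > 0:
        if n & 1:
            digit = 2 - (n % 4)   # +1 or -1
            n -= digit
            count += 1
        n //= 2
    return count


def classical_adder_lower_bound(constant):
    """ceil(log2(#nonzero CSD digits)): a simple lower bound on the number
    of two-input add/subtract operators in a shift-and-add multiplier."""
    nz = naf_nonzero_digits(constant)
    return 0 if nz <= 1 else math.ceil(math.log2(nz))


for c in (3, 45, 2**16 - 1, 10000):
    print(c, naf_nonzero_digits(c), classical_adder_lower_bound(c))
```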
NASA Astrophysics Data System (ADS)
Chacon, Luis; Del-Castillo-Negrete, Diego; Hauck, Cory
2012-10-01
Modeling electron transport in magnetized plasmas is extremely challenging due to the extreme anisotropy between parallel (to the magnetic field) and perpendicular directions (χ∥/χ⊥ ~ 10^10 in fusion plasmas). Recently, a Lagrangian Green's function approach, developed for the purely parallel transport case [D. del-Castillo-Negrete, L. Chacón, PRL 106, 195004 (2011); D. del-Castillo-Negrete, L. Chacón, Phys. Plasmas 19, 056112 (2012)], has been extended to the anisotropic transport case in the tokamak-ordering limit with constant density [L. Chacón, D. del-Castillo-Negrete, C. Hauck, JCP, submitted (2012)]. An operator-split algorithm is proposed that allows one to treat Eulerian and Lagrangian components separately. The approach is shown to feature bounded numerical errors for arbitrary χ∥/χ⊥ ratios, which renders it asymptotic-preserving. In this poster, we will present the generalization of the Lagrangian approach to arbitrary magnetic fields. We will demonstrate the potential of the approach with various challenging configurations, including the case of transport across a magnetic island in cylindrical geometry.
Solving very large, sparse linear systems on mesh-connected parallel computers
NASA Technical Reports Server (NTRS)
Opsahl, Torstein; Reif, John
1987-01-01
The implementation of Pan and Reif's Parallel Nested Dissection (PND) algorithm on mesh connected parallel computers is described. This is the first known algorithm that allows very large, sparse linear systems of equations to be solved efficiently in polylog time using a small number of processors. How the processor bound of PND can be matched to the number of processors available on a given parallel computer by slowing down the algorithm by constant factors is described. Also, for the important class of problems where G(A) is a grid graph, a unique memory mapping that reduces the inter-processor communication requirements of PND to those that can be executed on mesh connected parallel machines is detailed. A description of an implementation on the Goodyear Massively Parallel Processor (MPP), located at Goddard is given. Also, a detailed discussion of data mappings and performance issues is given.
Sun, Wei; Chou, Chih-Ping; Stacy, Alan W; Ma, Huiyan; Unger, Jennifer; Gallaher, Peggy
2007-02-01
Cronbach's α is widely used in social science research to estimate the internal consistency reliability of a measurement scale. However, when items are not strictly parallel, the Cronbach's α coefficient provides a lower-bound estimate of true reliability, and this estimate may be further biased downward when items are dichotomous. The estimation of standardized Cronbach's α for a scale with dichotomous items can be improved by using the upper bound of coefficient phi. SAS and SPSS macros have been developed in this article to obtain standardized Cronbach's α via this method. The simulation analysis showed that Cronbach's α from upper-bound phi might be appropriate for estimating the real reliability when standardized Cronbach's α is problematic.
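For reference, the sketch below computes raw and standardized Cronbach's α for a subjects-by-items score matrix. The upper-bound-phi correction for dichotomous items proposed in the article is specific to its SAS/SPSS macros and is not reproduced here; the simulated data are purely illustrative.

```python
import numpy as np


def cronbach_alpha(scores):
    """Raw Cronbach's alpha for an (n_subjects x k_items) score matrix."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)


def standardized_alpha(scores):
    """Standardized alpha from the mean inter-item correlation r_bar."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]
    r = np.corrcoef(x, rowvar=False)
    r_bar = (r.sum() - k) / (k * (k - 1))   # mean off-diagonal correlation
    return k * r_bar / (1.0 + (k - 1) * r_bar)


# Tiny example with dichotomous (0/1) items, the case where raw alpha tends
# to underestimate the true reliability.
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
items = (latent[:, None] + rng.normal(scale=1.0, size=(200, 5)) > 0).astype(int)
print(cronbach_alpha(items), standardized_alpha(items))
```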
A discrete decentralized variable structure robotic controller
NASA Technical Reports Server (NTRS)
Tumeh, Zuheir S.
1989-01-01
A decentralized trajectory controller for robotic manipulators is designed and tested using a multiprocessor architecture and a PUMA 560 robot arm. The controller is made up of a nominal model-based component and a correction component based on a variable structure suction control approach. The second control component is designed using bounds on the difference between the used and actual values of the model parameters. Since the continuous manipulator system is digitally controlled along a trajectory, a discretized equivalent model of the manipulator is used to derive the controller. The motivation for decentralized control is that the derived algorithms can be executed in parallel using a distributed, relatively inexpensive, architecture where each joint is assigned a microprocessor. Nonlinear interaction and coupling between joints is treated as a disturbance torque that is estimated and compensated for.
Premnath, Kannan N; Pattison, Martin J; Banerjee, Sanjoy
2009-02-01
In this paper, we present a framework based on the generalized lattice Boltzmann equation (GLBE) using multiple relaxation times with forcing term for eddy capturing simulation of wall-bounded turbulent flows. Due to its flexibility in using disparate relaxation times, the GLBE is well suited to maintaining numerical stability on coarser grids and in obtaining improved solution fidelity of near-wall turbulent fluctuations. The subgrid scale (SGS) turbulence effects are represented by the standard Smagorinsky eddy viscosity model, which is modified by using the van Driest wall-damping function to account for reduction of turbulent length scales near walls. In order to be able to simulate a wider class of problems, we introduce forcing terms, which can represent the effects of general nonuniform forms of forces, in the natural moment space of the GLBE. Expressions for the strain rate tensor used in the SGS model are derived in terms of the nonequilibrium moments of the GLBE to include such forcing terms, which comprise a generalization of those presented in a recent work [Yu, Comput. Fluids 35, 957 (2006)]. Variable resolutions are introduced into this extended GLBE framework through a conservative multiblock approach. The approach, whose optimized implementation is also discussed, is assessed for two canonical flow problems bounded by walls, viz., fully developed turbulent channel flow at a shear or friction Reynolds number (Re) of 183.6 based on the channel half-width and three-dimensional (3D) shear-driven flows in a cubical cavity at a Re of 12 000 based on the side length of the cavity. Comparisons of detailed computed near-wall turbulent flow structure, given in terms of various turbulence statistics, with available data, including those from direct numerical simulations (DNS) and experiments showed good agreement. The GLBE approach also exhibited markedly better stability characteristics and avoided spurious near-wall turbulent fluctuations on coarser grids when compared with the single-relaxation-time (SRT)-based approach. Moreover, its implementation showed excellent parallel scalability on a large parallel cluster with over a thousand processors.
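The subgrid-scale closure used above is compact enough to write down. The sketch below evaluates the Smagorinsky eddy viscosity with van Driest wall damping from a resolved strain-rate tensor; the constants Cs = 0.1 and A+ = 26 are typical assumed values, and the moment-space evaluation of the strain rate inside the GLBE is not reproduced.

```python
import numpy as np

CS = 0.1        # Smagorinsky constant (typical value; assumption)
A_PLUS = 26.0   # van Driest damping constant


def smagorinsky_nu_t(strain_tensor, delta, y_plus):
    """Eddy viscosity nu_t = (Cs * delta * D)^2 * |S|, with the van Driest
    damping factor D = 1 - exp(-y+/A+) reducing the length scale near walls.

    strain_tensor : (3, 3) resolved strain-rate tensor S_ij
    delta         : filter width (e.g. grid spacing)
    y_plus        : wall distance in viscous units
    """
    s = np.asarray(strain_tensor, dtype=float)
    s_mag = np.sqrt(2.0 * np.sum(s * s))            # |S| = sqrt(2 S_ij S_ij)
    damping = 1.0 - np.exp(-y_plus / A_PLUS)
    return (CS * delta * damping) ** 2 * s_mag


# Example: pure shear du/dy = 100 1/s, grid spacing 1e-3, two wall distances.
S = np.zeros((3, 3))
S[0, 1] = S[1, 0] = 0.5 * 100.0
for yp in (5.0, 200.0):
    print(yp, smagorinsky_nu_t(S, delta=1e-3, y_plus=yp))
```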
Interactive collision detection for deformable models using streaming AABBs.
Zhang, Xinyu; Kim, Young J
2007-01-01
We present an interactive and accurate collision detection algorithm for deformable, polygonal objects based on the streaming computational model. Our algorithm can detect all possible pairwise primitive-level intersections between two severely deforming models at highly interactive rates. In our streaming computational model, we consider a set of axis-aligned bounding boxes (AABBs) that bound each of the given deformable objects as an input stream and perform massively parallel pairwise overlap tests on the incoming streams. As a result, we are able to prevent performance stalls in the streaming pipeline that can be caused by the expensive indexing mechanism required by bounding volume hierarchy-based streaming algorithms. At runtime, as the underlying models deform over time, we employ a novel streaming algorithm to update the geometric changes in the AABB streams. Moreover, in order to get only the computed result (i.e., collision results between AABBs) without reading back the entire output streams, we propose a streaming en/decoding strategy that can be performed in a hierarchical fashion. After determining overlapped AABBs, we perform a primitive-level (e.g., triangle) intersection checking on a serial computational model such as CPUs. We implemented the entire pipeline of our algorithm using off-the-shelf graphics processors (GPUs), such as nVIDIA GeForce 7800 GTX, for streaming computations, and Intel Dual Core 3.4G processors for serial computations. We benchmarked our algorithm with different models of varying complexities, ranging from 15K up to 50K triangles, under various deformation motions, and observed timings of approximately 30-100 FPS depending on the complexity of the models and their relative configurations. Finally, we made comparisons with a well-known GPU-based collision detection algorithm, CULLIDE [4], and observed about three times performance improvement over the earlier approach. We also made comparisons with a software-based AABB culling algorithm [2] and observed about two times improvement.
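The core primitive of that streaming pipeline is the AABB overlap test itself. A minimal CPU-side sketch is given below: a brute-force all-pairs broad phase, not the GPU streaming en/decoding of the paper, with purely illustrative boxes.

```python
def aabb_overlap(a, b):
    """Two AABBs overlap iff their extents overlap on every axis.
    An AABB is ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))


def overlapping_pairs(boxes_a, boxes_b):
    """Brute-force broad phase between two deformable models' AABB sets;
    surviving pairs would go on to exact triangle-level tests."""
    return [(i, j)
            for i, a in enumerate(boxes_a)
            for j, b in enumerate(boxes_b)
            if aabb_overlap(a, b)]


box1 = ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
box2 = ((0.5, 0.5, 0.5), (2.0, 2.0, 2.0))
box3 = ((3.0, 3.0, 3.0), (4.0, 4.0, 4.0))
print(overlapping_pairs([box1], [box2, box3]))   # [(0, 0)]
```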
Runtime verification of embedded real-time systems.
Reinbacher, Thomas; Függer, Matthias; Brauer, Jörg
We present a runtime verification framework that allows on-line monitoring of past-time Metric Temporal Logic (ptMTL) specifications in a discrete time setting. We design observer algorithms for the time-bounded modalities of ptMTL, which take advantage of the highly parallel nature of hardware designs. The algorithms can be translated into efficient hardware blocks, which are designed for reconfigurability and thus facilitate applications of the framework in both the prototyping and the post-deployment phase of embedded real-time systems. We provide formal correctness proofs for all presented observer algorithms and analyze their time and space complexity. For example, for the most general operator considered, the time-bounded Since operator, we obtain a time complexity that is doubly logarithmic both in the point in time the operator is executed and the operator's time bounds. This result is promising with respect to a self-contained, non-interfering monitoring approach that evaluates real-time specifications in parallel to the system-under-test. We implement our framework on a Field Programmable Gate Array platform and use extensive simulation and logic synthesis runs to assess the benefits of the approach in terms of resource usage and operating frequency.
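To make the time-bounded Since operator concrete, the sketch below is a naive O(b)-per-step reference observer over finite Boolean traces. It is not the efficient doubly-logarithmic observer of the paper, and conventions for the interval bounds vary between formalizations; the traces are illustrative.

```python
def since_bounded(phi, psi, a, b):
    """Naive observer for the time-bounded past operator (phi S_[a,b] psi).

    At step n the formula holds iff there is an m with a <= n - m <= b such
    that psi held at m and phi held at every k with m < k <= n.
    """
    out = []
    for n in range(len(phi)):
        holds = False
        for m in range(max(0, n - b), n - a + 1):
            if psi[m] and all(phi[k] for k in range(m + 1, n + 1)):
                holds = True
                break
        out.append(holds)
    return out


phi = [True, True, True, False, True, True]
psi = [False, True, False, False, False, False]
print(since_bounded(phi, psi, a=0, b=3))
# -> [False, True, True, False, False, False]
```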
Fisher information and Cramér-Rao lower bound for experimental design in parallel imaging.
Bouhrara, Mustapha; Spencer, Richard G
2018-06-01
The Cramér-Rao lower bound (CRLB) is widely used in the design of magnetic resonance (MR) experiments for parameter estimation. Previous work has considered only Gaussian or Rician noise distributions in this calculation. However, the noise distribution for multi-coil acquisitions, such as in parallel imaging, obeys the noncentral χ-distribution under many circumstances. The purpose of this paper is to present the CRLB calculation for parameter estimation from multi-coil acquisitions. We perform explicit calculations of Fisher matrix elements and the associated CRLB for noise distributions following the noncentral χ-distribution. The special case of diffusion kurtosis is examined as an important example. For comparison with analytic results, Monte Carlo (MC) simulations were conducted to evaluate experimental minimum standard deviations (SDs) in the estimation of diffusion kurtosis model parameters. Results were obtained for a range of signal-to-noise ratios (SNRs), and for both the conventional case of Gaussian noise distribution and noncentral χ-distribution with different numbers of coils, m. At low-to-moderate SNR, the noncentral χ-distribution deviates substantially from the Gaussian distribution. Our results indicate that this departure is more pronounced for larger values of m. As expected, the minimum SDs (i.e., CRLB) in derived diffusion kurtosis model parameters assuming a noncentral χ-distribution provided a closer match to the MC simulations as compared to the Gaussian results. Estimates of minimum variance for parameter estimation and experimental design provided by the CRLB must account for the noncentral χ-distribution of noise in multi-coil acquisitions, especially in the low-to-moderate SNR regime. Magn Reson Med 79:3249-3255, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
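To make the quantity concrete, the sketch below computes the Fisher information matrix and the resulting CRLB for a generic signal model under the simple Gaussian-noise assumption (the point of the paper being that multi-coil, noncentral-χ noise modifies these expressions). The monoexponential model and its parameters are illustrative, not the diffusion kurtosis model of the study.

```python
import numpy as np


def fisher_gaussian(model, theta, x, sigma, eps=1e-6):
    """Fisher information matrix for y_i = model(x_i, theta) + N(0, sigma^2),
    using central finite differences for the sensitivities d mu / d theta."""
    theta = np.asarray(theta, dtype=float)
    p = theta.size
    J = np.empty((x.size, p))
    for k in range(p):
        dt = np.zeros(p)
        dt[k] = eps
        J[:, k] = (model(x, theta + dt) - model(x, theta - dt)) / (2 * eps)
    return J.T @ J / sigma**2


def crlb(model, theta, x, sigma):
    """Lower bound on the variance of any unbiased estimator of each parameter."""
    return np.diag(np.linalg.inv(fisher_gaussian(model, theta, x, sigma)))


# Illustrative monoexponential decay S0 * exp(-b * D) sampled at a few b-values.
decay = lambda b, th: th[0] * np.exp(-b * th[1])
b_values = np.array([0.0, 0.25, 0.5, 1.0, 2.0])
print(np.sqrt(crlb(decay, theta=[1.0, 0.8], x=b_values, sigma=0.02)))
```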
Stratovolcano stability assessment methods and results from Citlaltepetl, Mexico
Zimbelman, D.R.; Watters, R.J.; Firth, I.R.; Breit, G.N.; Carrasco-Nunez, Gerardo
2004-01-01
Citlaltépetl volcano is the easternmost stratovolcano in the Trans-Mexican Volcanic Belt. Situated within 110 km of Veracruz, it has experienced two major collapse events and, subsequent to its last collapse, rebuilt a massive, symmetrical summit cone. To enhance hazard mitigation efforts we assess the stability of Citlaltépetl's summit cone, the area thought most likely to fail during a potential massive collapse event. Through geologic mapping, alteration mineralogy, geotechnical studies, and stability modeling we provide important constraints on the likelihood, location, and size of a potential collapse event. The volcano's summit cone is young, highly fractured, and hydrothermally altered. Fractures are most abundant within 5–20-m wide zones defined by multiple parallel to subparallel fractures. Alteration is most pervasive within the fracture systems and includes acid sulfate, advanced argillic, argillic, and silicification ranks. Fractured and altered rocks both have significantly reduced rock strengths, representing likely bounding surfaces for future collapse events. The fracture systems and altered rock masses occur non-uniformly, as an orthogonal set with N–S and E–W trends. Because these surfaces occur non-uniformly, hazards associated with collapse are unevenly distributed about the volcano. Depending on uncertainties in bounding surfaces, but constrained by detailed field studies, potential failure volumes are estimated to range between 0.04–0.5 km3. Stability modeling was used to assess potential edifice failure events. Modeled failure of the outer portion of the cone initially occurs as an "intact block" bounded by steeply dipping joints and outwardly dipping flow contacts. As collapse progresses, more of the inner cone fails and the outer "intact" block transforms into a collection of smaller blocks. Eventually, a steep face develops in the uppermost and central portion of the cone. This modeled failure morphology mimics collapse amphitheaters
On k-ary n-cubes: Theory and applications
NASA Technical Reports Server (NTRS)
Mao, Weizhen; Nicol, David M.
1994-01-01
Many parallel processing networks can be viewed as graphs called k-ary n-cubes, whose special cases include rings, hypercubes and toruses. In this paper, combinatorial properties of k-ary n-cubes are explored. In particular, the problem of characterizing the subgraph of a given number of nodes with the maximum edge count is studied. These theoretical results are then used to compute a lower bounding function in branch-and-bound partitioning algorithms and to establish the optimality of some irregular partitions.
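As an illustration of the graph family discussed above (a generic construction for checking small cases, not code from the paper), the nodes and edges of a k-ary n-cube can be enumerated directly:

```python
from itertools import product

def kary_ncube_edges(k: int, n: int):
    """Enumerate edges of a k-ary n-cube: nodes are length-n tuples over {0..k-1},
    and two nodes are adjacent when they differ by +/-1 (mod k) in exactly one coordinate."""
    edges = set()
    for node in product(range(k), repeat=n):
        for dim in range(n):
            neighbor = list(node)
            neighbor[dim] = (neighbor[dim] + 1) % k
            edges.add(frozenset((node, tuple(neighbor))))
    return edges

# k=4, n=2 is a 4x4 torus; k=2, n=3 is the ordinary 3-dimensional hypercube.
print(len(kary_ncube_edges(4, 2)))  # 32 edges
print(len(kary_ncube_edges(2, 3)))  # 12 edges
```

For k = 2 the +1 and -1 neighbours coincide, so the construction reduces to the hypercube, and for n = 2 it gives a k × k torus, matching the special cases named in the abstract.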
Makran Mountain Range, Indus River Valley, Pakistan, India
NASA Technical Reports Server (NTRS)
1984-01-01
The enormous geologic pressures exerted by continental drift are well illustrated by the long, northward-curving, parallel folded mountain ridges and valleys of the coastal Makran Range of Pakistan (27.0N, 66.0E). As a result of the collision of the northward-bound Indian subcontinent with the Asian continent, the east-west parallel ranges have been bent into a great northward arc, forming the Indus River valley at the interface of the collision.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munoz, F. D.; Hobbs, B. F.; Watson, J. -P.
2016-02-01
A novel two-phase bounding and decomposition approach to compute optimal and near-optimal solutions to large-scale mixed-integer investment planning problems is proposed; it considers a large number of operating subproblems, each of which is a convex optimization. Our motivating application is the planning of power transmission and generation in which policy constraints are designed to incentivize high amounts of intermittent generation in electric power systems. The bounding phase exploits Jensen’s inequality to define a lower bound, which we extend to stochastic programs that use expected-value constraints to enforce policy objectives. The decomposition phase, in which the bounds are tightened, improves upon the standard Benders’ algorithm by accelerating the convergence of the bounds. The lower bound is tightened by using a Jensen’s inequality-based approach to introduce an auxiliary lower bound into the Benders master problem. Upper bounds for both phases are computed using a sub-sampling approach executed on a parallel computer system. Numerical results show that only the bounding phase is necessary if loose optimality gaps are acceptable, but the decomposition phase is required to attain tight optimality gaps. Moreover, use of both phases performs better, in terms of convergence speed, than attempting to solve the problem using just the bounding phase or regular Benders decomposition separately.
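The Jensen's-inequality bound mentioned above can be stated generically for a recourse (operating-cost) function Q that is convex in the uncertain data ξ; the notation here is illustrative rather than the authors':

$$ \mathbb{E}_{\xi}\!\left[\,Q(x,\xi)\,\right] \;\ge\; Q\!\left(x,\mathbb{E}[\xi]\right) \qquad \text{for } Q(x,\cdot) \text{ convex in } \xi, $$

so solving the planning problem with the operating scenarios replaced by their expectation yields a valid lower bound on the optimal expected cost.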
Multiprogramming performance degradation - Case study on a shared memory multiprocessor
NASA Technical Reports Server (NTRS)
Dimpsey, R. T.; Iyer, R. K.
1989-01-01
The performance degradation due to multiprogramming overhead is quantified for a parallel-processing machine. Measurements of real workloads were taken, and it was found that there is a moderate correlation between the completion time of a program and the amount of system overhead measured during program execution. Experiments in controlled environments were then conducted to calculate a lower bound on the performance degradation of parallel jobs caused by multiprogramming overhead. The results show that the multiprogramming overhead of parallel jobs consumes at least 4 percent of the processor time. When two or more serial jobs are introduced into the system, this amount increases to 5.3 percent.
A massively asynchronous, parallel brain.
Zeki, Semir
2015-05-19
Whether the visual brain uses a parallel or a serial, hierarchical, strategy to process visual signals, the end result appears to be that different attributes of the visual scene are perceived asynchronously--with colour leading form (orientation) by 40 ms and direction of motion by about 80 ms. Whatever the neural root of this asynchrony, it creates a problem that has not been properly addressed, namely how visual attributes that are perceived asynchronously over brief time windows after stimulus onset are bound together in the longer term to give us a unified experience of the visual world, in which all attributes are apparently seen in perfect registration. In this review, I suggest that there is no central neural clock in the (visual) brain that synchronizes the activity of different processing systems. More likely, activity in each of the parallel processing-perceptual systems of the visual brain is reset independently, making of the brain a massively asynchronous organ, just like the new generation of more efficient computers promise to be. Given the asynchronous operations of the brain, it is likely that the results of activities in the different processing-perceptual systems are not bound by physiological interactions between cells in the specialized visual areas, but post-perceptually, outside the visual brain.
The interaction of moderately strong shock waves with thick perforated walls of low porosity
NASA Technical Reports Server (NTRS)
Grant, D. J.
1972-01-01
A theoretical prediction is given of the flow through thick perforated walls of low porosity resulting from the impingement of a moderately strong traveling shock wave. The model was a flat plate positioned normal to the direction of the flow. Holes bored in the plate parallel to the direction of the flow provided nominal hole length-to-diameter ratios of 10:1 and an axial porosity of 25 percent of the flow channel cross section. The flow field behind the reflected shock wave was assumed to behave as a reservoir producing a quasi-steady duct flow through the model. Rayleigh and Fanno duct flow theoretical computations for each of three possible auxiliary wave patterns that can be associated with the transmitted shock (to satisfy contact surface compatibility) were used to provide bounding solutions as an alternative to the more complex influence coefficients method. Qualitative and quantitative behavior was verified in a 1.5- by 2.0-in. helium shock tube. High speed Schlieren photography, piezoelectric pressure-time histories, and electronic-counter wave speed measurements were used to assess the extent of correlation with the theoretical flow models. Reduced data indicated the adequacy of the bounding theory approach to predict wave phenomena and quantitative response.
NASA Astrophysics Data System (ADS)
Campos-Enriquez, J. O.; Zambrana Arias, X.; Keppie, D.; Ramón Márquez, V.
2012-12-01
Regional scale models have been proposed for the Nicaraguan depression: 1) parallel rifting of the depression (and volcanic front) due to roll back of the underlying subducted Cocos plate; 2) right-lateral strike-slip faulting parallel to the depression and locally offset by pull-apart basins; 3) right-lateral strike-slip faulting parallel to the depression and offset by left-lateral transverse or bookshelf faults. At an intermediate scale, Funk et al. (2011) interpret the depression as half-graben-type structures. The E-W Airport graben lies in the southeastern part of the Managua graben (Nicaragua), across which the active Central American volcanic arc is dextrally offset, possibly the result of a subducted transform fault where the subduction angle changes. The Managua graben lies within the late Quaternary Nicaragua depression produced by backarc rifting during roll back of the Middle American Trench. The Managua graben formed as a pull-apart rift associated with dextral bookshelf faulting during dextral shear between the forearc and arc and is the locus of two historical, large earthquakes that destroyed the city of Managua. In order to assess future earthquake risk, four E-W gravity and magnetic profiles were undertaken to determine the structure of the Airport graben, which is bounded by the Cofradia and Airport fault zones, to the east and west, respectively. These data indicated the presence of a series of normal faults bounding down-thrown and up-thrown fault blocks and a listric normal fault, the Sabana Grande Fault. The models imply that this area has been subjected to tectonic extension. These faults appear to be part of the bookshelf suite and will probably be the locus of future earthquakes, which could destroy the airport and surrounding part of Managua. Three regional SW-NE gravity profiles running from the Pacific Ocean up to the Caribbean Sea indicate a change in crustal structure: from north to south the crust thins. According to these regional crustal models, the offset observed in the Volcanic Front around the Nicaragua Lake is associated with a weakness zone related to: 1) this N-S change in crustal structure, 2) the subduction angle of the Cocos plate, and 3) the distance to the Middle America Trench (i.e. the location of the mantle wedge). As mentioned above, a subducted transform fault might have given rise to this crustal discontinuity.
CPMIP: measurements of real computational performance of Earth system models in CMIP6
NASA Astrophysics Data System (ADS)
Balaji, Venkatramani; Maisonnave, Eric; Zadeh, Niki; Lawrence, Bryan N.; Biercamp, Joachim; Fladrich, Uwe; Aloisio, Giovanni; Benson, Rusty; Caubel, Arnaud; Durachta, Jeffrey; Foujols, Marie-Alice; Lister, Grenville; Mocavero, Silvia; Underwood, Seth; Wright, Garrett
2017-01-01
A climate model represents a multitude of processes on a variety of timescales and space scales: a canonical example of multi-physics multi-scale modeling. The underlying climate system is physically characterized by sensitive dependence on initial conditions, and natural stochastic variability, so very long integrations are needed to extract signals of climate change. Algorithms generally possess weak scaling and can be I/O and/or memory-bound. Such weak-scaling, I/O, and memory-bound multi-physics codes present particular challenges to computational performance. Traditional metrics of computational efficiency such as performance counters and scaling curves do not tell us enough about real sustained performance from climate models on different machines. They also do not provide a satisfactory basis for comparative information across models. We introduce a set of metrics that can be used for the study of computational performance of climate (and Earth system) models. These measures do not require specialized software or specific hardware counters, and should be accessible to anyone. They are independent of platform and underlying parallel programming models. We show how these metrics can be used to measure actually attained performance of Earth system models on different machines, and identify the most fruitful areas of research and development for performance engineering. We present results for these measures for a diverse suite of models from several modeling centers, and propose to use these measures as a basis for a CPMIP, a computational performance model intercomparison project (MIP).
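To make the flavour of such metrics concrete, a throughput measure and a cost measure can be computed from nothing more than job accounting data. The sketch below is illustrative only; the names and formulas are assumptions, not the definitions adopted by the paper.

```python
def simulated_years_per_day(simulated_years: float, wallclock_hours: float) -> float:
    """Throughput: model years completed per day of wall-clock time."""
    return simulated_years / (wallclock_hours / 24.0)

def core_hours_per_simulated_year(cores: int, wallclock_hours: float,
                                  simulated_years: float) -> float:
    """Cost: core-hours consumed per simulated year."""
    return cores * wallclock_hours / simulated_years

# Example: a 10-year run on 1,152 cores that takes 36 wall-clock hours.
print(simulated_years_per_day(10, 36))               # ~6.7 simulated years/day
print(core_hours_per_simulated_year(1152, 36, 10))   # ~4,147 core-hours/year
```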
NASA Astrophysics Data System (ADS)
Matsakis, Nicholas D.; Gross, Thomas R.
Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.
Computational structures for robotic computations
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chang, P. R.
1987-01-01
The computational problem of inverse kinematics and inverse dynamics of robot manipulators by taking advantage of parallelism and pipelining architectures is discussed. For the computation of inverse kinematic position solution, a maximum pipelined CORDIC architecture has been designed based on a functional decomposition of the closed-form joint equations. For the inverse dynamics computation, an efficient p-fold parallel algorithm to overcome the recurrence problem of the Newton-Euler equations of motion to achieve the time lower bound of O(log₂ n) has also been developed.
Radiative energy transfer in molecular gases
NASA Technical Reports Server (NTRS)
Tiwari, Surendra N.
1992-01-01
Basic formulations, analyses, and numerical procedures are presented to study radiative interactions in gray as well as nongray gases under different physical and flow conditions. After preliminary fluid-dynamical considerations, essential governing equations for radiative transport are presented that are applicable under local and nonlocal thermodynamic equilibrium conditions. Auxiliary relations for relaxation times and spectral absorption models are also provided. For specific applications, several simple gaseous systems are analyzed. The first system considered consists of a gas bounded by two parallel plates having the same temperature. Within the gas there is a uniform heat source per unit volume. For this system, both vibrational nonequilibrium effects and radiation conduction interactions are studied. The second system consists of fully developed laminar flow and heat transfer in a parallel plate duct under the boundary condition of a uniform surface heat flux. For this system, effects of gray surface emittance are studied. With the single exception of a circular geometry, the third system is considered identical to the second system. Here, the influence of nongray walls is also studied.
Xu, Jing; Marsac, Rémi; Costa, Dominique; Cheng, Wei; Wu, Feng; Boily, Jean-François; Hanna, Khalil
2017-08-01
The emergence of antibiotic and anti-inflammatory agents in aquatic and terrestrial systems is becoming a serious threat to human and animal health worldwide. Because pharmaceutical compounds rarely exist individually in nature, interactions between various compounds can have unforeseen effects on their binding to mineral surfaces. This work demonstrates this important possibility for the case of two typical antibiotic and anti-inflammatory agents (nalidixic acid (NA) and niflumic acid (NFA)) bound at goethite (α-FeOOH) used as a model mineral surface. Our multidisciplinary study, which makes use of batch sorption experiments, vibration spectroscopy and periodic density functional theory calculations, reveals enhanced binding of the otherwise weakly bound NFA caused by unforeseen intermolecular interactions with mineral-bound NA. This enhancement is ascribed to the formation of a NFA-NA dimer whose energetically favored formation (-0.5 eV compared to free molecules) is predominantly driven by van der Waals interactions. A parallel set of efforts also showed that no cobinding occurred with sulfamethoxazole (SMX) because of the lack of molecular interactions with coexisting contaminants. As such, this article raises the importance of recognizing drug cobinding, and lack of cobinding, for predicting and developing policies on the fate of complex mixtures of antibiotics and anti-inflammatory agents in nature.
Distributed deformation and block rotation in 3D
NASA Technical Reports Server (NTRS)
Scotti, Oona; Nur, Amos; Estevez, Raul
1990-01-01
The authors address how block rotation and complex distributed deformation in the Earth's shallow crust may be explained within a stationary regional stress field. Distributed deformation is characterized by domains of sub-parallel fault-bounded blocks. In response to the contemporaneous activity of neighboring domains some domains rotate, as suggested by both structural and paleomagnetic evidence. Rotations within domains are achieved through the contemporaneous slip and rotation of the faults and of the blocks they bound. Thus, in regions of distributed deformation, faults must remain active in spite of their poor orientation in the stress field. The authors developed a model that tracks the orientation of blocks and their bounding faults during rotation in a 3D stress field. In the model, the effective stress magnitudes of the principal stresses (σ₁, σ₂, and σ₃) are controlled by the orientation of fault sets in each domain. Therefore, adjacent fault sets with differing orientations may be active and may display differing faulting styles, and a given set of faults may change its style of motion as it rotates within a stationary stress regime. The style of faulting predicted by the model depends on a dimensionless parameter φ = (σ₂ − σ₃)/(σ₁ − σ₃). Thus, the authors present a model for complex distributed deformation and complex offset history requiring neither geographical nor temporal changes in the stress regime. They apply the model to the Western Transverse Range domain of southern California. There, it is mechanically feasible for blocks and faults to have experienced up to 75 degrees of clockwise rotation in a φ = 0.1 strike-slip stress regime. The results of the model suggest that this domain may first have accommodated deformation along preexisting NNE-SSW faults, reactivated as normal faults. After rotation, these same faults became strike-slip in nature.
A Bayesian approach to modeling 2D gravity data using polygon states
NASA Astrophysics Data System (ADS)
Titus, W. J.; Titus, S.; Davis, J. R.
2015-12-01
We present a Bayesian Markov chain Monte Carlo (MCMC) method for the 2D gravity inversion of a localized subsurface object with constant density contrast. Our models have four parameters: the density contrast, the number of vertices in a polygonal approximation of the object, an upper bound on the ratio of the perimeter squared to the area, and the vertices of a polygon container that bounds the object. Reasonable parameter values can be estimated prior to inversion using a forward model and geologic information. In addition, we assume that the field data have a common random uncertainty that lies between two bounds but that it has no systematic uncertainty. Finally, we assume that there is no uncertainty in the spatial locations of the measurement stations. For any set of model parameters, we use MCMC methods to generate an approximate probability distribution of polygons for the object. We then compute various probability distributions for the object, including the variance between the observed and predicted fields (an important quantity in the MCMC method), the area, the center of area, and the occupancy probability (the probability that a spatial point lies within the object). In addition, we compare probabilities of different models using parallel tempering, a technique which also mitigates trapping in local optima that can occur in certain model geometries. We apply our method to several synthetic data sets generated from objects of varying shape and location. We also analyze a natural data set collected across the Rio Grande Gorge Bridge in New Mexico, where the object (i.e. the air below the bridge) is known and the canyon is approximately 2D. Although there are many ways to view results, the occupancy probability proves quite powerful. We also find that the choice of the container is important. In particular, large containers should be avoided, because the more closely a container confines the object, the better the predictions match properties of object.
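The parallel-tempering step mentioned above exchanges configurations between chains run at different temperatures, which is what mitigates trapping in local optima. The following generic sketch (not the authors' implementation; the temperature ladder and energy values are placeholders) shows the standard swap-acceptance rule:

```python
import math
import random

def parallel_tempering_swap(states, energies, temps):
    """Attempt one Metropolis swap between a randomly chosen pair of adjacent
    temperature chains. states[i] and energies[i] belong to the chain at
    temperature temps[i]; swaps let a chain trapped near a local optimum
    escape through a hotter replica."""
    i = random.randrange(len(temps) - 1)
    j = i + 1
    # Standard replica-exchange acceptance: min(1, exp[(1/T_i - 1/T_j) * (E_i - E_j)])
    log_accept = (1.0 / temps[i] - 1.0 / temps[j]) * (energies[i] - energies[j])
    if log_accept >= 0 or random.random() < math.exp(log_accept):
        states[i], states[j] = states[j], states[i]
        energies[i], energies[j] = energies[j], energies[i]
    return states, energies
```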
Data decomposition method for parallel polygon rasterization considering load balancing
NASA Astrophysics Data System (ADS)
Zhou, Chen; Chen, Zhenjie; Liu, Yongxue; Li, Feixue; Cheng, Liang; Zhu, A.-xing; Li, Manchun
2015-12-01
It is essential to adopt parallel computing technology to rapidly rasterize massive polygon data. In parallel rasterization, it is difficult to design an effective data decomposition method. Conventional methods ignore load balancing of polygon complexity in parallel rasterization and thus fail to achieve high parallel efficiency. In this paper, a novel data decomposition method based on polygon complexity (DMPC) is proposed. First, four factors that possibly affect the rasterization efficiency were investigated. Then, a metric represented by the boundary number and raster pixel number in the minimum bounding rectangle was developed to calculate the complexity of each polygon. Using this metric, polygons were rationally allocated according to the polygon complexity, and each process could achieve balanced loads of polygon complexity. To validate the efficiency of DMPC, it was used to parallelize different polygon rasterization algorithms and tested on different datasets. Experimental results showed that DMPC could effectively parallelize polygon rasterization algorithms. Furthermore, the implemented parallel algorithms with DMPC could achieve good speedup ratios of at least 15.69 and generally outperformed conventional decomposition methods in terms of parallel efficiency and load balancing. In addition, the results showed that DMPC exhibited consistently better performance for different spatial distributions of polygons.
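A minimal sketch of the load-balancing idea is given below; the specific complexity weighting and the greedy allocation heuristic are illustrative assumptions, not the exact DMPC formulation:

```python
from dataclasses import dataclass

@dataclass
class Polygon:
    boundary_points: int   # number of boundary vertices
    mbr_pixels: int        # raster pixels covered by the minimum bounding rectangle

def complexity(p: Polygon) -> float:
    # Simple proxy combining the two factors named in the abstract;
    # the actual DMPC weighting may differ.
    return p.boundary_points + p.mbr_pixels

def allocate(polygons, n_procs):
    """Greedy longest-processing-time assignment: each polygon (heaviest first)
    goes to the process with the smallest accumulated complexity."""
    loads = [0.0] * n_procs
    buckets = [[] for _ in range(n_procs)]
    for poly in sorted(polygons, key=complexity, reverse=True):
        k = loads.index(min(loads))
        buckets[k].append(poly)
        loads[k] += complexity(poly)
    return buckets
```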
Searching for an Axis-Parallel Shoreline
NASA Astrophysics Data System (ADS)
Langetepe, Elmar
We are searching for an unknown horizontal or vertical line in the plane under the competitive framework. We design a framework for lower bounds on all cyclic and monotone strategies that result in two-sequence functionals. For optimizing such functionals we apply a method that combines two main paradigms. The given solution shows that the combination method is of general interest. Finally, we obtain the current best strategy and prove that it is optimal among all cyclic and monotone strategies, which is a main step toward a lower-bound construction.
Center for Parallel Optimization
1993-09-30
Distribution/Availability Statement: Approved for public release. …Machines Corporation, March 16-19, 1993, "A Branch-and-Bound Method for Mixed Integer Programming on the CM-5"; Dr. Roberto Musmanno, University of …
Darcy Flow in a Wavy Channel Filled with a Porous Medium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gray, Donald D; Ogretim, Egemen; Bromhal, Grant S
2013-05-17
Flow in channels bounded by wavy or corrugated walls is of interest in both technological and geological contexts. This paper presents an analytical solution for the steady Darcy flow of an incompressible fluid through a homogeneous, isotropic porous medium filling a channel bounded by symmetric wavy walls. This packed channel may represent an idealized packed fracture, a situation which is of interest as a potential pathway for the leakage of carbon dioxide from a geological sequestration site. The channel walls change from parallel planes, to small amplitude sine waves, to large amplitude nonsinusoidal waves as certain parameters are increased. The direction of gravity is arbitrary. A plot of piezometric head against distance in the direction of mean flow changes from a straight line for parallel planes to a series of steeply sloping sections in the reaches of small aperture alternating with nearly constant sections in the large aperture bulges. Expressions are given for the stream function, specific discharge, piezometric head, and pressure.
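For context, the steady Darcy flow referenced above is governed by the standard relations (symbols here are generic, not the paper's notation):

$$ \mathbf{q} \;=\; -K\,\nabla h, \qquad h \;=\; \frac{p}{\rho g} + z, \qquad \nabla\!\cdot\mathbf{q} \;=\; 0 \;\;\Rightarrow\;\; \nabla^{2} h \;=\; 0, $$

where q is the specific discharge, K the hydraulic conductivity, h the piezometric head, and z the elevation; for a homogeneous, isotropic medium the head therefore satisfies Laplace's equation within the wavy channel.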
NASA Astrophysics Data System (ADS)
Alvarez, Laura V.; Schmeeckle, Mark W.; Grams, Paul E.
2017-01-01
Lateral flow separation occurs in rivers where banks exhibit strong curvature. In canyon-bound rivers, lateral recirculation zones are the principal storage of fine-sediment deposits. A parallelized, three-dimensional, turbulence-resolving model was developed to study the flow structures along lateral separation zones located in two pools along the Colorado River in Marble Canyon. The model employs the detached eddy simulation (DES) technique, which resolves turbulence structures larger than the grid spacing in the interior of the flow. The DES-3D model is validated using Acoustic Doppler Current Profiler flow measurements taken during the 2008 controlled flood release from Glen Canyon Dam. A point-to-point validation using a number of skill metrics, often employed in hydrological research, is proposed here for fluvial modeling. The validation results show predictive capabilities of the DES model. The model reproduces the pattern and magnitude of the velocity in the lateral recirculation zone, including the size and position of the primary and secondary eddy cells, and return current. The lateral recirculation zone is open, having continuous import of fluid upstream of the point of reattachment and export by the recirculation return current downstream of the point of separation. Differences in magnitude and direction of near-bed and near-surface velocity vectors are found, resulting in an inward vertical spiral. Interaction between the recirculation return current and the main flow is dynamic, with large temporal changes in flow direction and magnitude. Turbulence structures with a predominately vertical axis of vorticity are observed in the shear layer becoming three-dimensional without preferred orientation downstream.
NASA Astrophysics Data System (ADS)
Massey, M. A.; Moecher, D. P.
2006-12-01
One widely cited model for Appalachian orogenesis in New England invokes the tripartite Alpine sequence of nappe folding/thrusting, back-folding, and doming to explain regional and outcrop-scale structural relationships. Recent work suggests lateral extrusion driven by oblique convergence as an important mechanism responsible for structures, fabrics, and mineral assemblages in the Bronson Hill terrane (BHT) of Connecticut and Massachusetts. Just as the Alpine model has evolved to incorporate elements of lateral extrusion, and syn- to post-orogenic collapse, we propose similar revisions for southern New England. Detailed mapping and structural analysis of the W- to WNW-dipping BHT in south-central MA reveals: (1) a sub-vertical, transpressional dextral thrust high strain zone (Bonemill/Conant Brook shear zone) bounding the eastern margin of the Monson granitic gneiss dome (MG) with two modes of Sil+Qtz+Fs lineations plunging WNW and SSW; (2) a moderate to steeply-dipping sinistral high strain zone bounding the western margin of the MG with WNW- and SSW-plunging Ms+Qtz+Grt lineations; (3) an apparently random arrangement of gneiss, s and s-l tectonites, protomylonites, and mylonites composing the body of the MG, also containing WNW and SSW Qtz+Fs lineations. Extrapolation to a regional scale from central CT to northern MA indicates: (1) a gradual increase in s-l and l-s tectonites to the north from predominantly s-tectonites in central CT; (2) transition of lineation plunge from NW in central CT to bimodal WNW and SSW distribution to the north; (3) amphibolite facies metamorphism was pre- to synkinematic with respect to deformation. We propose that these observations may be accounted for by transpression and extrusion, rather than discrete phases of deformation invoked by the traditional three-stage model. Synchronous operation of high strain zones bounding the MG accommodated northward orogen-parallel extrusion in addition to a component of orogen-normal shortening and sub-vertical extrusion, thus constituting bulk heterogeneous flow. Existing geochronology/thermochronology constrains deformation to the late Paleozoic Alleghanian orogeny. The consistency in timing and similarity in style with deformation associated with the Pelham dome demonstrate the significance of orogen-parallel flow in the BHT. We go further by presenting a working late Paleozoic tectonic model incorporating data from this study with existing contributions from other workers in southern New England. This model involves oblique convergence and underthrusting of Avalon in the late Mississippian/early Pennsylvanian continuing into and throughout most of the Permian. Synorogenic compressional and extensional structures from upper amphibolite to greenschist facies are explained by progressive deformation, including extrusion, orogenic collapse, and wedging, throughout an evolving metamorphic gradient.
NASA Astrophysics Data System (ADS)
Gannot, Israel; Bonner, Robert F.; Gannot, Gallya; Fox, Philip C.; You, Joon S.; Waynant, Ronald W.; Gandjbakhche, Amir H.
1997-08-01
A series of fluorescent surface images were obtained from physical models of localized fluorophores embedded at various depths and separations in tissue phantoms. Our random walk theory was applied to create an analytical model of multiple fluorophores embedded in a tissue-like phantom. Using this model and the acquired set of surface images, the locations of the fluorophores were reconstructed and compared to their known 3-D distributions. A good correlation was found, and the ability to resolve fluorophores as a function of depth and separation was determined. In a parallel in-vitro study, specific coloring of sections of minor salivary glands was also demonstrated. These results demonstrate the possibility of using inverse methods to reconstruct unknown locations and concentrations of optical probes specifically bound to infiltrating lymphocytes in minor salivary glands of patients with Sjogren's syndrome.
Calculation of Crystallographic Texture of BCC Steels During Cold Rolling
NASA Astrophysics Data System (ADS)
Das, Arpan
2017-05-01
BCC alloys commonly tend to develop strong fibre textures, often represented as isointensity diagrams in φ₁ sections or by fibre diagrams. The alpha fibre in bcc steels is generally characterised by a <110> crystallographic axis parallel to the rolling direction. The objective of the present research is to correlate carbon content, carbide dispersion, rolling reduction, and the Euler angle Φ (when φ₁ = 0° and φ₂ = 45° along the alpha fibre) with the resulting alpha fibre texture orientation intensity. In the present research, Bayesian neural computation has been employed to establish these correlations and to compare comprehensively with the existing feed-forward neural network model. An excellent match to the measured texture data within the bounding box of the texture training data set has already been predicted through the feed-forward neural network model by other researchers. Feed-forward neural network predictions outside the bounds of the training texture data showed deviations from the expected values. Here, Bayesian computation has been similarly applied to confirm that the predictions are reasonable in the context of basic metallurgical principles, and that they match better outside the bounds of the training texture data set than the reported feed-forward neural network. Bayesian computation puts error bars on predicted values and allows the significance of each individual parameter to be estimated. Additionally, Bayesian computation makes it possible to estimate the isolated influence of a particular variable, such as carbon concentration, which in practice cannot be varied independently. This shows the ability of the Bayesian neural network to examine new phenomena in situations where the data cannot be accessed through experiments.
The immunity-related GTPase Irga6 dimerizes in a parallel head-to-head fashion.
Schulte, Kathrin; Pawlowski, Nikolaus; Faelber, Katja; Fröhlich, Chris; Howard, Jonathan; Daumke, Oliver
2016-03-02
The immunity-related GTPases (IRGs) constitute a powerful cell-autonomous resistance system against several intracellular pathogens. Irga6 is a dynamin-like protein that oligomerizes at the parasitophorous vacuolar membrane (PVM) of Toxoplasma gondii leading to its vesiculation. Based on a previous biochemical analysis, it has been proposed that the GTPase domains of Irga6 dimerize in an antiparallel fashion during oligomerization. We determined the crystal structure of an oligomerization-impaired Irga6 mutant bound to a non-hydrolyzable GTP analog. Contrary to the previous model, the structure shows that the GTPase domains dimerize in a parallel fashion. The nucleotides in the center of the interface participate in dimerization by forming symmetric contacts with each other and with the switch I region of the opposing Irga6 molecule. The latter contact appears to activate GTP hydrolysis by stabilizing the position of the catalytic glutamate 106 in switch I close to the active site. Further dimerization contacts involve switch II, the G4 helix and the trans stabilizing loop. The Irga6 structure features a parallel GTPase domain dimer, which appears to be a unifying feature of all dynamin and septin superfamily members. This study contributes important insights into the assembly and catalytic mechanisms of IRG proteins as prerequisite to understand their anti-microbial action.
2014-01-01
Co-doped SnO2 thin films were grown by sputtering technique on SiO2/Si(001) substrates at room temperature, and then, thermal treatments with and without an applied magnetic field (HTT) were performed in vacuum at 600°C for 20 min. HTT was applied parallel and perpendicular to the substrate surface. Magnetic M(H) measurements reveal the coexistence of a strong antiferromagnetic (AFM) signal and a ferromagnetic (FM) component. The AFM component has a Néel temperature higher than room temperature, the spin axis lies parallel to the substrate surface, and the highest magnetic moment m =7 μB/Co at. is obtained when HTT is applied parallel to the substrate surface. Our results show an enhancement of FM moment per Co+2 from 0.06 to 0.42 μB/Co at. for the sample on which HTT was applied perpendicular to the surface. The FM order is attributed to the coupling of Co+2 ions through electrons trapped at the site of oxygen vacancies, as described by the bound magnetic polaron model. Our results suggest that FM order is aligned along [101] direction of Co-doped SnO2 nanocrystals, which is proposed to be the easy magnetization axis. PMID:25489286
Nonadiabatic electron response in the Hasegawa-Wakatani equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoltzfus-Dueck, T.; Scott, B. D.; Krommes, J. A.
2013-08-15
Tokamak edge turbulence is strongly influenced by parallel electron physics, which relaxes density and potential fluctuations towards electron adiabatic response. Beginning with the paradigmatic Hasegawa-Wakatani equations (HWEs) for resistive tokamak edge turbulence, a unique decomposition of the electric potential (φ) into adiabatic (a) and nonadiabatic (b) portions is derived, based on the requirement that a neither drive nor respond to the parallel current j∥. The form of the decomposition clarifies that, at perpendicular scales large relative to the sound radius, the electron adiabatic response controls the nonzonal φ, not the fluctuating density n. Simple energy balance arguments allow one to rigorously bound the ratio of rms nonzonal nonadiabatic fluctuations (b̃) relative to adiabatic ones (ã). The role of the vorticity nonlinearity in transferring energy between adiabatic and nonadiabatic fluctuations aids intuitive understanding of self-sustained turbulence in the HWEs. When the normalized parallel resistivity is weak, b̃ becomes effectively slaved, allowing the reduction to an approximate one-field model that remains valid for strong turbulence. In addition to guiding physical intuition, the one-field reduction should greatly ease further analytical manipulations. Direct numerical simulation of the 2D HWEs confirms the convergence of the asymptotic formula for b̃.
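For readers unfamiliar with the model, a commonly used normalized form of the 2D Hasegawa-Wakatani system is sketched below; sign and dissipation conventions vary between papers, so this should be read as illustrative rather than as the exact form used in the study:

$$ \frac{\partial \nabla^{2}\phi}{\partial t} + \{\phi,\nabla^{2}\phi\} \;=\; \alpha\,(\phi - n) + \mu\,\nabla^{4}\phi, $$

$$ \frac{\partial n}{\partial t} + \{\phi,n\} + \kappa\,\frac{\partial \phi}{\partial y} \;=\; \alpha\,(\phi - n) + D\,\nabla^{2}n, $$

where {f, g} is the Poisson bracket, α is the adiabaticity parameter set by the parallel resistivity, and κ measures the background density gradient; in the limit α → ∞ the nonzonal potential is driven towards the density fluctuation, i.e. the adiabatic electron response.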
Size and Shape of the Distant Magnetotail
NASA Technical Reports Server (NTRS)
Sibeck, D.G.; Lin, R.-Q.
2014-01-01
We employ a global magnetohydrodynamic model to study the effects of the interplanetary magnetic field (IMF) strength and direction upon the cross-section of the magnetotail at lunar distances. The anisotropic pressure of draped magnetosheath magnetic field lines and the inclusion of a reconnection-generated standing slow mode wave fan bounded by a rotational discontinuity within the definition of the magnetotail result in cross-sections elongated in the direction parallel to the component of the IMF in the plane perpendicular to the Sun-Earth line. Tilted cross-tail plasma sheets separate the northern and southern lobes within these cross-sections. Greater fast mode speeds perpendicular than parallel to the draped magnetosheath magnetic field lines result in greater distances to the bow shock in the direction perpendicular than parallel to the component of the IMF in the plane transverse to the Sun-Earth line. The magnetotail cross-section responds rapidly: reconnected magnetic field lines require no more than the magnetosheath convection time to appear at any distance downstream, and further adjustments of the cross-section in response to the anisotropic pressures of the draped magnetic field lines require no more than 10-20 minutes. Consequently, for typical ecliptic IMF orientations and strengths, the magnetotail cross-section is oblate while the bow shock is prolate.
The generalized accessibility and spectral gap of lower hybrid waves in tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, Hironori
1994-03-01
The generalized accessibility of lower hybrid waves, primarily in the current drive regime of tokamak plasmas, which may include shifting, either upward or downward, of the parallel refractive index (n∥), is investigated, based upon a cold plasma dispersion relation and various geometrical constraint (G.C.) relations imposed on the behavior of n∥. It is shown that n∥ upshifting can be bounded and insufficient to bridge a large spectral gap to cause wave damping, depending upon whether the G.C. relation allows the oblique resonance to occur. The traditional n∥ upshifting mechanism caused by the pitch angle of magnetic field lines is shown to lead to contradictions with experimental observations. An upshifting mechanism brought about by the density gradient along field lines is proposed, which is not inconsistent with experimental observations, and provides plausible explanations to some unresolved issues of lower hybrid wave theory, including generation of 'seed electrons.'
Parallelized reliability estimation of reconfigurable computer networks
NASA Technical Reports Server (NTRS)
Nicol, David M.; Das, Subhendu; Palumbo, Dan
1990-01-01
A parallelized system, ASSURE, for computing the reliability of embedded avionics flight control systems which are able to reconfigure themselves in the event of failure is described. ASSURE accepts a grammar that describes a reliability semi-Markov state-space. From this it creates a parallel program that simultaneously generates and analyzes the state-space, placing upper and lower bounds on the probability of system failure. ASSURE is implemented on a 32-node Intel iPSC/860, and has achieved high processor efficiencies on real problems. Through a combination of improved algorithms, exploitation of parallelism, and use of an advanced microprocessor architecture, ASSURE has reduced the execution time on substantial problems by a factor of one thousand over previous workstation implementations. Furthermore, ASSURE's parallel execution rate on the iPSC/860 is an order of magnitude faster than its serial execution rate on a Cray-2 supercomputer. While dynamic load balancing is necessary for ASSURE's good performance, it is needed only infrequently; the particular method of load balancing used does not substantially affect performance.
Gust Acoustics Computation with a Space-Time CE/SE Parallel 3D Solver
NASA Technical Reports Server (NTRS)
Wang, X. Y.; Himansu, A.; Chang, S. C.; Jorgenson, P. C. E.; Reddy, D. R. (Technical Monitor)
2002-01-01
The benchmark Problem 2 in Category 3 of the Third Computational Aero-Acoustics (CAA) Workshop is solved using the space-time conservation element and solution element (CE/SE) method. This problem concerns the unsteady response of an isolated finite-span swept flat-plate airfoil bounded by two parallel walls to an incident gust. The acoustic field generated by the interaction of the gust with the flat-plate airfoil is computed by solving the 3D (three-dimensional) Euler equations in the time domain using a parallel version of a 3D CE/SE solver. The effect of the gust orientation on the far-field directivity is studied. Numerical solutions are presented and compared with analytical solutions, showing a reasonable agreement.
NASA Technical Reports Server (NTRS)
Sargent, Jeff Scott
1988-01-01
A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to controlling error in parallel cell-placement algorithms: Heuristic Cell-Coloring and Adaptive (Parallel Move) Sequence Control. Heuristic Cell-Coloring identifies sets of noninteracting cells that can be moved repeatedly, and in parallel, with no buildup of error in the placement cost. Adaptive Sequence Control allows multiple parallel cell moves to take place between global cell-position updates. This feedback mechanism is based on an error bound derived analytically from the traditional annealing move-acceptance profile. Placement results are presented for real industry circuits, and the performance of an implementation on the Intel iPSC/2 Hypercube is summarized. The runtime of this algorithm is 5 to 16 times faster than a previous program developed for the Hypercube, while producing equivalent quality placement. An integrated place and route program for the Intel iPSC/2 Hypercube is currently being developed.
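The move-acceptance profile referred to above is the standard Metropolis rule of simulated annealing. The following textbook sketch (not the paper's placement algorithm) shows the rule from which such an error bound would be derived:

```python
import math
import random

def accept_move(delta_cost: float, temperature: float) -> bool:
    """Metropolis acceptance rule used in simulated annealing:
    always accept improvements; accept uphill moves with probability exp(-delta/T)."""
    if delta_cost <= 0:
        return True
    return random.random() < math.exp(-delta_cost / temperature)
```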
Parallel Geospatial Data Management for Multi-Scale Environmental Data Analysis on GPUs
NASA Astrophysics Data System (ADS)
Wang, D.; Zhang, J.; Wei, Y.
2013-12-01
As the spatial and temporal resolutions of Earth observatory data and Earth system simulation outputs are getting higher, in-situ and/or post-processing of such large amounts of geospatial data increasingly becomes a bottleneck in scientific inquiries into Earth systems and their human impacts. Existing geospatial techniques that are based on outdated computing models (e.g., serial algorithms and disk-resident systems), as implemented in many commercial and open-source packages, are incapable of processing large-scale geospatial data and achieving the desired level of performance. In this study, we have developed a set of parallel data structures and algorithms that are capable of utilizing the massively data-parallel computing power available on commodity Graphics Processing Units (GPUs) for a popular geospatial technique called Zonal Statistics. Given two input datasets, one representing measurements (e.g., temperature or precipitation) and the other representing polygonal zones (e.g., ecological or administrative zones), Zonal Statistics computes major statistics (or complete distribution histograms) of the measurements in all regions. Our technique has four steps, and each step can be mapped to GPU hardware by identifying its inherent data parallelism. First, the raster is divided into blocks and per-block histograms are derived. Second, the Minimum Bounding Rectangles (MBRs) of polygons are computed and spatially matched with raster blocks; matched polygon-block pairs are tested, and blocks that are either inside or intersect with polygons are identified. Third, per-block histograms are aggregated to polygons for blocks that are completely within polygons. Finally, for blocks that intersect polygon boundaries, all the raster cells within the blocks are examined using a point-in-polygon test, and cells that are within polygons are used to update the corresponding histograms. As the task becomes I/O bound after applying spatial indexing and GPU hardware acceleration, we have developed a GPU-based data compression technique by reusing our previous work on Bitplane Quadtree (BPQ-Tree) based indexing of binary bitmaps. Results have shown that our GPU-based parallel Zonal Statistics technique on 3000+ US counties over 20+ billion NASA SRTM 30 meter resolution Digital Elevation Model (DEM) raster cells has achieved impressive end-to-end runtimes: 101 seconds and 46 seconds on a low-end workstation equipped with an Nvidia GTX Titan GPU using cold and hot cache, respectively, and 60-70 seconds using a single OLCF TITAN computing node and 10-15 seconds using 8 nodes. Our experimental results clearly show the potential of using high-end computing facilities for large-scale geospatial processing.
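A serial reference sketch of the per-polygon filtering described above is given below. It uses the shapely and numpy libraries and a hypothetical transform callable mapping raster indices to coordinates; the GPU block/MBR pipeline itself is not reproduced here.

```python
import numpy as np
from shapely.geometry import Point, Polygon

def zonal_histogram(raster: np.ndarray, transform, polygon: Polygon, bins):
    """Histogram of raster cells whose centers fall inside `polygon`.
    `transform(row, col)` is assumed to return the (x, y) cell-center coordinates."""
    minx, miny, maxx, maxy = polygon.bounds   # MBR pre-filter, as in step two above
    values = []
    rows, cols = raster.shape
    for r in range(rows):
        for c in range(cols):
            x, y = transform(r, c)
            if minx <= x <= maxx and miny <= y <= maxy and polygon.contains(Point(x, y)):
                values.append(raster[r, c])
    return np.histogram(values, bins=bins)
```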
Parallelization of implicit finite difference schemes in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Decker, Naomi H.; Naik, Vijay K.; Nicoules, Michel
1990-01-01
Implicit finite difference schemes are often the preferred numerical schemes in computational fluid dynamics, requiring less stringent stability bounds than the explicit schemes. Each iteration in an implicit scheme involves global data dependencies in the form of second and higher order recurrences. Efficient parallel implementations of such iterative methods are considerably more difficult and non-intuitive. The parallelization of the implicit schemes that are used for solving the Euler and the thin layer Navier-Stokes equations and that require inversions of large linear systems in the form of block tri-diagonal and/or block penta-diagonal matrices is discussed. Three-dimensional cases are emphasized and schemes that minimize the total execution time are presented. Partitioning and scheduling schemes for alleviating the effects of the global data dependencies are described. An analysis of the communication and the computation aspects of these methods is presented. The effect of the boundary conditions on the parallel schemes is also discussed.
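As background, the tri-diagonal solves mentioned above are classically performed with the Thomas algorithm, whose forward and backward sweeps are first-order recurrences and therefore inherently sequential; the generic sketch below (not the paper's parallel scheme) makes this data dependency explicit:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b, super-diagonal c, rhs d.
    The forward sweep is a first-order recurrence: each step depends on the previous one."""
    n = len(d)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution, also a sequential recurrence
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```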
NASA Astrophysics Data System (ADS)
Meléndez, A.; Korenaga, J.; Sallarès, V.; Miniussi, A.; Ranero, C. R.
2015-10-01
We present a new 3-D traveltime tomography code (TOMO3D) for the modelling of active-source seismic data that uses the arrival times of both refracted and reflected seismic phases to derive the velocity distribution and the geometry of reflecting boundaries in the subsurface. This code is based on its popular 2-D version TOMO2D from which it inherited the methods to solve the forward and inverse problems. The traveltime calculations are done using a hybrid ray-tracing technique combining the graph and bending methods. The LSQR algorithm is used to perform the iterative regularized inversion to improve the initial velocity and depth models. In order to cope with an increased computational demand due to the incorporation of the third dimension, the forward problem solver, which takes most of the run time (˜90 per cent in the test presented here), has been parallelized with a combination of multi-processing and message passing interface standards. This parallelization distributes the ray-tracing and traveltime calculations among available computational resources. The code's performance is illustrated with a realistic synthetic example, including a checkerboard anomaly and two reflectors, which simulates the geometry of a subduction zone. The code is designed to invert for a single reflector at a time. A data-driven layer-stripping strategy is proposed for cases involving multiple reflectors, and it is tested for the successive inversion of the two reflectors. Layers are bound by consecutive reflectors, and an initial velocity model for each inversion step incorporates the results from previous steps. This strategy poses simpler inversion problems at each step, allowing the recovery of strong velocity discontinuities that would otherwise be smoothened.
On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms.
Chen, Chunlei; He, Li; Zhang, Huixiang; Zheng, Hao; Wang, Lei
2017-01-01
Incremental clustering algorithms play a vital role in applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering place high demands on the computing power of the hardware platform. Parallel computing is a common solution to meet this demand, and the General Purpose Graphics Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, incremental clustering algorithms face a dilemma between clustering accuracy and parallelism when they are powered by GPGPUs. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering, such as evolving granularity. Second, we formally proved two theorems. The first theorem establishes the relation between clustering accuracy and evolving granularity; it also analyzes the upper and lower bounds of different-to-same mis-affiliation, where fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity, where smaller work-depth means superior parallelism. Through the proofs, we conclude that the accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to the granularity; these contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm, and the experimental results verified the theoretical conclusions.
Structural analysis of poly-SUMO chain recognition by the RNF4-SIMs domain.
Kung, Camy C-H; Naik, Mandar T; Wang, Szu-Huan; Shih, Hsiu-Ming; Chang, Che-Chang; Lin, Li-Ying; Chen, Chia-Lin; Ma, Che; Chang, Chi-Fon; Huang, Tai-Huang
2014-08-15
The E3 ubiquitin ligase RNF4 (RING finger protein 4) contains four tandem SIM [SUMO (small ubiquitin-like modifier)-interaction motif] repeats for selective interaction with poly-SUMO-modified proteins, which it targets for degradation. We employed a multi-faceted approach to characterize the structure of the RNF4-SIMs domain and the tetra-SUMO2 chain to elucidate the interaction between them. In solution, the SIM domain was intrinsically disordered and the linkers of the tetra-SUMO2 were highly flexible. Individual SIMs of the RNF4-SIMs domains bind to SUMO2 in the groove between the β2-strand and the α1-helix parallel to the β2-strand. SIM2 and SIM3 bound to SUMO with a high affinity and together constituted the recognition module necessary for SUMO binding. SIM4 alone bound to SUMO with low affinity; however, its contribution to tetra-SUMO2 binding avidity is comparable with that of SIM3 when in the RNF4-SIMs domain. The SAXS data of the tetra-SUMO2-RNF4-SIMs domain complex indicate that it exists as an ordered structure. The HADDOCK model showed that the tandem RNF4-SIMs domain bound antiparallel to the tetra-SUMO2 chain orientation and wrapped around the SUMO protomers in a superhelical turn without imposing steric hindrance on either molecule.
Methods for compressible fluid simulation on GPUs using high-order finite differences
NASA Astrophysics Data System (ADS)
Pekkilä, Johannes; Väisälä, Miikka S.; Käpylä, Maarit J.; Käpylä, Petri J.; Anjum, Omer
2017-08-01
We focus on implementing and optimizing a sixth-order finite-difference solver for simulating compressible fluids on a GPU using third-order Runge-Kutta integration. Since graphics processing units perform well in data-parallel tasks, this makes them an attractive platform for fluid simulation. However, high-order stencil computation is memory-intensive with respect to both main memory and the caches of the GPU. We present two approaches for simulating compressible fluids using 55-point and 19-point stencils. We seek to reduce the requirements for memory bandwidth and cache size in our methods by using cache blocking and decomposing a latency-bound kernel into several bandwidth-bound kernels. Our fastest implementation is bandwidth-bound and integrates 343 million grid points per second on a Tesla K40t GPU, achieving a 3.6× speedup over a comparable hydrodynamics solver benchmarked on two Intel Xeon E5-2690v3 processors. Our alternative GPU implementation is latency-bound and achieves the rate of 168 million updates per second.
NASA Astrophysics Data System (ADS)
Chen, Kewei; Zhan, Hongbin
2018-06-01
Reactive solute transport in a single fracture bounded by upper and lower matrixes is a classical problem that captures the dominant factors affecting transport behavior beyond the pore scale. A parallel fracture-matrix system, which considers the interaction among multiple parallel fractures, is an extension of the single fracture-matrix system. Existing analytical or semi-analytical solutions for solute transport in a parallel fracture-matrix system simplify the problem to various degrees, such as neglecting the transverse dispersion in the fracture and/or the longitudinal diffusion in the matrix. The difficulty of solving the full two-dimensional (2-D) problem lies in the calculation of the mass exchange between the fracture and the matrix. In this study, we propose an innovative Green's function approach to address 2-D reactive solute transport in a parallel fracture-matrix system. The flux at the interface is calculated numerically. It is found that the transverse dispersion in the fracture can be safely neglected due to the small scale of the fracture aperture. However, neglecting the longitudinal matrix diffusion would overestimate the concentration profile near the solute entrance face and underestimate the concentration profile at the far side. The error caused by neglecting the longitudinal matrix diffusion decreases with increasing Peclet number, and the longitudinal matrix diffusion does not have an obvious influence on the concentration profile in the long term. The developed model is applied to a dense non-aqueous phase liquid (DNAPL) contamination field case in the New Haven Arkose of Connecticut, USA, to estimate trichloroethylene (TCE) behavior over 40 years. The ratio of TCE mass stored in the matrix to the injected TCE mass increases above 90% in less than 10 years.
Bader, R; Bettio, A; Beck-Sickinger, A G; Zerbe, O
2001-01-12
The biological importance of neuropeptide Y (NPY) has motivated a number of investigations of its solution structure over the last 20 years. Here, we focus on comparing the structure and dynamics of NPY free in solution with those of NPY bound to a membrane mimetic, dodecylphosphocholine (DPC) micelles, as studied by 2D (1)H NMR spectroscopy. Both free in solution and in the micelle-bound form, the N-terminal segment (Tyr1-Glu15) is shown to extend like a flexible tail. This is not compatible with the PP-fold model for NPY, which postulates backfolding of the flexible N terminus onto the C-terminal helix. The correlation time (tau(c)) of NPY in aqueous solution, 5.5 (+/-1.0) ns at 32 degrees C, is only consistent with its existence in a dimeric form. Exchange contributions that especially enhance transverse relaxation rates (R(2)) of residues located on one side of the C-terminal helix are presumed to originate from dimerization of the NPY molecule. The dimerization interface was directly probed by examining (15)N-labeled NPY/spin-labeled [TOAC34]-[(14)N]-NPY heterodimers and revealed both parallel and anti-parallel alignment of the helices. The NMR-derived three-dimensional structure of micelle-bound NPY at 37 degrees C and pH 6.0 is similar but not identical to that free in solution. The final set of 17 lowest-energy DYANA structures is particularly well defined in the region of residues 21-31, with a mean pairwise RMSD of 0.23 A for the backbone heavy atoms and 0.85 A for all heavy atoms. The combination of NMR relaxation data and CD measurements clearly demonstrates that the alpha-helical region Ala18-Thr32 is more stable, and the C-terminal tetrapeptide becomes structured only in the presence of the phosphocholine micelles. The position of NPY relative to the DPC micelle surface was probed by adding micelle-integrating spin labels. Together with information from (1)H,(2)H exchange rates, we conclude that the interaction of NPY with the micelle is promoted by the amphiphilic alpha-helical segment of residues Tyr21-Thr32. NPY is located at the lipid-water interface with its C-terminal helix parallel to the membrane surface and penetrates the hydrophobic interior only via insertion of a few long aliphatic or aromatic side-chains. From these data we can demonstrate that the dimer interface of neuropeptide Y is similar to the interface of the monomer binding to DPC micelles. We speculate that binding of the NPY monomer to the membrane is an essential key step preceding receptor binding, thereby pre-orienting the C-terminal tetrapeptide and possibly inducing the bio-active conformation. Copyright 2001 Academic Press.
A global parallel model based design of experiments method to minimize model output uncertainty.
Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E
2012-03-01
Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
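A toy sketch of the underlying idea: sample a bounded parameter space without a point estimate, propagate the samples through the model, and pick the next design point where the predicted output spread is largest relative to the assumed experimental noise. The model, parameter bounds, noise level, and selection rule below are illustrative assumptions, not the paper's sparse-grid and scenario-tree algorithm.

```python
import numpy as np

# Toy model: y(t; k1, k2) = exp(-k1*t) + 0.5*exp(-k2*t) over a bounded parameter box.
rng = np.random.default_rng(1)
k1 = rng.uniform(0.1, 2.0, 500)          # bounded uncertain parameters, no initial estimate
k2 = rng.uniform(0.1, 2.0, 500)
t_candidates = np.linspace(0.1, 10.0, 50)

def simulate(t):
    """Model responses for every parameter sample at every candidate time point."""
    return np.exp(-np.outer(k1, t)) + 0.5*np.exp(-np.outer(k2, t))   # (samples, times)

Y = simulate(t_candidates)
spread = Y.max(axis=0) - Y.min(axis=0)    # predicted output uncertainty per candidate
noise = 0.05                              # assumed experimentally detectable limit
best = int(np.argmax(spread))
print(f"measure first at t = {t_candidates[best]:.2f} "
      f"(predicted spread {spread[best]:.3f} vs noise {noise})")
```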
Algorithm implementation on the Navier-Stokes computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krist, S.E.; Zang, T.A.
1987-03-01
The Navier-Stokes Computer is a multi-purpose parallel-processing supercomputer which is currently under development at Princeton University. It consists of multiple local memory parallel processors, called Nodes, which are interconnected in a hypercube network. Details of the procedures involved in implementing an algorithm on the Navier-Stokes computer are presented. The particular finite difference algorithm considered in this analysis was developed for simulation of laminar-turbulent transition in wall bounded shear flows. Projected timing results for implementing this algorithm indicate that operation rates in excess of 42 GFLOPS are feasible on a 128 Node machine.
Algorithm implementation on the Navier-Stokes computer
NASA Technical Reports Server (NTRS)
Krist, Steven E.; Zang, Thomas A.
1987-01-01
The Navier-Stokes Computer is a multi-purpose parallel-processing supercomputer which is currently under development at Princeton University. It consists of multiple local memory parallel processors, called Nodes, which are interconnected in a hypercube network. Details of the procedures involved in implementing an algorithm on the Navier-Stokes computer are presented. The particular finite difference algorithm considered in this analysis was developed for simulation of laminar-turbulent transition in wall bounded shear flows. Projected timing results for implementing this algorithm indicate that operation rates in excess of 42 GFLOPS are feasible on a 128 Node machine.
A parallel-machine scheduling problem with two competing agents
NASA Astrophysics Data System (ADS)
Lee, Wen-Chiung; Chung, Yu-Hsiang; Wang, Jen-Ya
2017-06-01
Scheduling with two competing agents has become popular in recent years. Most of the research has focused on single-machine problems. This article considers a parallel-machine problem, the objective of which is to minimize the total completion time of jobs from the first agent given that the maximum tardiness of jobs from the second agent cannot exceed an upper bound. The NP-hardness of this problem is also examined. A genetic algorithm equipped with local search is proposed to search for the near-optimal solution. Computational experiments are conducted to evaluate the proposed genetic algorithm.
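A hedged sketch of the evaluation step such a search would need: assign jobs to machines, compute the total completion time of the first agent's jobs, and check that the maximum tardiness of the second agent's jobs stays within the bound. The job data, machine count, tardiness bound, per-machine SPT ordering, and the random-search outer loop are illustrative stand-ins for the paper's genetic algorithm with local search.

```python
import random

def evaluate(assignment, jobs, n_machines, tardiness_bound):
    """assignment[i] = machine of job i; each job = (proc_time, agent, due_date).
    Returns (total completion time of agent-1 jobs, feasible?) with SPT order per machine."""
    total_c1, max_tardiness = 0.0, 0.0
    for m in range(n_machines):
        on_m = sorted((jobs[i] for i in range(len(jobs)) if assignment[i] == m),
                      key=lambda j: j[0])          # simple SPT heuristic, not necessarily optimal
        t = 0.0
        for p, agent, due in on_m:
            t += p
            if agent == 1:
                total_c1 += t
            else:
                max_tardiness = max(max_tardiness, max(0.0, t - due))
    return total_c1, max_tardiness <= tardiness_bound

# Tiny random-search stand-in for the GA's outer loop.
random.seed(0)
jobs = [(random.randint(1, 9), random.choice([1, 2]), random.randint(5, 20)) for _ in range(12)]
best = None
for _ in range(2000):
    a = [random.randrange(3) for _ in jobs]
    obj, ok = evaluate(a, jobs, 3, tardiness_bound=4)
    if ok and (best is None or obj < best[0]):
        best = (obj, a)
print("best feasible total completion time (agent 1):", best[0] if best else "none found")
```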
NASA Astrophysics Data System (ADS)
Naumenko, Mikhail; Samarin, Viacheslav
2018-02-01
A modern parallel computing algorithm has been applied to the solution of the few-body problem. The approach is based on Feynman's continual integrals method implemented in the C++ programming language using NVIDIA CUDA technology. A wide range of 3-body and 4-body bound systems has been considered, including nuclei described as consisting of protons and neutrons (e.g., 3,4He) and nuclei described as consisting of clusters and nucleons (e.g., 6He). The correctness of the results was checked by comparison with the exactly solvable 4-body oscillatory system and with experimental data.
Analysis of cell flux in the parallel plate flow chamber: implications for cell capture studies.
Munn, L L; Melder, R J; Jain, R K
1994-01-01
The parallel plate flow chamber provides a controlled environment for determinations of the shear stress at which cells in suspension can bind to endothelial cell monolayers. By decreasing the flow rate of cell-containing media over the monolayer and assessing the number of cells bound at each wall shear stress, the relationship between shear force and binding efficiency can be determined. The rate of binding should depend on the delivery of cells to the surface as well as the intrinsic cell-surface interactions; thus, only if the cell flux to the surface is known can the resulting binding curves be interpreted correctly. We present the development and validation of a mathematical model based on the sedimentation rate and velocity profile in the chamber for the delivery of cells from a flowing suspension to the chamber surface. Our results show that the flux depends on the bulk cell concentration, the distance from the entrance point, and the flow rate of the cell-containing medium. The model was then used in a normalization procedure for experiments in which T cells attach to TNF-alpha-stimulated HUVEC monolayers, showing that a threshold for adhesion occurs at a shear stress of about 3 dyn/cm2. PMID:7948702
Kametani, Shunsuke; Tasei, Yugo; Nishimura, Akio; Asakura, Tetsuo
2017-08-09
Polyalanine (polyA) sequences are well known as the simplest sequence that naturally forms anti-parallel β-sheets and constitute a key element in the structure of spider and wild silkworm silk fibers. We have carried out a systematic analysis of the packing of anti-parallel β-sheets for (Ala)n, n = 5, 6, 7 and 12, using primarily 13C solid-state NMR and MD simulation. HFIP and TFA are frequently used as the dope solvents for recombinant silks, and polyA was solidified from both HFIP and TFA solutions by drying. An analysis of Ala Cβ peaks in the 13C CP/MAS NMR spectra indicated that polyA from HFIP was mainly rectangular but polyA from TFA was mainly staggered. The transition from the rectangular to the staggered arrangement in (Ala)6 was observed for the first time from the change in the Ala Cβ peak through heat treatment at 200 °C for 4 h. The removal of the bound water was confirmed by thermal analysis. This transition could be reproduced by MD simulation of (Ala)6 molecules at 200 °C after removal of the bound water molecules. In this way, the origin of the stability of the different packing arrangements of polyA was clarified.
Parallel scalability of Hartree-Fock calculations
NASA Astrophysics Data System (ADS)
Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.
2015-03-01
Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree-Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
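A compact NumPy sketch of one member of the purification family mentioned above (the McWeeny iteration D <- 3D^2 - 2D^3), shown dense and serial; the random test matrix, the chemical potential taken from an eigendecomposition, and the convergence tolerance are demo conveniences, and no sparsity or parallel distribution is illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_occ = 40, 15
A = rng.standard_normal((n, n))
H = (A + A.T) / 2.0                       # stand-in Fock/Hamiltonian matrix

w = np.linalg.eigvalsh(H)
mu = 0.5 * (w[n_occ - 1] + w[n_occ])      # chemical potential between HOMO and LUMO (demo only)
beta = np.abs(w - mu).max()

# Initial guess with eigenvalues in [0, 1]; occupied states mapped above 0.5.
D = 0.5 * (np.eye(n) - (H - mu * np.eye(n)) / beta)

for it in range(60):                      # McWeeny purification: D <- 3D^2 - 2D^3
    D2 = D @ D
    D = 3.0 * D2 - 2.0 * D2 @ D
    if np.linalg.norm(D @ D - D) < 1e-10: # idempotency as the convergence measure
        break

print("iterations:", it, " trace(D) =", round(float(np.trace(D)), 6), " target =", n_occ)
```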
Børretzen, P; Salbu, B
2000-10-30
To assess the impact of radionuclides entering the marine environment from dumped nuclear waste, information on the physico-chemical forms of radionuclides and their mobility in seawater-sediment systems is essential. Due to interactions with sediment components, sediments may act as a sink, reducing the mobility of radionuclides in seawater. Due to remobilisation, however, contaminated sediments may also act as a potential source of radionuclides to the water phase. In the present work, time-dependent interactions of low molecular mass (LMM, i.e. species < 10 kDa) radionuclides with sediments from the Stepovogo Fjord, Novaya Zemlya, and their influence on the distribution coefficients (Kd values) have been studied in tracer experiments using 109Cd2+ and 60Co2+ as gamma tracers. Sorption of the LMM tracers occurred rapidly and the estimated equilibrium Kd(eq)-values for 109Cd and 60Co were 500 and 20000 ml/g, respectively. Remobilisation of 109Cd and 60Co from contaminated sediment fractions as a function of contact time was studied using sequential extraction procedures. Due to redistribution, the reversibly bound fraction of the gamma tracers decreased with time, while the irreversibly (or slowly reversibly) associated fraction of the gamma tracers increased. Two different three-compartment models, one consecutive and one parallel, were applied to describe the time-dependent interaction of the LMM tracers with operationally defined reversible and irreversible (or slowly reversible) sediment fractions. The interactions between these fractions were described using first order differential equations. By fitting the models to the experimental data, apparent rate constants were obtained using numerical optimisation software. The model optimisations showed that the interactions of LMM 60Co were well described by the consecutive model, while the parallel model was more suitable for describing the interactions of LMM 109Cd with the sediments, when the sums of squared residuals were compared. The rate of sorption of the irreversibly (or slowly reversibly) associated fraction was greater than the rate of desorption of the reversibly bound fractions (i.e. k3 > k2) for both radionuclides. Thus, the Novaya Zemlya sediments are expected to act as a sink for the radionuclides under oxic conditions, and transport to the water phase should mainly be attributed to resuspended particles.
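A hedged sketch of one possible reading of the two three-compartment structures (seawater, reversibly bound, irreversibly bound) coupled by first-order kinetics; the rate constants and the exact placement of the irreversible pathway are illustrative assumptions, not the values or definitions fitted in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rate constants (1/h); the fitted values are in the paper, not reproduced here.
k1, k2, k3 = 0.8, 0.1, 0.3

def consecutive(t, y):
    """water <-> reversibly bound -> irreversibly bound (first-order, desorption k2)."""
    w, rev, irr = y
    return [-k1*w + k2*rev,
            k1*w - (k2 + k3)*rev,
            k3*rev]

def parallel_model(t, y):
    """water <-> reversibly bound and water -> irreversibly bound, in parallel."""
    w, rev, irr = y
    return [-(k1 + k3)*w + k2*rev,
            k1*w - k2*rev,
            k3*w]

y0, t = [1.0, 0.0, 0.0], np.linspace(0.0, 72.0, 200)
for name, f in [("consecutive", consecutive), ("parallel", parallel_model)]:
    sol = solve_ivp(f, (t[0], t[-1]), y0, t_eval=t)
    w, rev, irr = sol.y[:, -1]
    print(f"{name:12s} after 72 h: water={w:.2f} reversible={rev:.2f} irreversible={irr:.2f}")
```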
46 CFR 42.30-30 - Enclosed seas.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 2 2011-10-01 2011-10-01 false Enclosed seas. 42.30-30 Section 42.30-30 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) LOAD LINES DOMESTIC AND FOREIGN VOYAGES BY SEA Zones, Areas, and Seasonal Periods § 42.30-30 Enclosed seas. (a) Baltic Sea. This sea bounded by the parallel...
46 CFR 42.30-30 - Enclosed seas.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 2 2010-10-01 2010-10-01 false Enclosed seas. 42.30-30 Section 42.30-30 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) LOAD LINES DOMESTIC AND FOREIGN VOYAGES BY SEA Zones, Areas, and Seasonal Periods § 42.30-30 Enclosed seas. (a) Baltic Sea. This sea bounded by the parallel...
46 CFR 42.30-30 - Enclosed seas.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 2 2012-10-01 2012-10-01 false Enclosed seas. 42.30-30 Section 42.30-30 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) LOAD LINES DOMESTIC AND FOREIGN VOYAGES BY SEA Zones, Areas, and Seasonal Periods § 42.30-30 Enclosed seas. (a) Baltic Sea. This sea bounded by the parallel...
46 CFR 42.30-30 - Enclosed seas.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 2 2014-10-01 2014-10-01 false Enclosed seas. 42.30-30 Section 42.30-30 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) LOAD LINES DOMESTIC AND FOREIGN VOYAGES BY SEA Zones, Areas, and Seasonal Periods § 42.30-30 Enclosed seas. (a) Baltic Sea. This sea bounded by the parallel...
46 CFR 42.30-30 - Enclosed seas.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 2 2013-10-01 2013-10-01 false Enclosed seas. 42.30-30 Section 42.30-30 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) LOAD LINES DOMESTIC AND FOREIGN VOYAGES BY SEA Zones, Areas, and Seasonal Periods § 42.30-30 Enclosed seas. (a) Baltic Sea. This sea bounded by the parallel...
Yokoyama, Masaru; Nomaguchi, Masako; Doi, Naoya; Kanda, Tadahito; Adachi, Akio; Sato, Hironori
2016-01-01
Variable V1/V2 and V3 loops on the human immunodeficiency virus type 1 (HIV-1) envelope gp120 core play key roles in modulating viral competence to recognize two infection receptors, CD4 and chemokine receptors. However, the molecular bases for the modulation largely remain unclear. To address these issues, we constructed structural models for a full-length gp120 in CD4-free and -bound states. The models showed topologies of the gp120 surface loops that agree with those in reported structural data. Molecular dynamics simulation showed that in the unliganded state, the V1/V2 loop settled into a thermodynamically stable arrangement near the V3 loop for conformational masking of the V3 tip, a potent neutralization epitope. In the CD4-bound state, however, the V1/V2 loop was rearranged near the bound CD4 to support CD4 binding. In parallel, cell-based adaptation in the absence of anti-viral antibody pressures led to the identification of amino acid substitutions that individually enhance viral entry and growth efficiencies in association with reduced sensitivity to the CCR5 antagonist TAK-779. Notably, all these substitutions were positioned on the receptor-binding surfaces in the V1/V2 or V3 loop. In silico structural studies predicted some physical changes in gp120 caused by the substitutions that alter viral replication phenotypes. These data suggest that the V1/V2 loop is critical for creating a gp120 structure that masks the co-receptor binding site in a manner compatible with maintenance of viral infectivity, and for tuning a functional balance of gp120 between immune escape ability and infectivity to optimize HIV-1 replication fitness. PMID:26903989
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castellana, Vito G.; Tumeo, Antonino; Ferrandi, Fabrizio
Emerging applications such as data mining, bioinformatics, knowledge discovery, and social network analysis are irregular. They use data structures based on pointers or linked lists, such as graphs, unbalanced trees or unstructured grids, which generate unpredictable memory accesses. These data structures are usually large but difficult to partition. These applications are mostly memory-bandwidth bound and have high synchronization intensity. However, they also have large amounts of inherent dynamic parallelism, because they potentially perform a task for each one of the elements they are exploring. Several efforts are looking at accelerating these applications on hybrid architectures, which integrate general purpose processors with reconfigurable devices. Some solutions, which demonstrated significant speedups, include custom hand-tuned accelerators or even full processor architectures on the reconfigurable logic. In this paper we present an approach for the automatic synthesis of accelerators from C, targeted at irregular applications. In contrast to typical High Level Synthesis paradigms, which construct a centralized Finite State Machine, our approach generates dynamically scheduled hardware components. While parallelism exploitation in typical HLS-generated accelerators is usually bound within a single execution flow, our solution allows concurrently running multiple execution flows, thus also exploiting the coarser grain task parallelism of irregular applications. Our approach supports multiple, multi-ported and distributed memories, and atomic memory operations. Its main objective is parallelizing as many memory operations as possible, independently from their execution time, to maximize the memory bandwidth utilization. This significantly differs from current HLS flows, which usually consider a single memory port and require precise scheduling of memory operations. A key innovation of our approach is the generation of a memory interface controller, which dynamically maps concurrent memory accesses to multiple ports. We present a case study on a typical irregular kernel, graph Breadth First Search (BFS), exploring different tradeoffs in terms of parallelism and number of memories.
MER : from landing to six wheels on Mars ... twice
NASA Technical Reports Server (NTRS)
Krajewski, Joel; Burke, Kevin; Lewicki, Chris; Limonadi, Daniel; Trebi-Ollennu, Ashitey; Voorhees, Chris
2005-01-01
Application of the Pathfinder landing system design to enclose the much larger Mars Exploration Rover required a variety of Rover deployments to achieve the surface driving configuration. The project schedule demanded that software design, engineering model testing, and flight hardware build be accomplished in parallel. This challenge was met through (a) bounding unknown environments against which to design and test, (b) early mechanical prototype testing, (c) constraining the scope of on-board autonomy to survival-critical deployments, (d) executing a balance of nominal and off-nominal test cases, (e) developing off-nominal event mitigation techniques before landing, and (f) flexible replanning in response to surprises during operations. Several specific events encountered during initial MER surface operations are discussed here.
Lazy checkpoint coordination for bounding rollback propagation
NASA Technical Reports Server (NTRS)
Wang, Yi-Min; Fuchs, W. Kent
1992-01-01
Independent checkpointing allows maximum process autonomy but suffers from potential domino effects. Coordinated checkpointing eliminates the domino effect by sacrificing a certain degree of process autonomy. In this paper, we propose the technique of lazy checkpoint coordination which preserves process autonomy while employing communication-induced checkpoint coordination for bounding rollback propagation. The introduction of the notion of laziness allows a flexible trade-off between the cost for checkpoint coordination and the average rollback distance. Worst-case overhead analysis provides a means for estimating the extra checkpoint overhead. Communication trace-driven simulation for several parallel programs is used to evaluate the benefits of the proposed scheme for real applications.
Design of linear quadratic regulators with eigenvalue placement in a specified region
NASA Technical Reports Server (NTRS)
Shieh, Leang-San; Zhen, Liu; Coleman, Norman P.
1990-01-01
Two linear quadratic regulators are developed for placing the closed-loop poles of linear multivariable continuous-time systems within the common region of an open sector, bounded by lines inclined at +/- pi/2k (for a specified integer k not less than 1) from the negative real axis, and the left-hand side of a line parallel to the imaginary axis in the complex s-plane, and simultaneously minimizing a quadratic performance index. The design procedure mainly involves the solution of either Liapunov equations or Riccati equations. The general expression for finding the lower bound of a constant gain gamma is also developed.
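A related but simpler construction, shown as a hedged sketch: the classical "prescribed degree of stability" LQR, which solves the Riccati equation for the shifted system (A + alpha*I, B) so that all closed-loop poles lie to the left of the vertical line Re(s) = -alpha. It illustrates only the vertical-line part of the region described above, not the ±π/2k sector constraint, and the plant matrices, weights, and alpha are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative plant and weights.
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])     # open-loop poles at +1 and -2 (unstable)
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
alpha = 1.5                     # required distance of closed-loop poles from the imaginary axis

# Prescribed degree of stability: design the LQR for the shifted system (A + alpha*I, B).
P = solve_continuous_are(A + alpha*np.eye(2), B, Q, R)
K = np.linalg.solve(R, B.T @ P) # state feedback u = -K x

poles = np.linalg.eigvals(A - B @ K)
print("closed-loop poles:", poles)          # real parts lie to the left of -alpha
assert np.all(poles.real < -alpha + 1e-9)
```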
Pharmacokinetics and repolarization effects of intravenous and transdermal granisetron.
Mason, Jay W; Selness, Daniel S; Moon, Thomas E; O'Mahony, Bridget; Donachie, Peter; Howell, Julian
2012-05-15
The need for greater clarity about the effects of 5-HT(3) receptor antagonists on cardiac repolarization is apparent in the changing product labeling across this therapeutic class. This study assessed the repolarization effects of granisetron, a 5-HT(3) receptor antagonist antiemetic, administered intravenously and by a granisetron transdermal system (GTDS). In a parallel four-arm study, healthy subjects were randomized to receive intravenous granisetron, GTDS, placebo, or oral moxifloxacin (active control). The primary endpoint was difference in change from baseline in mean Fridericia-corrected QT interval (QTcF) between GTDS and placebo (ddQTcF) on days 3 and 5. A total of 240 subjects were enrolled, 60 in each group. Adequate sensitivity for detection of QTc change was shown by a 5.75 ms lower bound of the 90% confidence interval (CI) for moxifloxacin versus placebo at 2 hours postdose on day 3. Day 3 ddQTcF values varied between 0.2 and 1.9 ms for GTDS (maximum upper bound of 90% CI, 6.88 ms), between -1.2 and 1.6 ms for i.v. granisetron (maximum upper bound of 90% CI, 5.86 ms), and between -3.4 and 4.7 ms for moxifloxacin (maximum upper bound of 90% CI, 13.45 ms). Day 5 findings were similar. Pharmacokinetic-ddQTcF modeling showed a minimally positive slope of 0.157 ms/(ng/mL), but a very low correlation (r = 0.090). GTDS was not associated with statistically or clinically significant effects on QTcF or other electrocardiographic variables. This study provides useful clarification on the effect of granisetron delivered by GTDS on cardiac repolarization. ©2012 AACR.
Elements of radiative interactions in gaseous systems
NASA Technical Reports Server (NTRS)
Tiwari, Surendra N.
1991-01-01
Basic formulations, analyses, and numerical procedures are presented to study radiative interactions in gray as well as nongray gases under different physical and flow conditions. After preliminary fluid-dynamical considerations, essential governing equations for radiative transport are presented that are applicable under local and nonlocal thermodynamic equilibrium conditions. Auxiliary relations for relaxation times and the spectral absorption model are also provided. For specific applications, several simple gaseous systems are analyzed. The first system considered consists of a gas bounded by two parallel plates having the same temperature. For this system, both vibrational nonequilibrium effects and radiation-conduction interactions are studied. The second system consists of fully developed laminar flow and heat transfer in a parallel plate duct under the boundary condition of a uniform surface heat flux. For this system, effects of gray surface emittance are studied. With the single exception of a circular geometry, the third system is identical to the second system. Here, the influence of nongray walls is also studied, and a correlation between the parallel plate and circular tube results is presented. The particular gases selected are CO, CO2, H2O, CH4, N2O, NH3, OH, and NO. The temperature and pressure ranges considered are 300 to 2000 K and 0.1 to 100 atmospheres, respectively. Illustrative results obtained for different cases are discussed and some specific conclusions are provided.
On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms
He, Li; Zheng, Hao; Wang, Lei
2017-01-01
Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering place high demands on the computing power of the hardware platform. Parallel computing is a common solution to meet this demand, and the General Purpose Graphics Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, incremental clustering algorithms face a dilemma between clustering accuracy and parallelism when powered by GPGPUs. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering, such as evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity; it also analyzes the upper and lower bounds of different-to-same mis-affiliation, where fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity, where smaller work-depth means superior parallelism. Through the proofs, we conclude that the accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to the granularity; these contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm. Experimental results verified the theoretical conclusions. PMID:29123546
NASA Astrophysics Data System (ADS)
Monakhov, A. A.; Chernyavski, V. M.; Shtemler, Yu.
2013-09-01
Bounds of cavitation inception are experimentally determined in a creeping flow between eccentric cylinders, the inner one being static and the outer rotating at a constant angular velocity, Ω. The geometric configuration is additionally specified by a small minimum gap between the cylinders, H, as compared with the radii of the inner and outer cylinders. For some values of H and Ω, cavitation bubbles are observed, which collect on the surface of the inner cylinder and are evenly distributed along the line parallel to its axis near the downstream minimum-gap position. Cavitation occurs for parameters {H,Ω} within a region bounded on the right by the cavitation inception curve, which passes through the origin of the plane and cannot exceed the asymptotic threshold value of the minimum gap, Ha, in whose vicinity cavitation may occur at H < Ha only for high angular rotation velocities.
Gate tunable parallel double quantum dots in InAs double-nanowire devices
NASA Astrophysics Data System (ADS)
Baba, S.; Matsuo, S.; Kamata, H.; Deacon, R. S.; Oiwa, A.; Li, K.; Jeppesen, S.; Samuelson, L.; Xu, H. Q.; Tarucha, S.
2017-12-01
We report fabrication and characterization of InAs nanowire devices with two closely placed parallel nanowires. The fabrication process we develop includes selective deposition of the nanowires with micron scale alignment onto predefined finger bottom gates using a polymer transfer technique. By tuning the double nanowire with the finger bottom gates, we observed the formation of parallel double quantum dots with one quantum dot in each nanowire bound by the normal metal contact edges. We report the gate tunability of the charge states in individual dots as well as the inter-dot electrostatic coupling. In addition, we fabricate a device with separate normal metal contacts and a common superconducting contact to the two parallel wires and confirm the dot formation in each wire from comparison of the transport properties and a superconducting proximity gap feature for the respective wires. With the fabrication techniques established in this study, devices can be realized for more advanced experiments on Cooper-pair splitting, generation of Parafermions, and so on.
New sample cell configuration for wide-frequency dielectric spectroscopy: DC to radio frequencies.
Nakanishi, Masahiro; Sasaki, Yasutaka; Nozaki, Ryusuke
2010-12-01
A new configuration for the sample cell to be used in broadband dielectric spectroscopy is presented. A coaxial structure with a parallel plate capacitor (outward parallel plate cell: OPPC) has made it possible to extend the frequency range significantly compared with that of the conventional configuration. In the proposed configuration, stray inductance is significantly decreased; consequently, the upper bound of the frequency range is improved by two orders of magnitude over the 1 MHz upper limit of a conventional parallel plate capacitor. Furthermore, the value of capacitance is kept high by using a parallel plate configuration. Therefore, the precision of the capacitance measurement in the lower frequency range remains sufficiently high. Finally, OPPC can cover a wide frequency range (100 Hz-1 GHz) with an appropriate admittance measuring apparatus such as an impedance or network analyzer. The OPPC and the conventional dielectric cell are compared by examining the frequency dependence of the complex permittivity for several polar liquids and polymeric films.
3-D Numerical Modelling of Oblique Continental Collisions with ASPECT
NASA Astrophysics Data System (ADS)
Karatun, L.; Pysklywec, R.
2017-12-01
Among the fundamental types of tectonic plate boundaries, continent-continent collision is the least well understood. Deformation of the upper and middle crustal layers can be inferred from surface structures and geophysical imaging, but the fate of lower crustal rocks and mantle lithosphere is not well resolved. Previous research suggests that shortening of mantle lithosphere may generally occur by either: 1) distributed thickening with the formation of a Rayleigh-Taylor (RT) type instability (possibly accompanied by lithospheric folding); or 2) plate-like subduction, which can be one- or two-sided, with or without delamination and slab break-off; a combination of both could be taking place too. 3-D features of orogens, such as along-trench material transfer and bounding subduction zones, can influence the evolution of the collision zone significantly. The current study was inspired by the South Island of New Zealand - a young collision system where a block of continental crust is being shortened by the relative Australian-Pacific plate motion. The collision segment of the plate boundary is relatively small (~800 km) and is bounded by oppositely verging subduction zones to the north and south. Here, we present results of 3-D forward numerical modelling of continental collision to investigate some of these processes. To conduct the simulations, we used ASPECT - a highly parallel community-developed code based on the Finite Element method. The setup for three different sets of models featured 2-D vertical across-strike models, 3-D models with periodic front and back walls, and 3-D models with open front and back walls, with velocities prescribed on the left and right faces. We explored the importance of the convergent velocity, the strike-slip velocity and their ratio, which defines the resulting velocity direction relative to the plate boundary (obliquity). We found that higher strike-slip motion promotes strain localization, weakens the lithosphere close to the plate boundary and pushes the balance towards RT instability. Incorporation of the bounding subduction zones caused a large amount of material to be pulled out through the sides of the model and into the subduction channel, with slab tear occurring at high obliquity values.
Real-time million-synapse simulation of rat barrel cortex.
Sharp, Thomas; Petersen, Rasmus; Furber, Steve
2014-01-01
Simulations of neural circuits are bounded in scale and speed by available computing resources, and particularly by the differences in parallelism and communication patterns between the brain and high-performance computers. SpiNNaker is a computer architecture designed to address this problem by emulating the structure and function of neural tissue, using very many low-power processors and an interprocessor communication mechanism inspired by axonal arbors. Here we demonstrate that thousand-processor SpiNNaker prototypes can simulate models of the rodent barrel system comprising 50,000 neurons and 50 million synapses. We use the PyNN library to specify models, and the intrinsic features of Python to control experimental procedures and analysis. The models reproduce known thalamocortical response transformations, exhibit known, balanced dynamics of excitation and inhibition, and show a spatiotemporal spread of activity through the superficial cortical layers. These demonstrations are a significant step toward tractable simulations of entire cortical areas on the million-processor SpiNNaker machines in development.
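A minimal PyNN-style sketch of how a model is specified through that library; the backend import, population sizes, cell model, and connectivity below are illustrative assumptions for a toy two-population network, not the barrel-cortex model itself, and running on SpiNNaker requires hardware-specific setup not shown here.

```python
# Minimal PyNN-style sketch (backend module and parameters are illustrative).
import pyNN.spiNNaker as sim          # on other simulators, e.g. import pyNN.nest as sim

sim.setup(timestep=1.0)               # ms

thalamus = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0), label="thalamic input")
layer4   = sim.Population(400, sim.IF_curr_exp(), label="cortical layer 4")

sim.Projection(thalamus, layer4,
               sim.FixedProbabilityConnector(p_connect=0.1),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0),
               receptor_type="excitatory")

layer4.record("spikes")
sim.run(1000.0)                       # ms
spikes = layer4.get_data("spikes")    # Neo data structure with the recorded spike trains
sim.end()
```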
Real-time million-synapse simulation of rat barrel cortex
Sharp, Thomas; Petersen, Rasmus; Furber, Steve
2014-01-01
Simulations of neural circuits are bounded in scale and speed by available computing resources, and particularly by the differences in parallelism and communication patterns between the brain and high-performance computers. SpiNNaker is a computer architecture designed to address this problem by emulating the structure and function of neural tissue, using very many low-power processors and an interprocessor communication mechanism inspired by axonal arbors. Here we demonstrate that thousand-processor SpiNNaker prototypes can simulate models of the rodent barrel system comprising 50,000 neurons and 50 million synapses. We use the PyNN library to specify models, and the intrinsic features of Python to control experimental procedures and analysis. The models reproduce known thalamocortical response transformations, exhibit known, balanced dynamics of excitation and inhibition, and show a spatiotemporal spread of activity through the superficial cortical layers. These demonstrations are a significant step toward tractable simulations of entire cortical areas on the million-processor SpiNNaker machines in development. PMID:24910593
ERIC Educational Resources Information Center
Kubota, Ryuko
2016-01-01
In applied linguistics and language education, an increased focus has been placed on plurality and hybridity to challenge monolingualism, the native speaker norm, and the modernist view of language and language use as unitary and bounded. The multi/plural turn parallels postcolonial theory in that they both support hybridity and fluidity while…
Sleep Benefits in Parallel Implicit and Explicit Measures of Episodic Memory
ERIC Educational Resources Information Center
Weber, Frederik D.; Wang, Jing-Yi; Born, Jan; Inostroza, Marion
2014-01-01
Research in rats using preferences during exploration as a measure of memory has indicated that sleep is important for the consolidation of episodic-like memory, i.e., memory for an event bound into specific spatio-temporal context. How these findings relate to human episodic memory is unclear. We used spontaneous preferences during visual…
NASA Astrophysics Data System (ADS)
Davis, Scott; Anderson, David T.; Farrell, John T., Jr.; Nesbitt, David J.
1996-06-01
High resolution near infrared spectra of the two high frequency intramolecular modes in (DF)2 have been characterized using a slit-jet infrared spectrometer. In total, four pairs of vibration-rotation-tunneling (VRT) bands are observed, corresponding to K=0 and K=1 excitation of both the ν2 ("bound") and ν1 ("free") intramolecular DF stretching modes. Analysis of the rotationally resolved spectra provides vibrational origins, rotational constants, tunneling splittings and upper state predissociation lifetimes for all four states. The rotational constants indicate that the deuterated hydrogen bond contracts and bends upon intramolecular excitation, analogous to what has been observed for (HF)2. The isotope and K dependence of tunneling splittings for (HF)2 and (DF)2 in both intramolecular modes is interpreted in terms of a semiclassical 1-D tunneling model. High resolution line shape measurements reveal vibrational predissociation broadening in (DF)2: 56(2) and 3(2) MHz for the ν2 (bound) and ν1 (free) intramolecular stretching modes, respectively. This 20-fold mode specific enhancement parallels the ≥30-fold enhancement observed between analogous intramolecular modes of (HF)2, further elucidating the role of nonstatistical predissociation dynamics in such hydrogen bonded clusters.
Performances of multiprocessor multidisk architectures for continuous media storage
NASA Astrophysics Data System (ADS)
Gennart, Benoit A.; Messerli, Vincent; Hersch, Roger D.
1996-03-01
Multimedia interfaces increase the need for large image databases, capable of storing and reading streams of data with strict synchronicity and isochronicity requirements. In order to fulfill these requirements, we consider a parallel image server architecture which relies on arrays of intelligent disk nodes, each disk node being composed of one processor and one or more disks. This contribution analyzes, through bottleneck performance evaluation and simulation, the behavior of two multi-processor multi-disk architectures: a point-to-point architecture and a shared-bus architecture similar to current multiprocessor workstation architectures. We compare the two architectures on the basis of two multimedia algorithms: the compute-bound frame resizing by resampling and the data-bound disk-to-client stream transfer. The results suggest that the shared bus is a potential bottleneck despite its very high hardware throughput (400 Mbytes/s) and that an architecture with addressable local memories located close to their respective processors could partially remove this bottleneck. The point-to-point architecture is scalable and able to sustain high throughputs for simultaneous compute-bound and data-bound operations.
Singularity in the positive Hall coefficient near pre-onset temperatures in high-Tc superconductors
NASA Astrophysics Data System (ADS)
Vezzoli, G. C.; Chen, M. F.; Craver, F.; Moon, B. M.; Safari, A.; Burke, T.; Stanley, W.
1990-10-01
Hall measurements using continuous extremely slow cooling and reheating rates, as well as equilibrium point-by-point conventional techniques, reveal a clear anomaly in RH at pre-onset temperatures near Tc in polycrystalline samples of Y1Ba2Cu3O7 and Bi2Sr2Ca2Cu3O10. The anomaly has the appearance of a singularity of the Dirac-delta-function type, which parallels earlier work on La1-xSrxCuO4. Recent single crystal work on the Bi-containing high-Tc superconductor is in accord with a clear-cut anomaly. The singularity is tentatively interpreted as being associated (upon cooling) initially with the removal of positive holes from the hopping conduction system of the normal state, for example through the increased concentration of bound virtual excitons due to increased exciton and hole lifetimes at low temperature. Subsequently, the formation of Cooper pairs mediated by these centers (bound holes and/or bound excitons) may cause ionization of the bound virtual excitons, thereby re-introducing holes and electrons into the conduction system at Tc.
Bounding species distribution models
Stohlgren, T.J.; Jarnevich, C.S.; Esaias, W.E.; Morisette, J.T.
2011-01-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used. © 2011 Current Zoology.
Bounding Species Distribution Models
NASA Technical Reports Server (NTRS)
Stohlgren, Thomas J.; Jarnevich, Cahterine S.; Morisette, Jeffrey T.; Esaias, Wayne E.
2011-01-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].
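A minimal NumPy sketch of one simple reading of "bounding extrapolations to the maximum and minimum values of primary environmental predictors": clamp each predictor of the projection grid to the training-data envelope before scoring with a fitted model. The predictor names, data sizes, and the clamping-only strategy are illustrative; the fitted CART or Maxent model itself is not shown.

```python
import numpy as np

def clamp_predictors(X_new, X_train):
    """Clamp each predictor in projection data to the min-max envelope of the
    training data before feeding it to a fitted SDM (CART, Maxent, ...)."""
    lo = X_train.min(axis=0)
    hi = X_train.max(axis=0)
    return np.clip(X_new, lo, hi)

# Toy usage: clamp a projection grid that extends beyond the training conditions.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3))             # e.g. temperature, precipitation, elevation
X_map = rng.normal(scale=3.0, size=(1000, 3))   # projection grid reaching outside the envelope
X_bounded = clamp_predictors(X_map, X_train)

outside = np.mean(np.any((X_map < X_train.min(0)) | (X_map > X_train.max(0)), axis=1))
print(f"{outside:.0%} of map cells had at least one predictor outside the training envelope")
```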
Exploring L1 model space in search of conductivity bounds for the MT problem
NASA Astrophysics Data System (ADS)
Wheelock, B. D.; Parker, R. L.
2013-12-01
Geophysical inverse problems of the type encountered in electromagnetic techniques are highly non-unique. As a result, any single inverted model, though feasible, is at best inconclusive and at worst misleading. In this paper, we use modified inversion methods to establish bounds on electrical conductivity within a model of the earth. Our method consists of two steps, each making use of the 1-norm in model regularization. Both 1-norm minimization problems are framed without approximation as non-negative least-squares (NNLS) problems. First, we must identify a parsimonious set of regions within the model for which upper and lower bounds on average conductivity will be sought. This is accomplished by minimizing the 1-norm of spatial variation, which produces a model with a limited number of homogeneous regions; in fact, the number of homogeneous regions will never be greater than the number of data, regardless of the number of free parameters supplied. The second step establishes bounds for each of these regions with pairs of inversions. The new suite of inversions also uses a 1-norm penalty, but applied to the conductivity values themselves, rather than the spatial variation thereof. In the bounding step we use the 1-norm of our model parameters because it is proportional to average conductivity. For a lower bound on average conductivity, the 1-norm within a bounding region is minimized. For an upper bound on average conductivity, the 1-norm everywhere outside a bounding region is minimized. The latter minimization has the effect of concentrating conductance into the bounding region. Taken together, these bounds are a measure of the uncertainty in the associated region of our model. Starting with a blocky inverse solution is key in the selection of the bounding regions. Of course, there is a tradeoff between resolution and uncertainty: an increase in resolution (smaller bounding regions), results in greater uncertainty (wider bounds). Minimization of the 1-norm of spatial variation delivers the fewest possible regions defined by a mean conductivity, the quantity we wish to bound. Thus, these regions present a natural set for which the most narrow and discriminating bounds can be found. For illustration, we apply these techniques to synthetic magnetotelluric (MT) data sets resulting from one-dimensional (1D) earth models. In each case we find that with realistic data coverage, any single inverted model can often stray from the truth, while the computed bounds on an encompassing region contain both the inverted and the true conductivities, indicating that our measure of model uncertainty is robust. Such estimates of uncertainty for conductivity can then be translated to bounds on important petrological parameters such as mineralogy, porosity, saturation, and fluid type.
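A toy sketch of the bounding step on a deliberately simplified problem: for a linear forward operator and non-negative model parameters, the smallest and largest region-average values consistent with the data tolerance can be found with two linear programs (minimizing the average inside the region is then equivalent to minimizing its 1-norm). The forward matrix, noise tolerance, and region are illustrative; the paper's actual NNLS and 1-norm formulations and the MT forward problem are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

# Toy linear forward problem: d = G m, with m >= 0 (conductance-like parameters).
rng = np.random.default_rng(0)
n_param, n_data = 30, 10
G = rng.random((n_data, n_param))
m_true = np.zeros(n_param)
m_true[12:18] = 2.0
d = G @ m_true
tol = 0.05 * np.abs(d) + 1e-3                 # allowed per-datum misfit

region = np.arange(12, 18)                    # bounding region chosen from a blocky model
c = np.zeros(n_param)
c[region] = 1.0 / region.size                 # objective: region-average parameter value

# Misfit constraints |G m - d| <= tol expressed as two sets of linear inequalities.
A_ub = np.vstack([G, -G])
b_ub = np.concatenate([d + tol, -(d - tol)])

lower = linprog(c,  A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
upper = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print("bounds on region average:", lower.fun, -upper.fun)   # brackets the true value 2.0
```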
Protein carboxyl methylation increases in parallel with differentiation of neuroblastoma cells.
Kloog, Y; Axelrod, J; Spector, I
1983-02-01
Cells of mouse neuroblastoma clone N1E-115 in the confluent phase of growth can catalyze the formation of endogenous protein carboxyl methyl esters, using a protein carboxyl methylase and membrane-bound methyl acceptor proteins. The enzyme is localized predominantly in the cytosol of the cells and has a molecular weight of about 20,000 daltons. Treatment of the cells with dimethylsulfoxide (DMSO) or hexamethylene-bisacetamide (HMBA), agents that induce morphological and electrophysiological differentiation, results in a marked increase in protein carboxyl methylase activity. Maximal levels are reached 6-7 days after exposure to the agents, a time course that closely parallels the development of electrical excitability mechanisms in these cells. Serum deprivation also causes neurite outgrowth but does not enhance electrical excitability or enzyme activity. The capacity of membrane-bound neuroblastoma protein(s) to be carboxyl methylated is increased by the differentiation procedures that have been examined. However, the increase in methyl acceptor proteins induced by DMSO or HMBA is the largest, and its time course parallels electrophysiological differentiation. In contrast, serum deprivation induced a small increase that reached maximal levels within 24 h. The data suggest that increased protein carboxyl methylation is a developmentally regulated property of neuroblastoma cells and that at least two groups of methyl acceptor proteins are induced during differentiation: a minor group related to morphological differentiation, and a major group that may be related to ionic permeability mechanisms of the excitable membrane.
Modeling borehole microseismic and strain signals measured by a distributed fiber optic sensor
NASA Astrophysics Data System (ADS)
Mellors, R. J.; Sherman, C. S.; Ryerson, F. J.; Morris, J.; Allen, G. S.; Messerly, M. J.; Carr, T.; Kavousi, P.
2017-12-01
The advent of distributed fiber optic sensors installed in boreholes provides a new and data-rich perspective on the subsurface environment. This includes the long-term capability for vertical seismic profiles, monitoring of active borehole processes such as well stimulation, and measuring of microseismic signals. The distributed fiber sensor, which measures strain (or strain-rate), is an active sensor with highest sensitivity parallel to the fiber and subject to varying types of noise, both external and internal. We take a systems approach and include the response of the electronics, fiber/cable, and subsurface to improve interpretation of the signals. This aids in understanding noise sources, assessing error bounds on amplitudes, and developing appropriate algorithms for improving the image. Ultimately, a robust understanding will allow identification of areas for future improvement and possible optimization in fiber and cable design. The subsurface signals are simulated in two ways: 1) a massively parallel multi-physics code that is capable of modeling hydraulic stimulation of heterogeneous reservoir with a pre-existing discrete fracture network, and 2) a parallelized 3D finite difference code for high-frequency seismic signals. Geometry and parameters for the simulations are derived from fiber deployments, including the Marcellus Shale Energy and Environment Laboratory (MSEEL) project in West Virginia. The combination mimics both the low-frequency strain signals generated during the fracture process and high-frequency signals from microseismic and perforation shots. Results are compared with available fiber data and demonstrate that quantitative interpretation of the fiber data provides valuable constraints on the fracture geometry and microseismic activity. These constraints appear difficult, if not impossible, to obtain otherwise.
Linker, Kevin L.; Brusseau, Charles A.
2002-01-01
A portal apparatus for screening persons or objects for the presence of trace amounts of target substances such as explosives, narcotics, radioactive materials, and certain chemical materials. The portal apparatus can have a one-sided exhaust for an exhaust stream, an interior wall configuration with a concave shape across a horizontal cross-section for each of two facing sides to result in improved airflow and reduced washout relative to a configuration with substantially flat parallel sides; air curtains to reduce washout; ionizing sprays to collect particles bound by static forces, as well as gas jet nozzles to dislodge particles bound by adhesion to the screened person or object. The portal apparatus can be included in a detection system with a preconcentrator and a detector.
Self-assembly of skyrmion-dressed chiral nematic colloids with tangential anchoring.
Pandey, M B; Porenta, T; Brewer, J; Burkart, A; Copar, S; Zumer, S; Smalyukh, Ivan I
2014-06-01
We describe dipolar nematic colloids comprising mutually bound solid microspheres, three-dimensional skyrmions, and point defects in a molecular alignment field of chiral nematic liquid crystals. Nonlinear optical imaging and numerical modeling based on minimization of Landau-de Gennes free energy reveal that the particle-induced skyrmions resemble torons and hopfions, while matching surface boundary conditions at the interfaces of liquid crystal and colloidal spheres. Laser tweezers and videomicroscopy reveal that the skyrmion-colloidal hybrids exhibit purely repulsive elastic pair interactions in the case of parallel dipoles and an unexpected reversal of interaction forces from repulsive to attractive as the center-to-center distance decreases for antiparallel dipoles. The ensuing elastic self-assembly gives rise to colloidal chains of antiparallel dipoles with particles entangled by skyrmions.
On the progressive enrichment of the oxygen isotopic composition of water along a leaf.
Farquhar, G. D.; Gan, K. S.
2003-06-01
A model has been derived for the enrichment of heavy isotopes of water in leaves, including progressive enrichment along the leaf. In the model, lighter water is preferentially transpired leaving heavier water to diffuse back into the xylem and be carried further along the leaf. For this pattern to be pronounced, the ratio of advection to diffusion (Péclet number) has to be large in the longitudinal direction, and small in the radial direction. The progressive enrichment along the xylem is less than that occurring at the sites of evaporation in the mesophyll, depending on the isolation afforded by the radial Péclet number. There is an upper bound on enrichment, and effects of ground tissue associated with major veins are included. When transpiration rate is spatially nonuniform, averaging of enrichment occurs more naturally with transpiration weighting than with area-based weighting. This gives zero average enrichment of transpired water, the modified Craig-Gordon equation for average enrichment at the sites of evaporation and the Farquhar and Lloyd (In Stable Isotopes and Plant Carbon-Water Relations, pp. 47-70. Academic Press, New York, USA, 1993) prediction for mesophyll water. Earlier results on the isotopic composition of evolved oxygen and of retro-diffused carbon dioxide are preserved if these processes vary in parallel with transpiration rate. Parallel variation should be indicated approximately by uniform carbon isotope discrimination across the leaf.
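A hedged sketch of the quantities the model builds on: a Craig-Gordon-type enrichment at the evaporating sites and the Péclet-damped average lamina enrichment, Delta_L = Delta_e (1 - exp(-p)) / p, following the Farquhar and Lloyd (1993) treatment cited above. All numerical values below (fractionation factors, vapour enrichment, humidity, transpiration rate, mixing length) are illustrative assumptions, not results from the paper.

```python
import numpy as np

# Illustrative parameter values (expressed as fractions, i.e. permil / 1000).
eps_plus = 9.8e-3       # equilibrium liquid-vapour fractionation near 25 C
eps_k    = 28.0e-3      # kinetic fractionation for diffusion through stomata/boundary layer
Delta_v  = -12.0e-3     # enrichment of ambient vapour relative to source water
ea_ei    = 0.7          # ratio of ambient to intercellular vapour pressure

# Craig-Gordon-type enrichment at the evaporating sites.
Delta_e = eps_plus + eps_k + (Delta_v - eps_k) * ea_ei

# Peclet number p = E*L/(C*D): advection of unenriched xylem water vs back-diffusion.
E = 1.0e-3              # transpiration rate, mol m^-2 s^-1
L = 0.03                # effective mixing length, m
C = 5.56e4              # molar density of water, mol m^-3
D = 2.66e-9             # diffusivity of H2(18)O in liquid water, m^2 s^-1
p = E * L / (C * D)

Delta_lamina = Delta_e * (1.0 - np.exp(-p)) / p    # Peclet-damped average lamina enrichment
print(f"Delta_e = {Delta_e*1e3:.1f} permil, Peclet = {p:.2f}, "
      f"lamina average = {Delta_lamina*1e3:.1f} permil")
```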
Multiprocessing the Sieve of Eratosthenes
NASA Technical Reports Server (NTRS)
Bokhari, S.
1986-01-01
The Sieve of Eratosthenes for finding prime numbers in recent years has seen much use as a benchmark algorithm for serial computers while its intrinsically parallel nature has gone largely unnoticed. The implementation of a parallel version of this algorithm for a real parallel computer, the Flex/32, is described and its performance discussed. It is shown that the algorithm is sensitive to several fundamental performance parameters of parallel machines, such as spawning time, signaling time, memory access, and overhead of process switching. Because of the nature of the algorithm, it is impossible to get any speedup beyond 4 or 5 processors unless some form of dynamic load balancing is employed. We describe the performance of our algorithm with and without load balancing and compare it with theoretical lower bounds and simulated results. It is straightforward to understand this algorithm and to check the final results. However, its efficient implementation on a real parallel machine requires thoughtful design, especially if dynamic load balancing is desired. The fundamental operations required by the algorithm are very simple: this means that the slightest overhead appears prominently in performance data. The Sieve thus serves not only as a very severe test of the capabilities of a parallel processor but is also an interesting challenge for the programmer.
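A short Python sketch of the algorithm itself, in serial and in a segmented form; each segment needs only the primes up to sqrt(limit), so segments are independent units of work that could be handed to separate processors. The segment size is arbitrary, and the actual spawning, signaling, and dynamic load balancing discussed above are not shown.

```python
import math

def simple_sieve(limit):
    """Serial sieve up to limit (inclusive); returns the list of primes."""
    flags = bytearray([1]) * (limit + 1)
    flags[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(limit) + 1):
        if flags[p]:
            flags[p*p::p] = b"\x00" * len(flags[p*p::p])
    return [i for i, f in enumerate(flags) if f]

def segmented_sieve(limit, segment=10_000):
    """Segmented sieve: each segment depends only on the primes up to sqrt(limit),
    so the segments form independent chunks of parallelizable work."""
    base = simple_sieve(math.isqrt(limit))
    primes = list(base)
    for lo in range(math.isqrt(limit) + 1, limit + 1, segment):
        hi = min(lo + segment - 1, limit)
        flags = bytearray([1]) * (hi - lo + 1)
        for p in base:
            start = max(p*p, ((lo + p - 1) // p) * p)   # first multiple of p in [lo, hi]
            flags[start - lo::p] = b"\x00" * len(flags[start - lo::p])
        primes.extend(lo + i for i, f in enumerate(flags) if f)
    return primes

print(len(segmented_sieve(100_000)))   # 9592 primes up to 100,000
```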
DOE Office of Scientific and Technical Information (OSTI.GOV)
Penel-Nottaris, Emilie
2004-07-01
The Jefferson Lab Hall A experiment has measured the 3He(e,e'p) reaction cross sections. The separation of the longitudinal and transverse response functions for the two-body breakup reaction in parallel kinematics allows one to study the bound proton electromagnetic properties in the 3He nucleus and the nuclear mechanisms involved beyond the impulse approximation. Preliminary cross sections show some disagreement with theoretical predictions for the forward-angle kinematics around 0 MeV/c missing momentum, and sensitivity to final state interactions and 3He wave functions for missing momenta of 300 MeV/c.
Ji, Jim; Wright, Steven
2005-01-01
Parallel imaging using multiple phased-array coils and receiver channels has become an effective approach to high-speed magnetic resonance imaging (MRI). To obtain high spatiotemporal resolution, the k-space is subsampled and later interpolated using multiple channel data. Higher subsampling factors result in faster image acquisition. However, the subsampling factors are upper-bounded by the number of parallel channels. Phase constraints have been previously proposed to overcome this limitation with some success. In this paper, we demonstrate that in certain applications it is possible to obtain acceleration factors potentially up to twice the channel numbers by using a real image constraint. Data acquisition and processing methods to manipulate and estimate of the image phase information are presented for improving image reconstruction. In-vivo brain MRI experimental results show that accelerations up to 6 are feasible with 4-channel data.
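A toy NumPy demonstration of why a real-valued image constraint can roughly double the usable acceleration: for a real image the k-space is conjugate-symmetric, so just over half of the samples determine the rest. The example is single-coil, noiseless, and skips the phase estimation and multi-channel reconstruction the paper actually performs.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
img = rng.random((N, N))                        # a strictly real-valued "image"
k = np.fft.fft2(img)

# Acquire only rows 0..N/2 (just over half of k-space).
k_sub = np.zeros_like(k)
k_sub[: N // 2 + 1, :] = k[: N // 2 + 1, :]

# For a real image, F(-kx, -ky) = conj(F(kx, ky)); synthesize the missing rows.
k_full = k_sub.copy()
for i in range(N // 2 + 1, N):
    k_full[i, :] = np.conj(k_sub[(N - i) % N, (-np.arange(N)) % N])

recon = np.fft.ifft2(k_full).real
print("max reconstruction error:", np.abs(recon - img).max())   # ~1e-13: nothing was lost
```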
Wada, Yuji; Kundu, Tribikram; Nakamura, Kentaro
2014-08-01
The distributed point source method (DPSM) is extended to model wave propagation in viscous fluids. Appropriate estimation of attenuation and boundary-layer formation due to fluid viscosity is necessary for ultrasonic devices used for acoustic streaming or ultrasonic levitation. The equations for DPSM modeling in viscous fluids are derived in this paper by decomposing the linearized viscous fluid equations into two components: dilatational and rotational. By considering complex P- and S-wave numbers, the acoustic fields in viscous fluids can be calculated following calculation steps similar to those used for wave propagation modeling in solids. From the calculations reported, the precision of DPSM is found to be comparable to that of the finite element method (FEM) for a fundamental ultrasonic field problem. The particle velocity parallel to the two bounding surfaces of the viscous fluid layer between two rigid plates (one in motion and one stationary) is calculated. The finite element results agree well with the DPSM results, which were generated faster than the transient FEM results.
NASA Astrophysics Data System (ADS)
Barcos, L.; Díaz-Azpiroz, M.; Balanyá, J. C.; Expósito, I.; Jiménez-Bonilla, A.; Faccenna, C.
2016-07-01
The combination of analytical and analogue models gives new opportunities to better understand the kinematic parameters controlling the evolution of transpression zones. In this work, we carried out a set of analogue models using the kinematic parameters of transpressional deformation obtained by applying a general triclinic transpression analytical model to a tabular-shaped shear zone in the external Betic Chain (Torcal de Antequera massif). According to the results of the analytical model, we used two oblique convergence angles to reproduce the main structural and kinematic features of the structural domains observed within the Torcal de Antequera massif (α = 15° for the outer domains and α = 30° for the inner domain). Two parallel inclined backstops (one fixed and the other mobile) reproduce the geometry of the shear zone walls of the natural case. Additionally, we applied the digital particle image velocimetry (PIV) method to calculate the velocity field of the incremental deformation. Our results suggest that the spatial distribution of the main structures observed in the Torcal de Antequera massif reflects different modes of strain partitioning and strain localization between two domain types, which are related to the variation in the oblique convergence angle and the presence of steep planar velocity and rheological discontinuities (the shear zone walls in the natural case). In the 15° model, strain partitioning is simple and strain localization is high: a single narrow shear zone develops close to and parallel to the fixed backstop, bounded by strike-slip faults and internally deformed by R and P shears. In the 30° model, strain partitioning is strong, generating regularly spaced oblique-to-the-backstop thrusts and strike-slip faults. At the final stages of the 30° experiment, deformation affects the entire model box. Our results show that the application of analytical modelling to natural transpressive zones related to upper crustal deformation helps constrain the geometrical parameters of analogue models.
Physical Uncertainty Bounds (PUB)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaughan, Diane Elizabeth; Preston, Dean L.
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
Metal atom dynamics in superbulky metallocenes: a comparison of (Cp(BIG))2Sn and (Cp(BIG))2Eu.
Harder, Sjoerd; Naglav, Dominik; Schwerdtfeger, Peter; Nowik, Israel; Herber, Rolfe H
2014-02-17
Cp(BIG)2Sn (Cp(BIG) = (4-n-Bu-C6H4)5cyclopentadienyl), prepared by reaction of 2 equiv of Cp(BIG)Na with SnCl2, crystallized isomorphous to other known metallocenes with this ligand (Ca, Sr, Ba, Sm, Eu, Yb). Similarly, it shows perfect linearity, C-H···C(π) bonding between the Cp(BIG) rings and out-of-plane bending of the aryl substituents toward the metal. Whereas all other Cp(BIG)2M complexes show large disorder in the metal position, the Sn atom in Cp(BIG)2Sn is perfectly ordered. In contrast, (119)Sn and (151)Eu Mößbauer investigations on the corresponding Cp(BIG)2M metallocenes show that Sn(II) is more dynamic and loosely bound than Eu(II). The large displacement factors in the group 2 and especially in the lanthanide(II) metallocenes Cp(BIG)2M can be explained by static metal disorder in a plane parallel to the Cp(BIG) rings. Despite parallel Cp(BIG) rings, these metallocenes have a nonlinear Cpcenter-M-Cpcenter geometry. This is explained by an ionic model in which metal atoms are polarized by the negatively charged Cp rings. The extent of nonlinearity is in line with trends found in M(2+) ion polarizabilities. The range of known calculated dipole polarizabilities at the Douglas-Kroll CCSD(T) level was extended with values (atomic units) for Sn(2+) 15.35, Sm(2+)(4f(6) (7)F) 9.82, Eu(2+)(4f(7) (8)S) 8.99, and Yb(2+)(4f(14) (1)S) 6.55. This polarizability model cannot be applied to predominantly covalently bound Cp(BIG)2Sn, which shows a perfectly ordered structure. The bent geometry of Cp*2Sn should therefore not be explained by metal polarizability but is due to van der Waals Cp*···Cp* attraction and (to some extent) to a small p-character component in the Sn lone pair.
NASA Astrophysics Data System (ADS)
Parker, S. D.
2016-12-01
The kinematic evolution of the eastern Snake River Plain (ESRP) remains highly contested. A lack of strike-slip faults bounding the ESRP serves as a primary assumption in many leading kinematic models. Recent GPS geodesy has highlighted possible shear zones along the ESRP, yet regional strike-slip faults remain unidentified. Oblique movement within dense arrays of high-angle conjugate normal faults, paralleling the ESRP, occurs within a discrete zone of 50 km on both margins of the ESRP. These features have long been attributed to progressive crustal flexure and subsidence within the ESRP, but are capable of accommodating the observed strain without necessitating large-scale strike-slip faults. Deformation features within an extensive Neogene conglomerate provide field evidence for dextral shear in a transtensional system along the northern margin of the ESRP. Pressure-solution pits and cobble striations provide evidence for a horizontal ENE/WSW maximum principal stress orientation, consistent with the hypothesis of a dextral Centennial shear zone. Fold hinges, erosional surfaces and stratigraphic datums plunging perpendicular into the ESRP have been attributed to crustal flexure and subsidence of the ESRP. Similar Quaternary folds plunge obliquely into the ESRP along its margins where diminishing offset along active normal faults trends into linear volcanic features. In all cases, orientations and distributions of plunging fold structures display a correlation to the terminus of active Basin and Range faults and linear volcanic features of the ESRP. An alternative kinematic model, rooted in kinematic disparities between Basin and Range faults and paralleling volcanic features, may explain the observed downwarping as well as provide a mechanism for the observed shear along the margins of the ESRP. By integrating field observations with seismic, geodetic and geomorphic observations, this study attempts to decipher the signatures of crustal flexure and shear along the margins of the ESRP. Decoupling the influence of these distinct processes on deformation features bounding the ESRP will aid in our understanding of the kinematic evolution of this highly complex region.
Robust Bounded Influence Tests in Linear Models
Markatou, Marianthi; Hettmansperger, Thomas P.
1988-11-01
STABILITY OF SMALL SELF-INTERSTITIAL CLUSTERS IN TUNGSTEN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setyawan, Wahyu; Nandipati, Giridhar; Kurtz, Richard J.
2015-12-31
Density functional theory was employed to explore the stability of interstitial clusters in W up to size seven. For each cluster size, the most stable configuration consists of parallel dumbbells. For clusters larger than size three, parallel dumbbells prefer to form in a multilayer fashion, instead of a planar structure. For size-7 clusters, the most stable configuration is a complete octahedron. The binding energy of a [111] dumbbell to the most stable cluster increases with cluster size, namely 2.49, 3.68, 4.76, 4.82, 5.47, and 6.85 eV for clusters of size 1, 2, 3, 4, 5, and 6, respectively. For a size-2 cluster, collinear dumbbells are still repulsive at the maximum allowable distance of 13.8 Å (the fifth neighbor along [111]). On the other hand, parallel dumbbells are strongly bound together. Two parallel dumbbells in which the axis-to-axis distance is within a cylindrical radius of 5.2 Å still exhibit a considerable binding of 0.28 eV. The most stable cluster in each size will be used to explore interactions with transmutation products.
Free-standing leaping experiments with a power-autonomous elastic-spined quadruped
NASA Astrophysics Data System (ADS)
Pusey, Jason L.; Duperret, Jeffrey M.; Haynes, G. Clark; Knopf, Ryan; Koditschek, Daniel E.
2013-05-01
We document initial experiments with Canid, a freestanding, power-autonomous quadrupedal robot equipped with a parallel actuated elastic spine. Research into robotic bounding and galloping platforms holds scientific and engineering interest because it can both probe biological hypotheses regarding bounding and galloping mammals and also provide the engineering community with a new class of agile, efficient and rapidly-locomoting legged robots. We detail the design features of Canid that promote our goals of agile operation in a relatively cheap, conventionally prototyped, commercial off-the-shelf actuated platform. We introduce new measurement methodology aimed at capturing our robot's "body energy" during real time operation as a means of quantifying its potential for agile behavior. Finally, we present joint motor, inertial and motion capture data taken from Canid's initial leaps into highly energetic regimes exhibiting large accelerations that illustrate the use of this measure and suggest its future potential as a platform for developing efficient, stable, hence useful bounding gaits.
Kolafa, J; Perram, J W; Bywater, R P
2000-01-01
We have studied protein-ligand interactions by molecular dynamics simulations using software designed to exploit parallel computing architectures. The trajectories were analyzed to extract the essential motions and to estimate the individual contributions of fragments of the ligand to overall binding enthalpy. Two forms of the bound ligand are compared, one with the termini blocked by covalent derivatization, and one in the underivatized, zwitterionic form. The ends of the peptide tend to bind more loosely in the capped form. We can observe significant motions in the bound ligand and distinguish between motions of the peptide backbone and of the side chains. This could be useful in designing ligands that fit optimally to the binding protein. We show that it is possible to determine the different contributions of each residue in a peptide to the enthalpy of binding. Proline is a major net contributor to binding enthalpy, in keeping with the known propensity for this family of proteins to bind proline-rich peptides. PMID:10919999
Effects of Zinc on Particulate Methane Monooxygenase Activity and Structure*
Sirajuddin, Sarah; Barupala, Dulmini; Helling, Stefan; Marcus, Katrin; Stemmler, Timothy L.; Rosenzweig, Amy C.
2014-01-01
Particulate methane monooxygenase (pMMO) is a membrane-bound metalloenzyme that oxidizes methane to methanol in methanotrophic bacteria. Zinc is a known inhibitor of pMMO, but the details of zinc binding and the mechanism of inhibition are not understood. Metal binding and activity assays on membrane-bound pMMO from Methylococcus capsulatus (Bath) reveal that zinc inhibits pMMO at two sites that are distinct from the copper active site. The 2.6 Å resolution crystal structure of Methylocystis species strain Rockwell pMMO reveals two previously undetected bound lipids, and metal soaking experiments identify likely locations for the two zinc inhibition sites. The first is the crystallographic zinc site in the pmoC subunit, and zinc binding here leads to the ordering of 10 previously unobserved residues. A second zinc site is present on the cytoplasmic side of the pmoC subunit. Parallels between these results and zinc inhibition studies of several respiratory complexes suggest that zinc might inhibit proton transfer in pMMO. PMID:24942740
Instabilities and pattern formation on the pore scale
NASA Astrophysics Data System (ADS)
Juel, Anne
What links a baby's first breath to adhesive debonding, enhanced oil recovery, or even drop-on-demand devices? All these processes involve moving or expanding bubbles displacing fluid in a confined space, bounded by either rigid or elastic walls. In this talk, we show how spatial confinement may either induce or suppress interfacial instabilities and pattern formation in such flows. We demonstrate that a simple change in the bounding geometry can radically alter the behaviour of a fluid-displacing air finger both in rigid and elastic vessels. A rich array of propagation modes, including steady and oscillatory fingers, is uncovered when air displaces oil from axially uniform tubes that have local variations in flow resistance within their cross-sections. Moreover, we show that the experimentally observed states can all be captured by a two-dimensional depth-averaged model for bubble propagation through wide channels. Viscous fingering in Hele-Shaw cells is a classical and widely studied fluid-mechanical instability: when air is injected into the narrow, liquid-filled gap between parallel rigid plates, the axisymmetrically expanding air-liquid interface tends to be unstable to non-axisymmetric disturbances. We show how the introduction of wall elasticity (via the replacement of the upper bounding plate by an elastic membrane) can weaken or even suppress the fingering instability by allowing changes in cell confinement through the flow-induced deflection of the boundary. The presence of a deformable boundary also makes the system prone to additional solid-mechanical instabilities, and these wrinkling instabilities can in turn enhance viscous fingering. The financial support of EPSRC and the Leverhulme Trust is gratefully acknowledged.
Multitasking TORT under UNICOS: Parallel performance models and measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, A.; Azmy, Y.Y.
1999-09-27
The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The predictions of the parallel performance models were compared to measurements from applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.
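The abstract does not give the form of the overhead model; as a generic illustration in our own notation (not necessarily the paper's), such models typically write the parallel run time as the evenly divided serial work plus an overhead term, from which the speedup follows:

    T_P = \frac{T_1}{P} + T_{\mathrm{ovh}}(P),
    \qquad
    S(P) = \frac{T_1}{T_P} = \frac{P}{1 + P\,T_{\mathrm{ovh}}(P)/T_1}.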
NASA Astrophysics Data System (ADS)
Tang, Tie-Qiao; Luo, Xiao-Feng; Liu, Kai
2016-09-01
The driver's bounded rationality has significant influences on micro driving behavior, and researchers have proposed some traffic flow models incorporating the driver's bounded rationality. However, little effort has been made to explore the effects of the driver's bounded rationality on the trip cost. In this paper, we use our recently proposed car-following model to study the effects of the driver's bounded rationality on his running cost and the system's total cost under three traffic running costs. The numerical results show that considering the driver's bounded rationality increases each of his running costs and the system's total cost under the three traffic running costs.
Limits on Log Cross-Product Ratios for Item Response Models. Research Report. ETS RR-06-10
ERIC Educational Resources Information Center
Haberman, Shelby J.; Holland, Paul W.; Sinharay, Sandip
2006-01-01
Bounds are established for log cross-product ratios (log odds ratios) involving pairs of items for item response models. First, expressions for bounds on log cross-product ratios are provided for unidimensional item response models in general. Then, explicit bounds are obtained for the Rasch model and the two-parameter logistic (2PL) model.…
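For reference, the log cross-product ratio being bounded is, for a pair of dichotomous item responses X_j and X_k (standard definition, our notation):

    \lambda_{jk} = \log
    \frac{P(X_j = 1, X_k = 1)\,P(X_j = 0, X_k = 0)}
         {P(X_j = 1, X_k = 0)\,P(X_j = 0, X_k = 1)}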
NASA Astrophysics Data System (ADS)
Pankow, C.; Brady, P.; Ochsner, E.; O'Shaughnessy, R.
2015-07-01
We introduce a highly parallelizable architecture for estimating parameters of compact binary coalescence using gravitational-wave data and waveform models. Using a spherical harmonic mode decomposition, the waveform is expressed as a sum over modes that depend on the intrinsic parameters (e.g., masses) with coefficients that depend on the observer-dependent extrinsic parameters (e.g., distance, sky position). The data is then prefiltered against those modes, at fixed intrinsic parameters, enabling efficient evaluation of the likelihood for generic source positions and orientations, independent of waveform length or generation time. We efficiently parallelize our intrinsic space calculation by integrating over all extrinsic parameters using a Monte Carlo integration strategy. Since the waveform generation and prefiltering happen only once, the cost of integration dominates the procedure. Also, we operate hierarchically, using information from existing gravitational-wave searches to identify the regions of parameter space to emphasize in our sampling. As proof of concept and verification of the result, we have implemented this algorithm using standard time-domain waveforms, processing each event in less than one hour on recent computing hardware. For most events we evaluate the marginalized likelihood (evidence) with statistical errors of ≲5%, and even smaller in many cases. With a bounded runtime independent of the waveform model starting frequency, a nearly unchanged strategy could estimate neutron star (NS)-NS parameters in the 2018 advanced LIGO era. Our algorithm is usable with any noise curve and existing time-domain model at any mass, including some waveforms which are computationally costly to evolve.
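The abstract describes the prefiltering idea in words; the following toy Python sketch (our own construction, with random real-valued arrays standing in for the complex waveform modes and random weights standing in for the extrinsic-parameter coefficients) shows why the likelihood becomes cheap once the mode inner products are precomputed:

    import numpy as np

    rng = np.random.default_rng(0)
    nsamp = 4096
    modes = ["(2,2)", "(2,1)", "(3,3)"]                    # hypothetical mode labels
    h = {m: rng.standard_normal(nsamp) for m in modes}     # stand-ins for waveform modes
    d = 0.7 * h["(2,2)"] + 0.1 * rng.standard_normal(nsamp)  # fake "data"

    # One-off prefiltering: inner products depend only on the intrinsic parameters
    dh = {m: d @ h[m] for m in modes}
    hh = {(a, b): h[a] @ h[b] for a in modes for b in modes}

    def log_likelihood(coeffs):
        # coeffs[m]: mode weight encoding distance, sky position and orientation
        s1 = sum(coeffs[m] * dh[m] for m in modes)
        s2 = sum(coeffs[a] * coeffs[b] * hh[a, b] for a in modes for b in modes)
        return s1 - 0.5 * s2

    # Monte Carlo over "extrinsic" parameters: here simply random positive weights
    weights = rng.uniform(0.0, 1.0, size=(20000, len(modes)))
    logL = np.array([log_likelihood(dict(zip(modes, w))) for w in weights])
    print(np.exp(logL - logL.max()).mean())   # marginalized likelihood, up to a constant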
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwon, Young Do; Finzi, Andrés; Wu, Xueling
2013-03-04
The HIV-1 envelope (Env) spike (gp120₃/gp41₃) undergoes considerable structural rearrangements to mediate virus entry into cells and to evade the host immune response. Engagement of CD4, the primary human receptor, fixes a particular conformation and primes Env for entry. The CD4-bound state, however, is prone to spontaneous inactivation and susceptible to antibody neutralization. How does unliganded HIV-1 maintain CD4-binding capacity and regulate transitions to the CD4-bound state? To define this mechanistically, we determined crystal structures of unliganded core gp120 from HIV-1 clades B, C, and E. Notably, all of these unliganded HIV-1 structures resembled the CD4-bound state. Conformational fixation with ligand selection and thermodynamic analysis of full-length and core gp120 interactions revealed that the tendency of HIV-1 gp120 to adopt the CD4-bound conformation was restrained by the V1/V2- and V3-variable loops. In parallel, we determined the structure of core gp120 in complex with the small molecule, NBD-556, which specifically recognizes the CD4-bound conformation of gp120. Neutralization by NBD-556 indicated that Env spikes on primary isolates rarely assume the CD4-bound conformation spontaneously, although they could do so when quaternary restraints were loosened. Together, the results suggest that the CD4-bound conformation represents a 'ground state' for the gp120 core, with variable loop and quaternary interactions restraining unliganded gp120 from 'snapping' into this conformation. A mechanism of control involving deformations in unliganded structure from a functionally critical state (e.g., the CD4-bound state) provides advantages in terms of HIV-1 Env structural diversity and resistance to antibodies and inhibitors, while maintaining elements essential for entry.
The Mentawai forearc sliver off Sumatra: A model for a strike-slip duplex at a regional scale
NASA Astrophysics Data System (ADS)
Berglar, Kai; Gaedicke, Christoph; Ladage, Stefan; Thöle, Hauke
2017-07-01
At the Sumatran oblique convergent margin the Mentawai Fault and Sumatran Fault zones accommodate most of the trench parallel component of strain. These faults bound the Mentawai forearc sliver that extends from the Sunda Strait to the Nicobar Islands. Based on multi-channel reflection seismic data, swath bathymetry and high resolution sub-bottom profiling we identified a set of wrench faults obliquely connecting the two major fault zones. These wrench faults separate at least four horses of a regional strike-slip duplex forming the forearc sliver. Each horse comprises an individual basin of the forearc with differing subsidence and sedimentary history. Duplex formation started in Mid/Late Miocene southwest of the Sunda Strait. Initiation of new horses propagated northwards along the Sumatran margin over 2000 km until Early Pliocene. These results directly link strike-slip tectonics to forearc evolution and may serve as a model for basin evolution in other oblique subduction settings.
Attention and choice: a review on eye movements in decision making.
Orquin, Jacob L; Mueller Loose, Simone
2013-09-01
This paper reviews studies on eye movements in decision making, and compares their observations to theoretical predictions concerning the role of attention in decision making. Four decision theories are examined: rational models, bounded rationality, evidence accumulation, and parallel constraint satisfaction models. Although most theories were confirmed with regard to certain predictions, none of the theories adequately accounted for the role of attention during decision making. Several observations emerged concerning the drivers and downstream effects of attention on choice, suggesting that attention processes play an active role in constructing decisions. So far, decision theories have largely ignored the constructive role of attention by assuming that it is entirely determined by heuristics, or that it consists of stochastic information sampling. The empirical observations reveal that these assumptions are implausible, and that more accurate assumptions could have been made based on prior attention and eye movement research. Future decision making research would benefit from greater integration with attention research. Copyright © 2013 Elsevier B.V. All rights reserved.
Evaluation of concurrent priority queue algorithms. Technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Q.
1991-02-01
The priority queue is a fundamental data structure that is used in a large variety of parallel algorithms, such as multiprocessor scheduling and parallel best-first search of state-space graphs. This thesis addresses the design and experimental evaluation of two novel concurrent priority queues: a parallel Fibonacci heap and a concurrent priority pool, and compares them with the concurrent binary heap. The parallel Fibonacci heap is based on the sequential Fibonacci heap, which is theoretically the most efficient data structure for sequential priority queues. This scheme not only preserves the efficient operation time bounds of its sequential counterpart, but also has very low contention by distributing locks over the entire data structure. The experimental results show its linearly scalable throughput and speedup up to as many processors as tested (currently 18). A concurrent access scheme for a doubly linked list is described as part of the implementation of the parallel Fibonacci heap. The concurrent priority pool is based on the concurrent B-tree and the concurrent pool. The concurrent priority pool has the highest throughput among the priority queues studied. Like the parallel Fibonacci heap, the concurrent priority pool scales linearly up to as many processors as tested. The priority queues are evaluated in terms of throughput and speedup. Some applications of concurrent priority queues such as the vertex cover problem and the single source shortest path problem are tested.
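As a point of comparison for the structures evaluated, the baseline interface can be sketched as a binary heap protected by a single global lock (our own illustration; the parallel Fibonacci heap and concurrent priority pool studied in the thesis instead distribute locks over the structure to reduce contention):

    import heapq
    import threading

    class CoarseLockPriorityQueue:
        """Baseline concurrent priority queue: one global lock around a binary heap."""

        def __init__(self):
            self._heap = []
            self._lock = threading.Lock()

        def insert(self, priority, item):
            with self._lock:
                heapq.heappush(self._heap, (priority, item))

        def delete_min(self):
            with self._lock:
                return heapq.heappop(self._heap) if self._heap else None

Every operation serializes on the single lock, which is exactly the contention that finer-grained lock placement is meant to avoid.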
Turbulence intensities in large-eddy simulation of wall-bounded flows
NASA Astrophysics Data System (ADS)
Bae, H. J.; Lozano-Durán, A.; Bose, S. T.; Moin, P.
2018-01-01
A persistent problem in wall-bounded large-eddy simulations (LES) with Dirichlet no-slip boundary conditions is that the near-wall streamwise velocity fluctuations are overpredicted, while those in the wall-normal and spanwise directions are underpredicted. The problem may become particularly pronounced when the near-wall region is underresolved. The prediction of the fluctuations is known to improve for wall-modeled LES, where the no-slip boundary condition at the wall is typically replaced by Neumann and no-transpiration conditions for the wall-parallel and wall-normal velocities, respectively. However, the turbulence intensity peaks are sensitive to the grid resolution and the prediction may degrade when the grid is refined. In the present study, a physical explanation of this phenomenon is offered in terms of the behavior of the near-wall streaks. We also show that further improvements are achieved by introducing a Robin (slip) boundary condition with transpiration instead of the Neumann condition. By using a slip condition, the inner energy production peak is damped, and the blocking effect of the wall is relaxed such that the splatting of eddies at the wall is mitigated. As a consequence, the slip boundary condition provides an accurate and consistent prediction of the turbulence intensities regardless of the near-wall resolution.
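One common form of such a Robin (slip) wall condition, written here in generic notation that may differ from the paper's, ties the wall velocity to its wall-normal gradient through a slip length ℓ, with an analogous transpiration condition replacing the no-transpiration constraint on the wall-normal component:

    u_i\big|_{\mathrm{wall}} = \ell\,\frac{\partial u_i}{\partial n}\bigg|_{\mathrm{wall}},
    \qquad i = 1, 2, 3

Setting ℓ = 0 recovers the no-slip/no-transpiration limit, while ℓ → ∞ recovers a Neumann (free-slip) condition, so the slip length interpolates between the two behaviors discussed in the abstract.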
Replica Exchange Simulations of the Thermodynamics of Aβ Fibril Growth
Takeda, Takako; Klimov, Dmitri K.
2009-01-01
Replica exchange molecular dynamics and an all-atom implicit solvent model are used to probe the thermodynamics of deposition of Alzheimer's Aβ monomers on preformed amyloid fibrils. Consistent with the experiments, two deposition stages have been identified. The docking stage occurs over a wide temperature range, starting with the formation of the first peptide-fibril interactions at 500 K. Docking is completed when a peptide fully adsorbs on the fibril edge at the temperature of 380 K. The docking transition appears to be continuous, and occurs without free energy barriers or intermediates. During docking, incoming Aβ monomer adopts a disordered structure on the fibril edge. The locking stage occurs at the temperature of ≈360 K and is characterized by the rugged free energy landscape. Locking takes place when incoming Aβ peptide forms a parallel β-sheet structure on the fibril edge. Because the β-sheets formed by locked Aβ peptides are typically off-registry, the structure of the locked phase differs from the structure of the fibril interior. The study also reports that binding affinities of two distinct fibril edges with respect to incoming Aβ peptides are different. The peptides bound to the concave edge have significantly lower free energy compared to those bound on the convex edge. Comparison with the available experimental data is discussed. PMID:19167295
Entropy production in a photovoltaic cell
NASA Astrophysics Data System (ADS)
Ansari, Mohammad H.
2017-05-01
We evaluate entropy production in a photovoltaic cell that is modeled by four electronic levels resonantly coupled to thermally populated field modes at different temperatures. We use a formalism recently proposed, the so-called multiple parallel worlds, to consistently address the nonlinearity of entropy in terms of density matrix. Our result shows that entropy production is the difference between two flows: a semiclassical flow that linearly depends on occupational probabilities, and another flow that depends nonlinearly on quantum coherence and has no semiclassical analog. We show that entropy production in the cells depends on environmentally induced decoherence time and energy detuning. We characterize regimes where reversal flow of information takes place from a cold to hot bath. Interestingly, we identify a lower bound on entropy production, which sets limitations on the statistics of dissipated heat in the cells.
Grover Search and the No-Signaling Principle
NASA Astrophysics Data System (ADS)
Bao, Ning; Bouland, Adam; Jordan, Stephen P.
2016-09-01
Two of the key properties of quantum physics are the no-signaling principle and the Grover search lower bound. That is, despite admitting stronger-than-classical correlations, quantum mechanics does not imply superluminal signaling, and despite a form of exponential parallelism, quantum mechanics does not imply polynomial-time brute force solution of NP-complete problems. Here, we investigate the degree to which these two properties are connected. We examine four classes of deviations from quantum mechanics, for which we draw inspiration from the literature on the black hole information paradox. We show that in these models, the physical resources required to send a superluminal signal scale polynomially with the resources needed to speed up Grover's algorithm. Hence the no-signaling principle is equivalent to the inability to solve NP-hard problems efficiently by brute force within the classes of theories analyzed.
Robust manipulation of light using topologically protected plasmonic modes.
Liu, Chenxu; Gurudev Dutt, M V; Pekker, David
2018-02-05
We propose using a topological plasmonic crystal structure composed of an array of nearly parallel nanowires with unequal spacing for manipulating light. In the paraxial approximation, the Helmholtz equation that describes the propagation of light along the nanowires maps onto the Schrödinger equation of the Su-Schrieffer-Heeger (SSH) model. Using a full three-dimensional finite difference time domain solution of the Maxwell equations, we verify the existence of topological defect modes, with sub-wavelength localization, bound to domain walls of the plasmonic crystal. We show that by manipulating domain walls we can construct spatial mode filters that couple bulk modes to topological defect modes, and topological beam-splitters that couple two topological defect modes. Finally, we show that the structures are tolerant to fabrication errors with an inverse length-scale smaller than the topological band gap.
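For context, the tight-binding SSH Hamiltonian onto which the paraxial Helmholtz equation is mapped has the standard two-site-per-cell form with alternating intra- and inter-cell hopping amplitudes v and w (standard textbook notation, not taken from the paper); the unequal nanowire spacings play the role of the alternating hoppings:

    H_{\mathrm{SSH}} = \sum_{n}\left( v\, c^{\dagger}_{A,n} c_{B,n}
        + w\, c^{\dagger}_{B,n} c_{A,n+1} + \mathrm{h.c.} \right)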
Effects of Climate on Co-evolution of Weathering Profiles and Hillscapes
NASA Astrophysics Data System (ADS)
Anderson, R. S.; Rajaram, H.; Anderson, S. P.
2017-12-01
Considerable debate revolves around the relative importance of rock type, tectonics, and climate in creating the architecture of the critical zone. It has recently been proposed that differences in the depths and patterns of weathering between landscapes in Colorado's Front Range and South Carolina's piedmont can be attributed to the state of stress in the rock imposed by the magnitude and orientation of the regional stresses with respect to the ridgelines (St. Claire et al., 2016). We argue for the importance of the climate, and in particular, in temperate regions, the amount of recharge. We employ numerical models of hillslope evolution between bounding erosional channels, in which the degree of rock weathering governs the rate of transformation of rock to soil. As the water table drapes between the stream channels, fresh rock is brought into the weathering zone at a rate governed by the rate of incision of the channels. We track the chemical weathering of rock, represented by alteration of feldspar to clays, which in turn requires calculation of the concentration of reactive species in the water along hydrologic flow paths. We present results from analytic solutions to the flow field in which travel times can be efficiently assessed. Below the water table, flow paths are hyperbolic, taking on considerable lateral components as they veer toward the bounding channels that serve as drains to the hillslope. We find that if water is far from equilibrium with respect to weatherable minerals at the water table, as occurs in wet, slowly-eroding landscapes, deep weathering can occur well below the water table to levels approximating the base of the bounding channels. In dry climates, on the other hand, the weathering zone is limited to a shallow surface-parallel layer. These models capture the essence of the observed differences in depth to fresh rock in both wet and dry climates without appeal to the state of stress in the rock.
Geometry of the southern San Andreas fault and its implications for seismic hazard
NASA Astrophysics Data System (ADS)
Langenheim, V. E.; Dorsey, R. J.; Fuis, G. S.; Cooke, M. L.; Fattaruso, L.; Barak, S.
2015-12-01
The southern San Andreas fault (SSAF) provides rich opportunities for studying the geometry and connectivity of fault stepovers and intersections, including recently recognized NE tilting of the Salton block between the SSAF and San Jacinto fault (SJF) that likely results from slight obliquity of relative plate motion to the strike of the SSAF. Fault geometry and predictions of whether the SSAF will rupture through the restraining bend in San Gorgonio Pass (SGP) are controversial, with significant implications for seismic hazard. The evolution of faulting in SGP has led to various models of strain accommodation, including clockwise rotation of fault-bounded blocks east of the restraining bend, and generation of faults that siphon strike slip away from the restraining bend onto the SJF (also parallel to the SSAF). Complex deformation is not restricted to the upper crust but extends to mid- and lower-crustal depths according to magnetic data and ambient-noise surface-wave tomography. Initiation of the SJF ~1.2 Ma led to formation of the relatively intact Salton block, and end of extension on the West Salton detachment fault on the west side of Coachella Valley. Geologic and geomorphic data show asymmetry of the southern Santa Rosa Mountains, with a steep fault-bounded SW flank produced by active uplift, and gentler topographic gradients on the NE flank with tilted, inactive late Pleistocene fans that are incised by modern upper fan channels. Gravity data indicate the basin floor beneath Coachella Valley is also asymmetric, with a gently NE-dipping basin floor bound by a steep SSAF; seismic-reflection data suggest that NE tilting took place during Quaternary time. 3D numerical modeling predicts gentle NE dips in the Salton block that result from the slight clockwise orientation of relative motion across a NE-dipping SSAF. A NE dip of the SSAF, supported by various geophysical datasets, would reduce shaking in Coachella Valley compared to a vertical fault.
Bounds on low scale gravity from RICE data and cosmogenic neutrino flux models
NASA Astrophysics Data System (ADS)
Hussain, Shahid; McKay, Douglas W.
2006-03-01
We explore limits on low scale gravity models set by results from the Radio Ice Cherenkov Experiment's (RICE) ongoing search for cosmic ray neutrinos in the cosmogenic, or GZK, energy range. The bound on M, the fundamental scale of gravity, depends upon cosmogenic flux model, black hole formation and decay treatments, inclusion of graviton mediated elastic neutrino processes, and the number of large extra dimensions, d. Assuming proton-based cosmogenic flux models that cover a broad range of flux possibilities, we find bounds in the interval 0.9 TeV
Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification
Fan, Jianqing; Feng, Yang; Jiang, Jiancheng; Tong, Xin
2015-01-01
We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing. PMID:27185970
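A minimal Python sketch of the FANS recipe, assuming kernel density estimates for the per-feature class-conditional densities and an l1-penalized logistic regression on the transformed features (the bandwidth, penalty and solver choices here are illustrative, not the authors' settings):

    import numpy as np
    from sklearn.neighbors import KernelDensity
    from sklearn.linear_model import LogisticRegression

    def fans_fit(X, y, bandwidth=0.5, C=1.0):
        # Estimate class-conditional marginal densities for each feature.
        kdes = []
        for j in range(X.shape[1]):
            k1 = KernelDensity(bandwidth=bandwidth).fit(X[y == 1, j:j + 1])
            k0 = KernelDensity(bandwidth=bandwidth).fit(X[y == 0, j:j + 1])
            kdes.append((k1, k0))
        Z = _augment(X, kdes)
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(Z, y)
        return kdes, clf

    def _augment(X, kdes):
        # Replace each feature by its estimated log marginal density ratio.
        cols = [k1.score_samples(X[:, j:j + 1]) - k0.score_samples(X[:, j:j + 1])
                for j, (k1, k0) in enumerate(kdes)]
        return np.column_stack(cols)

    def fans_predict(X, kdes, clf):
        return clf.predict(_augment(X, kdes))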
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Y., E-mail: yxc238@psu.edu; Randall, C. A.; Chen, L. Q.
2014-05-05
A self-consistent model has been proposed to study the switchable current-voltage (I-V) characteristics in a Cu/BaTiO3/Cu sandwiched structure, combining the phase-field model of ferroelectric domains and diffusion equations for ionic/electronic transport. The electrochemical transport equations and Ginzburg-Landau equations are solved using the Chebyshev collocation algorithm. We considered a single parallel-plate capacitor configuration which consists of a single-layer BaTiO3 containing a single tetragonal domain oriented normal to the plate electrodes (Cu) and is subject to a sweep of ac bias from −1.0 to 1.0 V at 25 °C. Our simulation clearly shows a rectifying I-V response with rectification ratios of up to 10^2. The diode characteristics are switchable, with an even larger rectification ratio after the polarization direction is flipped. The effects of interfacial polarization charge, dopant concentration, and dielectric constant on the current response were investigated. The switchable I-V behavior is attributed to the polarization bound charges that modulate the bulk conduction.
An error bound for a discrete reduced order model of a linear multivariable system
NASA Technical Reports Server (NTRS)
Al-Saggaf, Ubaid M.; Franklin, Gene F.
1987-01-01
The design of feasible controllers for high dimension multivariable systems can be greatly aided by a method of model reduction. In order for the design based on the order reduction to include a guarantee of stability, it is sufficient to have a bound on the model error. Previous work has provided such a bound for continuous-time systems for algorithms based on balancing. In this note an L-infinity bound is derived for model error for a method of order reduction of discrete linear multivariable systems based on balancing.
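For comparison, the continuous-time balanced-truncation result referred to as previous work takes the familiar form below, where the σ_i are the Hankel singular values of the full n-th order model and G_r retains the first r balanced states; the note derives an analogous L-infinity bound for the discrete-time case:

    \| G - G_r \|_{\infty} \;\le\; 2 \sum_{i=r+1}^{n} \sigma_i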
Parallel Computation of Ocean-Atmosphere-Wave Coupled Storm Surge Model
NASA Astrophysics Data System (ADS)
Kim, K.; Yamashita, T.
2003-12-01
Ocean-atmosphere interactions are very important in the formation and development of tropical storms. These interactions are dominant in exchanging heat, momentum, and moisture fluxes. Heat flux is usually computed using a bulk equation, in which the air-sea interface supplies heat energy to the atmosphere and to the storm. The dynamical interaction is most often one-way, with the atmosphere driving the ocean: the winds transfer momentum to both ocean surface waves and the ocean current. Wind waves play an important role in the exchange of momentum, heat and matter between the atmosphere and the ocean. Storm surges can be considered as changes in mean sea level that result from the frictional stresses of strong winds blowing toward the land, which cause a set-up of the sea level, while the low atmospheric pressure at the centre of the cyclone can additionally raise the sea level. In addition to this rise in water level, another wave factor must be considered: a rise of mean sea level due to white-cap wave dissipation. In bounded bodies of water, such as small seas, wind-driven sea-level set-up is much more serious than the inverted-barometer effect, and the effects of wind waves on the wind-driven current play an important role. It is therefore necessary to develop a coupled system of a full spectral third-generation wind-wave model (WAM or WAVEWATCH III), a meso-scale atmosphere model (MM5) and a coastal ocean model (POM) to simulate these physical interactions. Because the components of the coupled system are too computationally heavy for personal use, a parallel computing system is required. In this study we first developed the coupled system of the atmosphere model, ocean wave model and coastal ocean model on a Beowulf cluster for the simulation of storm surge. It was applied to the storm surge caused by Typhoon Bart (T9918) in the Yatsushiro Sea. The atmosphere model and the ocean model were parallelized using SPMD methods. The wave-current interface model was developed by defining the wave-breaking stresses, and a coupler program was developed to collect and distribute the exchanged data within the parallel system. All models and the coupler execute at the same time, each carrying out its own computation and exchanging data in an organized way. MPMD programming was used to couple the models: the coupler and each model form separate process groups, computation proceeds within each group, and messages are passed across the global communicator when data are exchanged. Data are exchanged every 60 seconds of model time, which is the least common multiple of the time steps of the atmosphere, wave and ocean models. The model was applied to the storm surge simulation in the Yatsushiro Sea, where the observed maximum surge height could not be reproduced by a numerical model that excluded the wave-breaking stress. It is confirmed that the simulation which includes the wave-breaking stress effects can reproduce the observed maximum height, 450 cm, at Matsuai.
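The MPMD coupling described can be illustrated with a schematic mpi4py sketch (our own construction; the original system is not Python, and the group layout, field contents and merging step here are placeholders). One rank acts as the coupler and three ranks stand in for the atmosphere, wave and ocean components, exchanging data every 60 s of model time:

    # Run with: mpiexec -n 4 python couple_sketch.py   (file name is hypothetical)
    from mpi4py import MPI
    import numpy as np

    world = MPI.COMM_WORLD
    rank = world.Get_rank()

    # ranks: 0 = coupler, 1 = atmosphere (MM5), 2 = wave (WAM), 3 = ocean (POM)
    role = ["coupler", "atmosphere", "wave", "ocean"][rank]
    local = world.Split(color=rank, key=0)   # each component's internal communicator (unused here)

    COUPLE_DT = 60.0                  # exchange interval [s], LCM of the component time steps
    t, t_end = 0.0, 3600.0
    field = np.zeros(16)              # placeholder for exchanged fields (wind stress, SST, ...)

    while t < t_end:
        # ... each component would advance its own model by COUPLE_DT here ...
        if role == "coupler":
            data = [world.recv(source=src, tag=1) for src in (1, 2, 3)]
            merged = np.mean(data, axis=0)          # placeholder for regridding/merging
            for dst in (1, 2, 3):
                world.send(merged, dest=dst, tag=2)
        else:
            world.send(field, dest=0, tag=1)
            field = world.recv(source=0, tag=2)
        t += COUPLE_DT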
Local SAR in Parallel Transmission Pulse Design
Lee, Joonsung; Gebhardt, Matthias; Wald, Lawrence L.; Adalsteinsson, Elfar
2011-01-01
The management of local and global power deposition in human subjects (Specific Absorption Rate, SAR) is a fundamental constraint to the application of parallel transmission (pTx) systems. Even though pTx and single-channel systems have to meet the same SAR requirements, the complex behavior of the spatial distribution of local SAR for transmission arrays poses problems that are not encountered in conventional single-channel systems and places additional requirements on pTx RF pulse design. We propose a pTx pulse design method which builds on recent work to capture the spatial distribution of local SAR in numerical tissue models in a compressed parameterization in order to incorporate local SAR constraints within computation times that accommodate pTx pulse design during an in vivo MRI scan. Additionally, the algorithm yields a Protocol-specific Ultimate Peak in Local SAR (PUPiL SAR), which is shown to bound the achievable peak local SAR for a given excitation profile fidelity. The performance of the approach was demonstrated using a numerical human head model and a 7T eight-channel transmit array. The method reduced peak local 10g SAR by 14–66% for slice-selective pTx excitations and 2D selective pTx excitations compared to a pTx pulse design constrained only by global SAR. The primary tradeoff incurred for reducing peak local SAR was an increase in global SAR, up to 34% for the evaluated examples, which is favorable in cases where local SAR constraints dominate the pulse applications. PMID:22083594
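For reference, the pointwise quantity underlying the 10g-averaged local SAR constraint is the standard definition below (our symbols: σ the tissue conductivity, ρ the mass density, E the complex electric-field amplitude produced by the pulse), which the compressed parameterization then bounds over all tissue locations:

    \mathrm{SAR}(\mathbf{r}) = \frac{\sigma(\mathbf{r})}{2\,\rho(\mathbf{r})}\,
    \bigl|\mathbf{E}(\mathbf{r})\bigr|^{2}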
Outward Bound Outcome Model Validation and Multilevel Modeling
ERIC Educational Resources Information Center
Luo, Yuan-Chun
2011-01-01
This study was intended to measure construct validity for the Outward Bound Outcomes Instrument (OBOI) and to predict outcome achievement from individual characteristics and course attributes using multilevel modeling. A sample of 2,340 participants was collected by Outward Bound USA between May and September 2009 using the OBOI. Two phases of…
Dependence in probabilistic modeling Dempster-Shafer theory and probability bounds analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferson, Scott; Nelsen, Roger B.; Hajagos, Janos
2015-05-01
This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
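The classical result underlying such bounding approaches when dependence information is absent is the Frechet-Hoeffding bound on any copula C (standard statement, included here for reference): every dependence model lies between the countermonotonic and comonotonic extremes,

    \max(u + v - 1,\, 0) \;\le\; C(u, v) \;\le\; \min(u, v),
    \qquad u, v \in [0, 1]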
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moradi, Afshin, E-mail: a.moradi@kut.ac.ir
2016-04-15
In a recent article [Niknam et al., Phys. Plasmas 20, 122106 (2013)], Niknam et al. investigated the propagation of TM surface waves on a semi-bounded quantum magnetized collisional plasma in the Faraday configuration (in this case, the magnetic field is parallel to both the plasma surface and the direction of propagation). Here, we present a fresh look at the problem and show that TM surface waves cannot propagate on the surface of the present system. We find that in the Faraday configuration the surface waves acquire both TM and TE components due to the cyclotron motion of electrons. Therefore, the main result of the work by Niknam et al. is incorrect.
Entanglement asymmetry for boosted black branes and the bound
NASA Astrophysics Data System (ADS)
Mishra, Rohit; Singh, Harvendra
2017-06-01
We study the effects of asymmetry in the entanglement thermodynamics of CFT subsystems. It is found that “boosted” Dp-brane backgrounds give rise to the first law of the entanglement thermodynamics where the CFT pressure asymmetry plays a decisive role in the entanglement. Two different strip like subsystems, one parallel to the boost and the other perpendicular, are studied in the perturbative regime Tthermal ≪ TE. We mainly seek to quantify this entanglement asymmetry as a ratio of the first-order entanglement entropies of the excitations. We discuss the AdS-wave backgrounds at zero temperature having maximum asymmetry from where a bound on entanglement asymmetry is obtained. The entanglement asymmetry reduces as we switch on finite temperature in the CFT while it is maximum at zero temperature.
State-independent uncertainty relations and entanglement detection
NASA Astrophysics Data System (ADS)
Qian, Chen; Li, Jun-Li; Qiao, Cong-Feng
2018-04-01
The uncertainty relation is one of the key ingredients of quantum theory. Despite the great efforts devoted to this subject, most of the variance-based uncertainty relations are state-dependent and suffering from the triviality problem of zero lower bounds. Here we develop a method to get uncertainty relations with state-independent lower bounds. The method works by exploring the eigenvalues of a Hermitian matrix composed by Bloch vectors of incompatible observables and is applicable for both pure and mixed states and for arbitrary number of N-dimensional observables. The uncertainty relation for the incompatible observables can be explained by geometric relations related to the parallel postulate and the inequalities in Horn's conjecture on Hermitian matrix sum. Practical entanglement criteria are also presented based on the derived uncertainty relations.
Krasavin, Mikhail; Shetnev, Anton; Sharonova, Tatyana; Baykov, Sergey; Tuccinardi, Tiziano; Kalinin, Stanislav; Angeli, Andrea; Supuran, Claudiu T
2018-02-01
A series of novel aromatic primary sulfonamides decorated with diversely substituted 1,2,4-oxadiazole periphery groups has been prepared using a parallel chemistry approach. The compounds displayed potent inhibition of the cytosolic hCA II and membrane-bound hCA IX isoforms. Due to the different cellular localization of the two target enzymes, the compounds can be viewed as selective inhibition tools for either isoform, depending on the cellular permeability profile. The SAR findings revealed in this study have been well rationalized by docking simulation of the key compounds against the crystal structures of the relevant hCA isoforms. Copyright © 2017. Published by Elsevier Inc.
V-Shaped Molecular Configuration of Wax Esters of Jojoba Oil in a Langmuir Film Model.
Caruso, Benjamín; Martini, M Florencia; Pickholz, Mónica; Perillo, María A
2018-06-19
The aim of the present work was to understand the interfacial properties of a complex mixture of wax esters (WEs) obtained from Jojoba oil (JO). Previously, on the basis of molecular area measurements, a hairpin structure was proposed as the hypothetical configuration of WEs, allowing their organization as compressible monolayers at the air-water interface. In the present work, we contribute further experimental evidence by combining surface pressure (π), surface potential (ΔV), and PM-IRRAS measurements of JO monolayers with molecular dynamics (MD) simulations on a modified JO model. WEs were self-assembled in Langmuir films. Compression isotherms exhibited π lift-off at a mean molecular area of 100 Å²/molecule (A_lift-off) and a collapse point at π_c ≈ 2.2 mN/m and A_c ≈ 77 Å²/molecule. The ΔV profile reflected two dipolar reorganizations, one at A > A_lift-off due to the release of loosely bound water molecules and another at A_c < A < A_lift-off possibly due to reorientations of a more tightly bound water population. This was consistent with the maximal ΔV value calculated according to a model that considered two populations of oriented water, which was very close to the experimental value. The orientation of the ester group assumed in that calculation was coherent with the PM-IRRAS behavior of the carbonyl group, with the C=O oriented toward the water and the C-O oriented parallel to the surface, and was in accordance with their orientational angles (∼45 and ∼90°, respectively) determined by MD simulations. Taken together, the present results confirm a V shape rather than a hairpin configuration of WEs at the air-water interface.
Chakraborty, Sushmita; Nandy, Sudipta; Barthakur, Abhijit
2015-02-01
We investigate coupled nonlinear Schrödinger equations (NLSEs) with variable coefficients and gain. The coupled NLSE is a model equation for optical soliton propagation and their interaction in a multimode fiber medium or in a fiber array. By using Hirota's bilinear method, we obtain the bright-bright and dark-bright combinations of a one-soliton solution (1SS) and two-soliton solutions (2SS) for an n-coupled NLSE with variable coefficients and gain. Crucial properties of two-soliton (dark-bright pair) interactions, such as elastic and inelastic interactions and the dynamics of soliton bound states, are studied using asymptotic analysis and graphical analysis. We show that a bright 2-soliton, in addition to elastic interactions, also exhibits multiple inelastic interactions. A dark 2-soliton, on the other hand, exhibits only elastic interactions. We also observe a breatherlike structure of a bright 2-soliton, a feature that becomes prominent with gain and disappears as the amplitude acquires a minimum value, after which the solitons remain parallel. The dark 2-soliton, however, remains parallel irrespective of the gain. The results found by us might be useful for applications in soliton control, a fiber amplifier, all-optical switching, and optical computing.
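One common form of such an n-coupled NLSE with variable dispersion, nonlinearity and gain coefficients is the following (generic notation of our own; the paper's exact coefficient conventions may differ):

    i\,\frac{\partial q_j}{\partial z}
    + \frac{D(z)}{2}\,\frac{\partial^{2} q_j}{\partial t^{2}}
    + R(z)\Bigl(\sum_{k=1}^{n} |q_k|^{2}\Bigr) q_j
    = i\,\Gamma(z)\, q_j, \qquad j = 1, \dots, n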
Multipactor saturation in parallel-plate waveguides
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorolla, E.; Mattes, M.
2012-07-15
The saturation stage of a multipactor discharge is of interest, since it can guide towards a criterion to assess the multipactor onset. The electron cloud under multipactor regime within a parallel-plate waveguide is modeled by a thin continuous distribution of charge, and the equations of motion are calculated taking into account space-charge effects. Saturation is identified by the interaction of the electron cloud with its image charge. The stability of the electron population growth is analyzed, and two mechanisms of saturation are identified that explain the steady-state multipactor for voltages near above the threshold onset. The impact energy in the collision against the metal plates decreases during the electron population growth due to the attraction of the electron sheet on the image through the initial plate. When this growth remains stable until the impact energy reaches the first cross-over point, the electron surface density tends to a constant value. When the stability is broken before reaching the first cross-over point, the surface charge density oscillates chaotically, bounded within a certain range. In this case, an expression to calculate the maximum electron surface charge density is found whose predictions agree with the simulations when the voltage is not too high.
Specialized Computer Systems for Environment Visualization
NASA Astrophysics Data System (ADS)
Al-Oraiqat, Anas M.; Bashkov, Evgeniy A.; Zori, Sergii A.
2018-06-01
The need for real-time image generation of landscapes arises in various fields as part of tasks solved by virtual and augmented reality systems, as well as geographic information systems. Such systems provide opportunities for collecting, storing, analyzing, and graphically visualizing geographic data. Algorithmic and hardware-software tools for increasing the realism and efficiency of environment visualization in 3D visualization systems are proposed. This paper discusses a modified path-tracing algorithm with a two-level hierarchy of bounding volumes and intersection tests against axis-aligned bounding boxes. The proposed algorithm eliminates branching and is hence better suited to implementation on multi-threaded CPUs and GPUs. A modified ROAM algorithm is used for high-quality visualization of reliefs and landscapes. The algorithm is implemented on parallel systems (clusters and Compute Unified Device Architecture networks). Results show that the implementation on MPI clusters is more efficient than on Graphics Processing Units/Graphics Processing Clusters and allows real-time synthesis. The organization and algorithms of a parallel GPU system for 3D pseudo-stereo image/video synthesis are proposed. The feasibility of executing each stage of 3D pseudo-stereo synthesis on a parallel GPU architecture is analyzed. An experimental prototype of a specialized hardware-software system for 3D pseudo-stereo imaging and video was developed on the CPU/GPU. The experimental results show that the proposed adaptation of 3D pseudo-stereo imaging to the architecture of GPU systems is efficient. It also accelerates the computational procedures of 3D pseudo-stereo synthesis for the anaglyph and anamorphic formats of the 3D stereo frame without performing optimization procedures. The acceleration is on average 11 and 54 times for the test GPUs.
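As an illustration of the branch-reduced intersection test mentioned above, the sketch below implements the standard "slab" ray versus axis-aligned bounding box test with a single final comparison instead of per-axis branching. This is a generic, minimal example under our own naming, in Python rather than the GPU code of the paper; the two-level bounding-volume hierarchy and the actual path tracer are not reproduced here.

```python
import numpy as np

def ray_aabb_hit(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray origin + t*dir (dir passed as 1/dir) hit the box for t >= 0?"""
    t1 = (box_min - origin) * inv_dir
    t2 = (box_max - origin) * inv_dir
    t_near = np.max(np.minimum(t1, t2))   # latest entry time across the three slabs
    t_far = np.min(np.maximum(t1, t2))    # earliest exit time across the three slabs
    return t_far >= max(t_near, 0.0)      # one comparison, no per-axis branching

origin = np.array([0.0, 0.0, 0.0])
direction = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
print(ray_aabb_hit(origin, 1.0 / direction, np.array([1.0, 1.0, 1.0]), np.array([2.0, 2.0, 2.0])))
```

Because the per-axis work is pure arithmetic, the same pattern maps naturally onto SIMD lanes or GPU threads, which is the property the abstract exploits.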
NASA Astrophysics Data System (ADS)
Zhao, Yinjian
2017-09-01
Aiming at high simulation accuracy, a Particle-Particle (PP) Coulombic molecular dynamics model is implemented to study electron-ion temperature relaxation. In this model, Coulomb's law is applied directly in a bounded system with two cutoffs at short and long length scales. By increasing the range between the two cutoffs, it is found that the relaxation rate deviates from the BPS theory and approaches the LS theory and the GMS theory. The effective minimum and maximum impact parameters (bmin* and bmax*) are also obtained. For the simulated plasma condition, bmin* is about 6.352 times smaller than the Landau length (bC), and bmax* is about 2 times larger than the Debye length (λD), where bC and λD are those used in the LS theory. Surprisingly, the effective relaxation time obtained from the PP model is very close to the LS theory and the GMS theory, even though the effective Coulomb logarithm is two times greater than the one used in the LS theory. In addition, this work shows that the PP model, commonly regarded as computationally expensive, is becoming practicable via GPU parallel computing techniques.
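To make the double-cutoff idea concrete, here is a minimal sketch of a direct particle-particle Coulomb evaluation that keeps only pairs whose separation lies between a short-range cutoff r_min and a long-range cutoff r_max. The names, unit system (k = 1), and O(N²) loop are illustrative assumptions; the study itself would use GPU-parallel kernels, physical constants, and far larger particle counts.

```python
import numpy as np

def coulomb_accel(pos, charges, masses, r_min, r_max, k=1.0):
    """Pairwise Coulomb accelerations, discarding pairs outside (r_min, r_max)."""
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            r = np.linalg.norm(d)
            if r_min < r < r_max:                    # both cutoffs applied here
                f = k * charges[i] * charges[j] * d / r**3
                acc[i] += f / masses[i]
                acc[j] -= f / masses[j]
    return acc

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, size=(50, 3))
charges = np.where(np.arange(50) % 2 == 0, -1.0, 1.0)    # toy electron/ion charges
masses = np.where(np.arange(50) % 2 == 0, 1.0, 1836.0)   # toy electron/ion masses
print(coulomb_accel(pos, charges, masses, r_min=0.1, r_max=5.0)[0])
```

Widening (r_min, r_max) admits more of the interaction, which is the knob the abstract turns to move between the BPS, LS, and GMS regimes.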
NASA Astrophysics Data System (ADS)
Boyd, John P.; Sanjaya, Edwin
2014-03-01
We revisit early models of steady western boundary currents [Gulf Stream, Kuroshio, etc.] to explore the role of irregular coastlines on jets, both to advance the research frontier and to illuminate for education. In the framework of a steady-state, quasigeostrophic model with viscosity, bottom friction and nonlinearity, we prove that rotating a straight coastline, initially parallel to the meridians, significantly thickens the western boundary layer. We analyze an infinitely long, straight channel with arbitrary orientation and bottom friction using an exact solution and singular perturbation theory, and show that the model, though simpler than Stommel's, nevertheless captures both the western boundary jet (“Gulf Stream”) and the “orientation effect”. In the rest of the article, we restrict attention to the Stommel flow (that is, linear and inviscid except for bottom friction) and apply matched asymptotic expansions, radial basis function, Fourier-Chebyshev and Chebyshev-Chebyshev pseudospectral methods to explore the effects of coastal geometry in a variety of non-rectangular domains bounded by a circle, parabolas and squircles. Although our oceans are unabashedly idealized, the narrow spikes, broad jets and stationary points vividly illustrate the power and complexity of coastal control of western boundary layers.
NASA Astrophysics Data System (ADS)
Yin, An; Pappalardo, Robert T.
2015-11-01
Despite a decade of intense research the mechanical origin of the tiger-stripe fractures (TSF) and their geologic relationship to the hosting South Polar Terrain (SPT) of Enceladus remain poorly understood. Here we show via systematic photo-geological mapping that the semi-squared SPT is bounded by right-slip, left-slip, extensional, and contractional zones on its four edges. Discrete deformation along the edges in turn accommodates translation of the SPT as a single sheet with its transport direction parallel to the regional topographic gradient. This parallel relationship implies that the gradient of gravitational potential energy drove the SPT motion. In map view, internal deformation of the SPT is expressed by distributed right-slip shear parallel to the SPT transport direction. The broad right-slip shear across the whole SPT was facilitated by left-slip bookshelf faulting along the parallel TSF. We suggest that the flow-like tectonics, to the first approximation across the SPT on Enceladus, is best explained by the occurrence of a transient thermal event, which allowed the release of gravitational potential energy via lateral viscous flow within the thermally weakened ice shell.
Multisensor Parallel Largest Ellipsoid Distributed Data Fusion with Unknown Cross-Covariances
Liu, Baoyu; Zhan, Xingqun; Zhu, Zheng H.
2017-01-01
As the largest ellipsoid (LE) data fusion algorithm can only be applied to a two-sensor system, in this contribution a parallel fusion structure is proposed to introduce the LE algorithm into a multisensor system with unknown cross-covariances, and three parallel fusion structures based on different estimate-pairing methods are presented and analyzed. In order to assess the influence of the fusion structure on fusion performance, two fusion performance assessment parameters are defined: Fusion Distance and Fusion Index. Moreover, the formula for calculating the upper bounds of the actual fused error covariances of the presented multisensor LE fusers is also provided. As demonstrated with simulation examples, the Fusion Index indicates a fuser's actual fused accuracy and its sensitivity to the sensor order, as well as its robustness to the accuracy of newly added sensors. Compared to the LE fuser with a sequential structure, the LE fusers with the proposed parallel structures not only significantly improve these properties, but also achieve better performance in consistency and computational efficiency. The presented multisensor LE fusers generally have better accuracy than the covariance intersection (CI) fusion algorithm and are consistent when the local estimates are weakly correlated. PMID:28661442
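For orientation, the covariance intersection (CI) rule that the abstract uses as a comparison baseline can be written compactly: the fused information matrix is a convex combination of the local information matrices, with the weight chosen (here by a grid search on the trace) so that the result stays consistent without knowledge of the cross-covariance. The sketch below is that baseline only; it is not the largest-ellipsoid algorithm or the proposed parallel structures, and the example numbers are invented.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse two estimates with unknown cross-covariance; pick the CI weight minimizing trace(P)."""
    best_x, best_P = None, None
    for w in np.linspace(0.0, 1.0, n_grid):
        info = w * np.linalg.inv(P1) + (1.0 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(info)
        if best_P is None or np.trace(P) < np.trace(best_P):
            best_P = P
            best_x = P @ (w * np.linalg.inv(P1) @ x1 + (1.0 - w) * np.linalg.inv(P2) @ x2)
    return best_x, best_P

x1, P1 = np.array([1.0, 0.0]), np.array([[2.0, 0.4], [0.4, 1.0]])
x2, P2 = np.array([1.2, -0.1]), np.array([[1.0, -0.2], [-0.2, 3.0]])
xf, Pf = covariance_intersection(x1, P1, x2, P2)
print(xf, np.trace(Pf))
```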
Preparation of the Nuclear Matrix for Parallel Microscopy and Biochemical Analyses.
Wilson, Rosemary H C; Hesketh, Emma L; Coverley, Dawn
2016-01-04
Immobilized proteins within the nucleus are usually identified by treating cells with detergent. The detergent-resistant fraction is often assumed to be chromatin and is described as such in many studies. However, this fraction consists of both chromatin-bound and nuclear-matrix-bound proteins. To investigate nuclear-matrix-bound proteins alone, further separation of these fractions is required; the DNA must be removed so that the remaining proteins can be compared with those from untreated cells. This protocol uses a nonionic detergent (Triton X-100) to remove membranes and soluble proteins from cells under physiologically relevant salt concentrations, followed by extraction with 0.5 M NaCl, digestion with DNase I, and removal of fragmented DNA. It uses a specialized buffer (cytoskeletal buffer) to stabilize the cytoskeleton and nuclear matrix in relatively gentle conditions. Nuclear matrix proteins can then be assessed by either immunofluorescence (IF) or immunoblotting (IB). IB has the advantage of resolving different forms of a protein of interest, and the soluble fractions can be analyzed. The major advantage of IF analysis is that individual cells (rather than homogenized populations) can be monitored, and the spatial arrangement of proteins bound to residual nuclear structures can be revealed. © 2016 Cold Spring Harbor Laboratory Press.
NASA Astrophysics Data System (ADS)
Kaur, Mandeep; Singh, BirBikram; Sharma, Manoj K.; Gupta, Raj K.
2015-08-01
The dynamics of compound nuclei formed in reactions using loosely bound projectiles are analyzed within the framework of the dynamical cluster-decay model (DCM) of Gupta and collaborators. We have considered reactions with neutron-rich and neutron-deficient projectiles, respectively 7Li, 9Be, and 7Be, on various targets at three different Elab energies, forming compound nuclei in the mass region A ≈ 30-200. For these reactions, the contributions of light-particle (LP, A ≤ 4) cross sections σ_LP, energetically favored intermediate-mass-fragment (IMF, 5 ≤ A2 ≤ 20) cross sections σ_IMF, and fusion-fission (ff) cross sections σ_ff constitute σ_fus (= σ_LP + σ_IMF + σ_ff); i.e., the contributions of the emitted LPs, IMFs, and ff fragments are added for all angular momenta up to the ℓ_max value for the respective reactions. Interestingly, we find that the empirically fitted neck-length parameter ΔR_emp, the only parameter of the DCM, is uniquely fixed to address σ_fus for all reactions having the same loosely bound projectile at a chosen incident laboratory energy. It may be noted that, in the DCM, the dynamical collective mass motion of preformed LPs, IMFs, and ff fragments or clusters, through the modified interaction potential barrier, is treated on a parallel footing. The modification of the barrier is due to the nonzero ΔR_emp, and the values of the corresponding modified interaction-barrier heights ΔV_B^emp for such reactions are almost of the same order, specifically at the respective ℓ_max value.
Gualtieri, R; Mollo, V; Braun, S; Barbato, V; Fiorentino, I; Talevi, R
2012-10-15
Different in vitro models have been developed to study the interaction of gametes and embryos with the maternal tract. In cattle, the interaction of the oviduct with gametes and embryos has been classically studied using oviductal explants or monolayers (OMs). Explants are well differentiated but have to be used within 24 h after collection, whereas OMs can be used for a longer time after cell confluence but dedifferentiate during culture, losing cell polarity and ciliation. Herein, OMs were cultured either in M199 plus 10% fetal calf serum or in a semidefined culture medium (Gray's medium), in an immersed condition or on collagen-coated microporous polyester or polycarbonate inserts under air-liquid interface conditions. The influence of culture conditions on long-term viability and differentiation of OMs was evaluated through scanning electron microscopy, localization of centrin and tubulin at the confocal laser scanning microscope, and assessment of maintenance of viability of sperm bound to OMs. Findings demonstrated that OMs cultured in an immersed condition with Gray's medium retain a better morphology, do not exhibit signs of crisis at least until 3 wks postconfluence, and maintain the viability of bound sperm significantly better than parallel OMs cultured in M199 plus 10% fetal calf serum. OM culture with Gray's medium under air-liquid interface conditions on porous inserts promotes cell polarity, ciliation, and maintenance of bound sperm viability at least until 3 wks postconfluence. In conclusion, oviduct culture in Gray's medium in an immersed or air-liquid condition allows long-term culture and, in the latter case, also ciliation of bovine OMs, and may represent in vitro systems that mimic more closely the biological processes modulated by the oviduct in vivo. Copyright © 2012 Elsevier Inc. All rights reserved.
Vector tomography for reconstructing electric fields with non-zero divergence in bounded domains
NASA Astrophysics Data System (ADS)
Koulouri, Alexandra; Brookes, Mike; Rimpiläinen, Ville
2017-01-01
In vector tomography (VT), the aim is to reconstruct an unknown multi-dimensional vector field using line integral data. In the case of a 2-dimensional VT, two types of line integral data are usually required. These data correspond to integration of the parallel and perpendicular projection of the vector field along the integration lines and are called the longitudinal and transverse measurements, respectively. In most cases, however, the transverse measurements cannot be physically acquired. Therefore, the VT methods are typically used to reconstruct divergence-free (or source-free) velocity and flow fields that can be reconstructed solely from the longitudinal measurements. In this paper, we show how vector fields with non-zero divergence in a bounded domain can also be reconstructed from the longitudinal measurements without the need of explicitly evaluating the transverse measurements. To the best of our knowledge, VT has not previously been used for this purpose. In particular, we study low-frequency, time-harmonic electric fields generated by dipole sources in convex bounded domains which arise, for example, in electroencephalography (EEG) source imaging. We explain in detail the theoretical background, the derivation of the electric field inverse problem and the numerical approximation of the line integrals. We show that fields with non-zero divergence can be reconstructed from the longitudinal measurements with the help of two sparsity constraints that are constructed from the transverse measurements and the vector Laplace operator. As a comparison to EEG source imaging, we note that VT does not require mathematical modeling of the sources. By numerical simulations, we show that the pattern of the electric field can be correctly estimated using VT and the location of the source activity can be determined accurately from the reconstructed magnitudes of the field.
An ionic-chemical-mechanical model for muscle contraction.
Manning, Gerald S
2016-12-01
The dynamic process underlying muscle contraction is the parallel sliding of thin actin filaments along an immobile thick myosin fiber powered by oar-like movements of protruding myosin cross bridges (myosin heads). The free energy for functioning of the myosin nanomotor comes from the hydrolysis of ATP bound to the myosin heads. The unit step of translational movement is based on a mechanical-chemical cycle involving ATP binding to myosin, hydrolysis of the bound ATP with ultimate release of the hydrolysis products, stress-generating conformational changes in the myosin cross bridge, and relief of built-up stress in the myosin power stroke. The cycle is regulated by a transition between weak and strong actin-myosin binding affinities. The dissociation of the weakly bound complex by addition of salt indicates the electrostatic basis for the weak affinity, while structural studies demonstrate that electrostatic interactions among negatively charged amino acid residues of actin and positively charged residues of myosin are involved in the strong binding interface. We therefore conjecture that intermediate states of increasing actin-myosin engagement during the weak-to-strong binding transition also involve electrostatic interactions. Methods of polymer solution physics have shown that the thin actin filament can be regarded in some of its aspects as a net negatively charged polyelectrolyte. Here we employ polyelectrolyte theory to suggest how actin-myosin electrostatic interactions might be of significance in the intermediate stages of binding, ensuring an engaged power stroke of the myosin motor that transmits force to the actin filament, and preventing the motor from getting stuck in a metastable pre-power stroke state. We provide electrostatic force estimates that are in the pN range known to operate in the cycle. © 2016 Wiley Periodicals, Inc.
Parallelization and automatic data distribution for nuclear reactor simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebrock, L.M.
1997-07-01
Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.
Low-frequency surface waves on semi-bounded magnetized quantum plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moradi, Afshin, E-mail: a.moradi@kut.ac.ir
2016-08-15
The propagation of low-frequency electrostatic surface waves on the interface between a vacuum and an electron-ion quantum plasma is studied in the direction perpendicular to an external static magnetic field which is parallel to the interface. A new dispersion equation is derived by employing both the quantum magnetohydrodynamic and Poisson equations. It is shown that the dispersion equations for forward and backward-going surface waves are different from each other.
Reducing Response Time Bounds for DAG-Based Task Systems on Heterogeneous Multicore Platforms
2016-01-01
The report develops response-time analysis for DAG-based real-time task systems implemented on heterogeneous multicore platforms. [Only fragments of the original abstract and its reference list survive in the source record.]
Makran Mountain Range, Iran and Pakistan
NASA Technical Reports Server (NTRS)
1983-01-01
The long folded mountain ridges and valleys of the coastal Makran Ranges of Iran and Pakistan (26.0N, 63.0E) illustrate the classical trellis type of drainage pattern, common in this region. The Dasht River and its tributaries form the principal drainage network for this area. To the left, the continental drift of the northward-bound Indian subcontinent has caused the east-west parallel ranges to bend in a great northward arc.
Lieb-Robinson bounds for spin-boson lattice models and trapped ions.
Jünemann, J; Cadarso, A; Pérez-García, D; Bermudez, A; García-Ripoll, J J
2013-12-06
We derive a Lieb-Robinson bound for the propagation of spin correlations in a model of spins interacting through a bosonic lattice field, which satisfies a Lieb-Robinson bound in the absence of spin-boson couplings. We apply these bounds to a system of trapped ions and find that the propagation of spin correlations, as mediated by the phonons of the ion crystal, can be faster than the regimes currently explored in experiments. We propose a scheme to test the bounds by measuring retarded correlation functions via the crystal fluorescence.
ERIC Educational Resources Information Center
von Davier, Matthias
2016-01-01
This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
The Chemical and Biological Effects of cis-Dichlorodiammineplatinum (II), an Antitumor Agent, on DNA
Munchausen, Linda L.
1974-01-01
cis-Dichlorodiammineplatinum (II) binds irreversibly to the bases in DNA; the amount of platinum complex bound can be determined from changes in the ultraviolet absorption spectrum. As the ratio of platinum to phosphate is increased, an increasing inactivation of bacterial transforming DNA is observed. At a ratio that corresponds to spectrometric saturation, transforming activity is inactivated >10^5-fold. The trans isomer of the platinum complex, which is not effective against tumors, induces a similar inactivation of transforming DNA but with half the efficiency, indicating a different mode of binding. The sensitivity to inactivation by the cis isomer varies slightly with the genetic marker assayed but is not dependent on the excision repair system. Uptake of DNA by competent cells is unaffected by bound platinum complex; however, integration of platinum-bound transforming DNA into the host genome decreases as the mole fraction of platinum increases. This loss of integration parallels the decreased transforming activity of the DNA. Although the drug induces interstrand crosslinks in DNA in vitro, these crosslinks are relatively rare events and cannot account for the observed inactivation. PMID:4548188
Ground-State Structure of the Proton-Bound Formate Dimer by Cold-Ion Infrared Action Spectroscopy.
Thomas, Daniel; Marianski, Mateusz; Mucha, Eike; Meijer, Gerard; Johnson, Mark A; von Helden, Gert
2018-06-19
The proton-bound dicarboxylate motif, RCOO-·H+·-OOCR, is a prevalent chemical configuration found in many condensed phase systems. We study the archetypal proton-bound formate dimer, HCOO-·H+·-OOCH, utilizing cold-ion infrared action spectroscopy in the photon energy range of 400-1800 cm⁻¹. The spectrum obtained at ~0.4 K utilizing action spectroscopy of ions captured in helium nanodroplets is compared to that measured at ~10 K by photodissociation of Ar-ion complexes. Similar band patterns are obtained by the two techniques that are consistent with calculations for a C2 symmetry structure with a proton shared equally between the two formate moieties. Isotopic substitution experiments point to the nominal parallel stretch of the bridging proton appearing as a sharp, dominant feature near 600 cm⁻¹. Multidimensional anharmonic calculations, however, reveal that the bridging proton motion is strongly coupled to the flanking -COO- framework, an effect that is qualitatively in line with the expected change in -C=O bond rehybridization upon protonation. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Spin-dependent recombination probed through the dielectric polarizability
Bayliss, Sam L.; Greenham, Neil C.; Friend, Richard H.; Bouchiat, Hélène; Chepelianskii, Alexei D
2015-01-01
Despite residing in an energetically and structurally disordered landscape, the spin degree of freedom remains a robust quantity in organic semiconductor materials due to the weak coupling of spin and orbital states. This enforces spin-selectivity in recombination processes which plays a crucial role in optoelectronic devices, for example, in the spin-dependent recombination of weakly bound electron-hole pairs, or charge-transfer states, which form in a photovoltaic blend. Here, we implement a detection scheme to probe the spin-selective recombination of these states through changes in their dielectric polarizability under magnetic resonance. Using this technique, we access a regime in which the usual mixing of spin-singlet and spin-triplet states due to hyperfine fields is suppressed by microwave driving. We present a quantitative model for this behaviour which allows us to estimate the spin-dependent recombination rate, and draw parallels with the Majorana–Brossel resonances observed in atomic physics experiments. PMID:26439933
NASA Technical Reports Server (NTRS)
Mayhew, M. A.; Thomas, H. H.; Wasilewski, P. J.
1981-01-01
An equivalent layer magnetization model is discussed. Inversion of long wavelength satellite magnetic anomaly data indicates a very magnetic source region centered in south central Kentucky. Refraction profiles suggest that the source of the gravity anomaly is a large mass of rock occupying much of the crustal thickness. The outline of the source delineated by gravity contours is also discernible in aeromagnetic anomaly patterns. The mafic plutonic complex, and several lines of evidence are consistent with a rift association. The body is, however, clearly related to the inferred position of the Grenville Front. It is bounded on the north by the fault zones of the 38th Parallel Lineament. It is suggested that such magnetization levels are achieved with magnetic mineralogies produced by normal oxidation and metamorphic processes and enhanced by viscous build-up, especially in mafic rocks of alkaline character.
Composite Intelligent Learning Control of Strict-Feedback Systems With Disturbance.
Xu, Bin; Sun, Fuchun
2018-02-01
This paper addresses the dynamic surface control of uncertain nonlinear systems on the basis of composite intelligent learning and a disturbance observer, in the presence of unknown system nonlinearity and time-varying disturbance. A serial-parallel estimation model with intelligent approximation and disturbance estimation is built to obtain the prediction error, and in this way the composite law for updating the weights is constructed. The nonlinear disturbance observer is developed using intelligent approximation information, and the disturbance estimate is guaranteed to converge to a bounded compact set. The highlight is that, unlike previous work aimed directly at asymptotic stability, the transparency of the intelligent approximation and disturbance estimation is included in the control scheme. Uniform ultimate boundedness is analyzed via the Lyapunov method. Simulation verification shows that composite intelligent learning with a disturbance observer can efficiently estimate the effect caused by system nonlinearity and disturbance, and that the proposed approach obtains better performance with higher accuracy.
Wake Vortex Transport in Proximity to the Ground
NASA Technical Reports Server (NTRS)
Hamilton, David W.; Proctor, Fred H.
2000-01-01
A sensitivity study for aircraft wake vortex transport has been conducted using a validated large eddy simulation (LES) model. The study assumes neutrally stratified and nonturbulent environments and includes the consequences of the ground. The numerical results show that the nondimensional lateral transport is primarily influenced by the magnitude of the ambient crosswind and is insensitive to aircraft type. In most of the simulations, the ground effect extends the lateral position of the downwind vortex about one initial vortex spacing (b₀) in the downstream direction. Further extension by as much as one b₀ occurs when the downwind vortex remains 'in ground effect' (IGE) for relatively long periods of time. Results also show that a layer-averaged ambient wind velocity can be used to bound the time for lateral transport of wake vortices to ensure safe operations on a parallel runway.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, Zachary; Neuert, Gregor; Department of Pharmacology, School of Medicine, Vanderbilt University, Nashville, Tennessee 37232
2016-08-21
Emerging techniques now allow for precise quantification of distributions of biological molecules in single cells. These rapidly advancing experimental methods have created a need for more rigorous and efficient modeling tools. Here, we derive new bounds on the likelihood that observations of single-cell, single-molecule responses come from a discrete stochastic model, posed in the form of the chemical master equation. These strict upper and lower bounds are based on a finite state projection approach, and they converge monotonically to the exact likelihood value. These bounds allow one to discriminate rigorously between models and with a minimum level of computational effort. In practice, these bounds can be incorporated into stochastic model identification and parameter inference routines, which improve the accuracy and efficiency of endeavors to analyze and predict single-cell behavior. We demonstrate the applicability of our approach using simulated data for three example models as well as for experimental measurements of a time-varying stochastic transcriptional response in yeast.
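To illustrate the finite state projection (FSP) construction behind these likelihood bounds, the sketch below truncates the chemical master equation of a toy birth-death (constitutive production and first-order degradation) model, propagates it with a matrix exponential, and reads off the probability of an observed molecule count. Because probability mass leaks out of the truncated state space, this value is a lower bound on the exact single-observation likelihood, and the lost mass bounds the gap. The model, rates, truncation level, and observation are invented for illustration and are not the paper's examples.

```python
import numpy as np
from scipy.linalg import expm

kr, g, t_obs, N = 10.0, 1.0, 2.0, 60       # production rate, degradation rate, time, truncation size
A = np.zeros((N, N))                        # truncated CME generator, columns indexed by current count n
for n in range(N):
    if n + 1 < N:
        A[n + 1, n] += kr                   # birth n -> n+1 (kept only inside the projection)
    A[n, n] -= kr                           # outflow still counted, so mass can leak out of the projection
    if n > 0:
        A[n - 1, n] += n * g                # death n -> n-1
        A[n, n] -= n * g

p0 = np.zeros(N); p0[0] = 1.0               # start with zero molecules
pt = expm(A * t_obs) @ p0                   # truncated distribution at the observation time
observed = 8
print("FSP lower bound on P(X(t) = 8):", pt[observed])
print("probability mass lost to truncation:", 1.0 - pt.sum())
```

Enlarging N shrinks the leaked mass, which is what makes bounds of this type converge monotonically to the exact likelihood.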
Although detailed thermodynamic analyses of the 2-pK diffuse layer surface complexation model generally specify bound site activity coefficients for the purpose of accounting for those non-ideal excess free energies contributing to bound site electrochemical potentials, in applic...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Jens; D’Avezac, Mayeul; Hetherington, James
2013-12-14
Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of elementary reactions constituting the reaction mechanism, and the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimations of catalytic rate [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a FORTRAN2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.
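As a toy illustration of a kinetic Monte Carlo step with adsorbate lateral interactions, the sketch below simulates lattice adsorption and desorption in which each site's desorption rate depends on its first-nearest-neighbor occupancy. This is not Zacros, its graph-theoretical event handling, or a full cluster-expansion Hamiltonian; the lattice size, rates, and single pairwise interaction energy are made-up parameters used only to show the rate-catalogue, event-selection, and clock-advance mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 20                                           # periodic square lattice
occ = np.zeros((L, L), dtype=int)
k_ads, k_des0, eps = 1.0, 0.5, 0.1               # adsorption rate, bare desorption rate, 1NN repulsion (in kT)

def neighbors(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

t = 0.0
for _ in range(2000):
    rates, events = [], []
    for i in range(L):
        for j in range(L):
            if occ[i, j] == 0:
                rates.append(k_ads); events.append(("ads", i, j))
            else:
                n1 = sum(occ[a, b] for a, b in neighbors(i, j))
                rates.append(k_des0 * np.exp(eps * n1))   # repulsive neighbors make desorption faster
                events.append(("des", i, j))
    rates = np.array(rates)
    total = rates.sum()
    idx = rng.choice(len(rates), p=rates / total)         # pick one event proportionally to its rate
    kind, i, j = events[idx]
    occ[i, j] = 1 if kind == "ads" else 0
    t += rng.exponential(1.0 / total)                      # advance the KMC clock
print("coverage:", occ.mean(), "simulated time:", t)
```

In a cluster-expansion treatment the exponent would sum many figures (pairs, triplets, long-range terms) rather than a single first-nearest-neighbor count, which is exactly the difference the benchmark in the abstract quantifies.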
Validation of the SURE Program, phase 1
NASA Technical Reports Server (NTRS)
Dotson, Kelly J.
1987-01-01
Presented are the results of the first phase in the validation of the SURE (Semi-Markov Unreliability Range Evaluator) program. The SURE program gives lower and upper bounds on the death-state probabilities of a semi-Markov model. With these bounds, the reliability of a semi-Markov model of a fault-tolerant computer system can be analyzed. For the first phase in the validation, fifteen semi-Markov models were solved analytically for the exact death-state probabilities and these solutions compared to the corresponding bounds given by SURE. In every case, the SURE bounds covered the exact solution. The bounds, however, had a tendency to separate in cases where the recovery rate was slow or the fault arrival rate was fast.
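For intuition about the quantity SURE bounds, the sketch below computes the exact death-state probability of a small, purely Markovian fault-tolerant system by a transient matrix-exponential solve; an exact value of this kind is what the SURE lower and upper bounds are expected to bracket. The three-state model, rates, and mission time are invented and are not among the fifteen validation models, and a genuine semi-Markov model (with non-exponential recovery holding times) would not reduce to this single matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 1.0e-4, 1.0                      # per-hour failure and recovery rates (illustrative)
# States: 0 = both units up, 1 = one failed / recovery in progress, 2 = system failure (absorbing).
Q = np.array([[-2.0 * lam,   2.0 * lam,  0.0],
              [        mu, -(mu + lam),  lam],
              [       0.0,         0.0,  0.0]])
T = 10.0                                   # mission time in hours
p0 = np.array([1.0, 0.0, 0.0])
pT = p0 @ expm(Q * T)                      # transient solution of the continuous-time Markov chain
print("exact P(death state by T):", pT[2])
```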
Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinlvas; Bakhtiari-Nejad, Maryam
2009-01-01
This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
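The matrix measure (logarithmic norm) at the core of that estimation technique is easy to compute for the standard induced norms; the sketch below evaluates the 1-, 2-, and infinity-norm measures of a given matrix. It shows only the measure computation, not the paper's bounded-linear-stability construction or its delay-margin formula, and the example matrix is arbitrary.

```python
import numpy as np

def matrix_measure(A, norm="2"):
    """Logarithmic norm mu(A) for the 1-, 2-, and inf-induced matrix norms."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if norm == "1":      # column-based formula
        return max(A[j, j] + sum(abs(A[i, j]) for i in range(n) if i != j) for j in range(n))
    if norm == "2":      # largest eigenvalue of the symmetric part
        return float(np.linalg.eigvalsh((A + A.T) / 2.0).max())
    if norm == "inf":    # row-based formula
        return max(A[i, i] + sum(abs(A[i, j]) for j in range(n) if j != i) for i in range(n))
    raise ValueError("norm must be '1', '2', or 'inf'")

A = np.array([[-2.0, 1.0], [0.5, -3.0]])
print(matrix_measure(A, "1"), matrix_measure(A, "2"), matrix_measure(A, "inf"))
```

A negative measure certifies exponential contraction of the unforced linear system in the corresponding norm, which is the kind of local certificate a windowed, locally bounded linear approximation can exploit.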
ERIC Educational Resources Information Center
McPeake, John D.; And Others
1991-01-01
Describes adolescent chemical dependency treatment model developed at Beech Hill Hospital (New Hampshire) which integrated Twelve Step-oriented alcohol and drug rehabilitation program with experiential education school, Hurricane Island Outward Bound School. Describes Beech Hill Hurricane Island Outward Bound School Adolescent Chemical Dependency…
LISA pathfinder appreciably constrains collapse models
NASA Astrophysics Data System (ADS)
Helou, Bassam; Slagmolen, B. J. J.; McClelland, David E.; Chen, Yanbei
2017-04-01
Spontaneous collapse models are phenomenological theories formulated to address major difficulties in macroscopic quantum mechanics. We place significant bounds on the parameters of the leading collapse models, the continuous spontaneous localization (CSL) model and the Diosi-Penrose (DP) model, by using LISA Pathfinder's measurement, at a record accuracy, of the relative acceleration noise between two free-falling macroscopic test masses. In particular, we bound the CSL collapse rate to be at most (2.96 ± 0.12) × 10⁻⁸ s⁻¹. This competitive bound explores a new frequency regime, 0.7 to 20 mHz, and overlaps with the lower bound 10^(-8±2) s⁻¹ proposed by Adler in order for the CSL collapse noise to be substantial enough to explain the phenomenology of quantum measurement. Moreover, we bound the regularization cutoff scale used in the DP model to prevent divergences to be at least 40.1 ± 0.5 fm, which is larger than the size of any nucleus. Thus, we rule out the DP model if the cutoff is the size of a fundamental particle.
iGen: An automated generator of simplified models with provable error bounds.
NASA Astrophysics Data System (ADS)
Tang, D.; Dobbie, S.
2009-04-01
Climate models employ various simplifying assumptions and parameterisations in order to increase execution speed. However, in order to draw conclusions about the Earth's climate from the results of a climate simulation, it is necessary to have information about the error that these assumptions and parameterisations introduce. A novel computer program, called iGen, is being developed which automatically generates fast, simplified models by analysing the source code of a slower, high resolution model. The resulting simplified models have provable bounds on error compared to the high resolution model and execute at speeds that are typically orders of magnitude faster. iGen's input is a definition of the prognostic variables of the simplified model, a set of bounds on acceptable error and the source code of a model that captures the behaviour of interest. In the case of an atmospheric model, for example, this would be a global cloud resolving model with very high resolution. Although such a model would execute far too slowly to be used directly in a climate model, iGen never executes it. Instead, it converts the code of the resolving model into a mathematical expression which is then symbolically manipulated and approximated to form a simplified expression. This expression is then converted back into a computer program and output as a simplified model. iGen also derives and reports formal bounds on the error of the simplified model compared to the resolving model. These error bounds are always maintained below the user-specified acceptable error. Results will be presented illustrating the success of iGen's analysis of a number of example models. These extremely encouraging results have led to work, currently underway, to analyse a cloud resolving model and so produce an efficient parameterisation of moist convection with formally bounded error.
Computing danger zones for provably safe closely spaced parallel approaches: Theory and experiment
NASA Astrophysics Data System (ADS)
Teo, Rodney
In poor visibility, paired approaches to airports with closely spaced parallel runways are not permitted, thus halving the arrival rate. With Global Positioning System technology, datalinks and cockpit displays, this could be averted. One important problem is ensuring safety during a blundered approach by one aircraft. This is ongoing research. A danger zone around the blunderer is required. If the correct danger zone could be calculated, then it would be possible to get 100% of clear-day capacity on poor-visibility days even on 750-foot runways. The danger zones vary significantly during an approach, and calculating them in real time would be a significant advance. Approximations (e.g. outer bounds) are not good enough. This thesis presents a way to calculate these danger zones in real time for a very broad class of blunder trajectories. The approach in this thesis differs from others in that it guarantees safety for any possible blunder trajectory as long as the speeds and turn rates of the blunder are within certain bounds. In addition, the approach considers all emergency evasive maneuvers whose speeds and turn rates are within certain bounds about a nominal emergency evasive maneuver. For all combinations of these blunder and evasive maneuver trajectories, it guarantees that the evasive maneuver is safe. For more than 1 million simulation runs, the algorithm shows a 100% rate of Successful Alerts and a 0% rate of Collisions Given an Alert. As an experimental testbed, two 10-ft wingspan fully autonomous unmanned aerial vehicles and a ground station were developed together with J. S. Jang. The development includes the design and flight testing of automatic controllers. The testbed is used to demonstrate the algorithm implementation through an autonomous closely spaced parallel approach, with one aircraft programmed to blunder. The other aircraft responds according to the result of the algorithm on board and evades autonomously when required. This experimental demonstration was successfully conducted, showing the implementation of the algorithm and, in particular, demonstrating that it can run in real time. Finally, with the necessary sensors and datalink, and the appropriate procedures in place, the algorithm developed in this thesis will enable 100% of clear-day capacity on poor-visibility days even on 750-foot runways.
Twisting, supercoiling and stretching in protein bound DNA
NASA Astrophysics Data System (ADS)
Lam, Pui-Man; Zhen, Yi
2018-04-01
We have calculated theoretical results for the torque and slope of the twisted DNA, with various proteins bound on it, using the Neukirch-Marko model, in the regime where plectonemes exist. We found that the torque in the protein bound DNA decreases compared to that in the bare DNA. This is caused by the decrease in the free energy g(f) , and hence the smaller persistence lengths, in the case of protein bound DNA. We hope our results will encourage experimental investigations of supercoiling in protein bound DNA, which can provide further tests of the Neukirch-Marko model.
NASA Astrophysics Data System (ADS)
Sawyer, Derek E.; Flemings, Peter B.; Dugan, Brandon; Germaine, John T.
2009-10-01
Clay-rich mass transport deposits (MTDs) in the Ursa Basin, Gulf of Mexico, record failures that mobilized along extensional failure planes and transformed into long runout flows. Failure proceeded retrogressively: scarp formation unloaded adjacent sediment causing extensional failure that drove successive scarp formation updip. This model is developed from three-dimensional seismic reflection data, core and log data from Integrated Ocean Drilling Project (IODP) Expedition 308, and triaxial shear experiments. MTDs are imaged seismically as low-amplitude zones above continuous, grooved, high-amplitude basal reflections and are characterized by two seismic facies. A Chaotic facies typifies the downdip interior, and a Discontinuous Stratified facies typifies the headwalls/sidewalls. The Chaotic facies contains discontinuous, high-amplitude reflections that correspond to flow-like features in amplitude maps: it has higher bulk density, resistivity, and shear strength, than bounding sediment. In contrast, the Discontinuous Stratified facies contains relatively dim reflections that abut against intact pinnacles of parallel-stratified reflections: it has only slightly higher bulk density, resistivity, and shear strength than bounding sediment, and deformation is limited. In both facies, densification is greatest at the base, resulting in a strong basal reflection. Undrained shear tests document strain weakening (sensitivity = 3). We estimate that failure at 30 meters below seafloor will occur when overpressure = 70% of the hydrostatic effective stress: under these conditions soil will liquefy and result in long runout flows.
Blakemore, James D.; Hull, Jonathan F.
2012-01-01
The speciation behavior of a water-soluble manganese(III) tetrasulfonated phthalocyanine complex was investigated with UV-visible and electron paramagnetic resonance (EPR) spectroscopies, as well as cyclic voltammetry. Parallel-mode EPR (in dimethylformamide:pyridine solvent mix) reveals a six-line hyperfine signal, centered at a g-value of 8.8, for the manganese(III) monomer, characteristic of the d⁴, S = 2 system. The color of an aqueous solution containing the complex is dependent upon the pH of the solution; the phthalocyanine complex can exist as a water-bound monomer, a hydroxide-bound monomer, or an oxo-bridged dimer. Addition of coordinating bases such as borate or pyridine changes the speciation behavior by coordinating the manganese center. From the UV-visible spectra, complete speciation diagrams are plotted by global analysis of the pH-dependent UV-visible spectra, and a complete set of pKa values is obtained by fitting the data to a standard pKa model. Electrochemical studies reveal a pH-independent quasi-reversible oxidation event for the monomeric species, which likely involves oxidation of the organic ligand to the radical cation species. Adsorption of the phthalocyanine complex on the carbon working electrode was sometimes observed. The pKa values and electrochemistry data are discussed in the context of the development of mononuclear water-oxidation catalysts. PMID:22585306
Experiment E89-044 on the Quasielastic 3He(e,e'p) Reaction at Jefferson Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Penel-Nottaris, Emilie
The Jefferson Lab Hall A E89-044 experiment has measured the 3He(e,e'p) reaction cross-sections. The extraction of the longitudinal and transverse response functions for the two-body break-up 3He(e,e'p)d reaction in parallel kinematics allows the study of the bound proton electromagnetic properties inside the 3He nucleus and the involved nuclear mechanisms beyond plane wave approximations.
Jeon, Eun-Ki; Jung, Ji-Min; Ryu, So-Ri; Baek, Kitae
2015-10-01
The applicability of an in situ electrokinetic process with a parallel electrode configuration was evaluated to treat an As-, Cu-, and Pb-contaminated paddy rice field in full scale (width, 17 m; length, 12.2 m; depth, 1.6 m). A constant voltage of 100 V was supplied and electrodes were spaced 2 m apart. Most As, Cu, and Pb were bound to Fe oxide, and the major clay minerals in the test site were kaolinite and muscovite. The electrokinetic system removed 48.7, 48.9, and 54.5% of As, Cu, and Pb, respectively, from the soil during 24 weeks. The removal of metals in the first layer (0-0.4 m) was higher than that in the other three layers because it was not influenced by groundwater fluctuation. Fractionation analysis showed that mainly As and Pb bound to amorphous Fe and Al oxides decreased, and energy consumption was 1.2 kWh/m³. The standard deviation of metal concentration in the soil was much higher compared to the hexagonal electrode configuration because of a smaller electrical active area; however, the electrode configuration removed similar amounts of metals compared to the hexagonal system. From these results, it was concluded that the electrokinetic process could be effective at remediating As-, Cu-, and Pb-contaminated paddy rice fields in situ.
Approximate Model Checking of PCTL Involving Unbounded Path Properties
NASA Astrophysics Data System (ADS)
Basu, Samik; Ghosh, Arka P.; He, Ru
We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as
Mixed and Mixture Regression Models for Continuous Bounded Responses Using the Beta Distribution
ERIC Educational Resources Information Center
Verkuilen, Jay; Smithson, Michael
2012-01-01
Doubly bounded continuous data are common in the social and behavioral sciences. Examples include judged probabilities, confidence ratings, derived proportions such as percent time on task, and bounded scale scores. Dependent variables of this kind are often difficult to analyze using normal theory models because their distributions may be quite…
NASA Astrophysics Data System (ADS)
Thelen, Brian J.; Xique, Ismael J.; Burns, Joseph W.; Goley, G. Steven; Nolan, Adam R.; Benson, Jonathan W.
2017-04-01
In Bayesian decision theory, there has been a great amount of research into theoretical frameworks and information-theoretic quantities that can be used to provide lower and upper bounds for the Bayes error. These include well-known bounds such as Chernoff, Bhattacharyya, and J-divergence. Part of the challenge of utilizing these various metrics in practice is (i) whether they are "loose" or "tight" bounds, (ii) how they might be estimated via either parametric or non-parametric methods, and (iii) how accurate the estimates are for limited amounts of data. In general what is desired is a methodology for generating relatively tight lower and upper bounds, and then an approach to estimate these bounds efficiently from data. In this paper, we explore the so-called triangle divergence, which has long been known but was recently made more prominent in research on non-parametric estimation of information metrics. Part of this work is motivated by applications for quantifying fundamental information content in SAR/LIDAR data, and to help in this, we have developed a flexible multivariate modeling framework based on multivariate Gaussian copula models which can be combined with the triangle divergence framework to quantify this information and provide approximate bounds on Bayes error. In this paper we present an overview of the bounds, including those based on triangle divergence, and verify that under a number of multivariate models, the upper and lower bounds derived from triangle divergence are significantly tighter than the other common bounds, and often dramatically so. We also propose some simple but effective means for computing the triangle divergence using Monte Carlo methods, and then discuss estimation of the triangle divergence from empirical data based on Gaussian copula models.
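As a concrete instance of the Monte Carlo computation mentioned in the last sentence, the sketch below estimates the triangle divergence ∫ (p − q)² / (p + q) dx between two bivariate Gaussians by drawing samples from the equal-weight mixture and averaging 2(p − q)²/(p + q)², which has exactly that integral as its expectation. Using known densities (rather than Gaussian copula models fitted to data) and these particular distributions are simplifying assumptions, and the conversion of the divergence into Bayes-error bounds is not reproduced here.

```python
import numpy as np
from scipy.stats import multivariate_normal

p = multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2))
q = multivariate_normal(mean=[1.0, 0.5], cov=[[1.0, 0.3], [0.3, 1.5]])

n = 200_000
x = np.vstack([p.rvs(n // 2, random_state=1),      # samples from the mixture (P + Q) / 2
               q.rvs(n // 2, random_state=2)])
pp, qq = p.pdf(x), q.pdf(x)
estimate = np.mean(2.0 * (pp - qq) ** 2 / (pp + qq) ** 2)   # E_M[2(p-q)^2/(p+q)^2] equals the triangle divergence
print("triangle divergence estimate:", estimate)
```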
Bounded rationality alters the dynamics of paediatric immunization acceptance.
Oraby, Tamer; Bauch, Chris T
2015-06-02
Interactions between disease dynamics and vaccinating behavior have been explored in many coupled behavior-disease models. Cognitive effects such as risk perception, framing, and subjective probabilities of adverse events can be important determinants of the vaccinating behaviour, and represent departures from the pure "rational" decision model that are often described as "bounded rationality". However, the impact of such cognitive effects in the context of paediatric infectious disease vaccines has received relatively little attention. Here, we develop a disease-behavior model that accounts for bounded rationality through prospect theory. We analyze the model and compare its predictions to a reduced model that lacks bounded rationality. We find that, in general, introducing bounded rationality increases the dynamical richness of the model and makes it harder to eliminate a paediatric infectious disease. In contrast, in other cases, a low cost, highly efficacious vaccine can be refused, even when the rational decision model predicts acceptance. Injunctive social norms can prevent vaccine refusal, if vaccine acceptance is sufficiently high in the beginning of the vaccination campaign. Cognitive processes can have major impacts on the predictions of behaviour-disease models, and further study of such processes in the context of vaccination is thus warranted.
Nanohashtag structures based on carbon nanotubes and molecular linkers
NASA Astrophysics Data System (ADS)
Frye, Connor W.; Rybolt, Thomas R.
2018-03-01
Molecular mechanics was used to study the noncovalent interactions between single-walled carbon nanotubes and molecular linkers. Groups of nanotubes have the tendency to form tight, parallel bundles (||||). Molecular linkers were introduced into our models to stabilize nanostructures with carbon nanotubes held in perpendicular orientations. Molecular mechanics makes it possible to estimate the strength of the noncovalent interactions holding these structures together and to calculate the overall binding energy of the structures. A set of linkers was designed and built around a 1,3,5,7-cyclooctatetraene tether with two corannulene-containing pincers that extend in opposite directions from the central cyclooctatetraene portion. Each pincer consists of a pair of "arms." These molecular linkers were modified so that the "hand" portions of each pair of "arms" could close together to grab and hold two carbon nanotubes in a perpendicular arrangement. To illustrate the possibility of more complicated, open, perpendicular CNT structures, our primary goal was to create a model of a nanohashtag (#) CNT conformation that is more stable than any parallel CNT arrangement in which bound linker molecules form clumps of CNTs and linkers in non-hashtag arrangements. This goal was achieved using a molecular linker (C280H96) that binds two perpendicularly oriented CNTs through van der Waals interactions. Hydrogen bonding was then added between linker molecules to augment the stability of the hashtag structure. In the hashtag structure with hydrogen bonding, four (5,5) CNTs of length 4.46 nm (18 rings) and four linkers (C276H92N8O8) stabilized the hashtag so that the average binding energy per pincer was 118 kcal/mol.
Local SAR in parallel transmission pulse design.
Lee, Joonsung; Gebhardt, Matthias; Wald, Lawrence L; Adalsteinsson, Elfar
2012-06-01
The management of local and global power deposition in human subjects (specific absorption rate, SAR) is a fundamental constraint on the application of parallel transmission (pTx) systems. Even though pTx and single-channel systems have to meet the same SAR requirements, the complex behavior of the spatial distribution of local SAR for transmission arrays poses problems that are not encountered in conventional single-channel systems and places additional requirements on pTx radio frequency pulse design. We propose a pTx pulse design method which builds on recent work to capture the spatial distribution of local SAR in numerical tissue models in a compressed parameterization, in order to incorporate local SAR constraints within computation times that accommodate pTx pulse design during an in vivo magnetic resonance imaging scan. Additionally, the algorithm yields a protocol-specific ultimate peak in local SAR, which is shown to bound the achievable peak local SAR for a given excitation profile fidelity. The performance of the approach was demonstrated using a numerical human head model and a 7 Tesla eight-channel transmit array. The method reduced peak local 10 g SAR by 14-66% for slice-selective pTx excitations and 2D selective pTx excitations compared to a pTx pulse design constrained only by global SAR. The primary tradeoff incurred for reducing peak local SAR was an increase in global SAR, up to 34% for the evaluated examples, which is favorable in cases where local SAR constraints dominate the pulse application. Copyright © 2011 Wiley Periodicals, Inc.
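Compressed local-SAR parameterizations of this general kind are often expressed as a small set of Hermitian, positive semi-definite matrices (for example, virtual observation points), so that the worst-case local SAR for a channel weight vector w is the maximum of wᴴQw over the set. The sketch below evaluates that maximum with random stand-in matrices; the matrix count, channel count, and the assumption that the paper's parameterization takes exactly this quadratic-form shape are ours, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_mat = 8, 50
sar_matrices = []
for _ in range(n_mat):
    M = rng.standard_normal((n_ch, n_ch)) + 1j * rng.standard_normal((n_ch, n_ch))
    sar_matrices.append(M.conj().T @ M / n_ch)   # Hermitian PSD stand-in for one compressed SAR matrix

def peak_local_sar(w, matrices):
    """Worst-case quadratic form w^H Q w over the compressed SAR matrices."""
    w = np.asarray(w).reshape(-1, 1)
    return max(float(np.real(w.conj().T @ Q @ w)) for Q in matrices)

w = rng.standard_normal(n_ch) + 1j * rng.standard_normal(n_ch)
print(peak_local_sar(w, sar_matrices))
```

In a pulse-design loop this maximum (or a smooth surrogate of it) becomes a constraint alongside excitation fidelity and global SAR, which is the trade-off the abstract quantifies.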
Planck limits on non-canonical generalizations of large-field inflation models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Nina K.; Kinney, William H., E-mail: ninastei@buffalo.edu, E-mail: whkinney@buffalo.edu
2017-04-01
In this paper, we consider two case examples of Dirac-Born-Infeld (DBI) generalizations of canonical large-field inflation models, characterized by a reduced sound speed, c_S < 1. The reduced speed of sound lowers the tensor-scalar ratio, improving the fit of the models to the data, but increases the equilateral-mode non-Gaussianity, f_NL^equil, which the latest results from the Planck satellite constrain by a new upper bound. We examine constraints on these models in light of the most recent Planck and BICEP/Keck results, and find that they have a greatly decreased window of viability. The upper bound on f_NL^equil corresponds to a lower bound on the sound speed and a corresponding lower bound on the tensor-scalar ratio of r ∼ 0.01, so that near-future Cosmic Microwave Background observations may be capable of ruling out entire classes of DBI inflation models. The result is, however, not universal: infrared-type DBI inflation models, where the speed of sound increases with time, are not subject to the bound.
NASA Astrophysics Data System (ADS)
Valchev, G. S.; Djondjorov, P. A.; Vassilev, V. M.; Dantchev, D. M.
2017-10-01
In the current article we study the behavior of the van der Waals force between a planar substrate and an axisymmetric bilayer lipid membrane undergoing pearling instability, caused by a uniform hydrostatic pressure difference. To do so, the recently suggested "surface integration approach" is used, which can be considered a generalization of the well-known and widely used Derjaguin approximation. The static equilibrium shape after the occurrence of the instability is described in the framework of Helfrich's spontaneous curvature model. Some specific classes of exact analytical solutions to the corresponding shape equation are considered, and the components of the respective position vectors are given in terms of elliptic integrals and Jacobi elliptic functions. The mutual orientation between the interacting objects is chosen such that the axis of revolution of the distorted cylinder is parallel to the plane bounding the substrate. Based on the discussed models and approaches, we make some estimates of the studied force in experimentally realizable systems, thus showing the potential of pearling as a useful technique for reducing adhesion in a variety of industrial processes that use lipid membranes as carriers.
QR images: optimized image embedding in QR codes.
Garateguy, Gonzalo J; Arce, Gonzalo R; Lau, Daniel L; Villarreal, Ofelia P
2014-07-01
This paper introduces the concept of QR images, an automatic method to embed QR codes into color images with bounded probability of detection error. These embeddings are compatible with standard decoding applications and can be applied to any color image with full area coverage. The QR information bits are encoded into the luminance values of the image, taking advantage of the immunity of QR readers against local luminance disturbances. To mitigate the visual distortion of the QR image, the algorithm utilizes halftoning masks for the selection of modified pixels and nonlinear programming techniques to locally optimize luminance levels. A tractable model for the probability of error is developed, and models of the human visual system are considered in the quality metric used to optimize the luminance levels of the QR image. To minimize the processing time, the proposed optimization techniques take into account the mechanics of a common binarization method and are designed to be amenable to parallel implementations. Experimental results show the graceful degradation of the decoding rate and the perceptual quality as a function of the embedding parameters. A visual comparison between the proposed and existing methods is presented.
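A rough illustration of the embedding idea described above is sketched below: mask-selected luminance pixels are pushed toward the QR module values. The function and parameter names are hypothetical, and a fixed blending strength stands in for the locally optimized luminance levels of the actual method.

```python
# Hypothetical sketch: push mask-selected luminance pixels toward the QR module
# values (a fixed blending strength stands in for the locally optimized levels).
import numpy as np

def embed_qr_luminance(image_y, qr_modules, mask, strength=0.25):
    """image_y: host luminance in [0, 1]; qr_modules: 1 = dark module;
    mask: 1 where pixels may be modified (e.g., from a halftoning mask)."""
    target = np.where(qr_modules == 1, 0.0, 1.0)   # dark modules -> low luminance
    out = image_y.copy()
    sel = mask == 1
    # Untouched pixels keep the host image, relying on the QR reader's
    # tolerance to local luminance disturbances.
    out[sel] = (1.0 - strength) * image_y[sel] + strength * target[sel]
    return np.clip(out, 0.0, 1.0)
```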
Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics
NASA Astrophysics Data System (ADS)
Junghans, Christoph; Mniszewski, Susan; Voter, Arthur; Perez, Danny; Eidenbenz, Stephan
2014-03-01
We present an example of a new class of tools that we call application simulators, parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation (PDES). We demonstrate our approach with a TADSim application simulator that models the Temperature Accelerated Dynamics (TAD) method, which is an algorithmically complex member of the Accelerated Molecular Dynamics (AMD) family. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We further extend TADSim to model algorithm extensions to standard TAD, such as speculative spawning of the compute-bound stages of the algorithm, and predict performance improvements without having to implement such a method. Focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights into the TAD algorithm behavior and suggested extensions to the TAD method.
Influence of Parallel Dark Matter Sectors on Big Bang Nucleosynthesis
NASA Astrophysics Data System (ADS)
Challa, Venkata Sai Sreeharsha
Big Bang Nucleosynthesis (BBN) is a phenomenological theory that describes the synthesis of light nuclei during the first few seconds of cosmic time in the primordial universe. The twelve nuclear reactions in the first few seconds of the cosmic history are constrained by factors such as the baryon-to-photon ratio, the number of neutrino families, and present-day element abundances. The belief that the expansion of the universe must be slowed down by gravity was overturned by the recent observation of an accelerated expansion of the universe. The Friedmann equations, which describe the cosmic dynamics, need to be revised to also account for the existence of dark matter, another recent astronomical observation. The effects of multiple parallel universes of dark matter (dark sectors) on the accelerated expansion of the universe are studied. Collectively, these additional effects lead to a new cosmological model. We developed a numerical BBN code to address the effects of such dark sectors on the abundances of all the light elements. We have studied the effect of the degrees of freedom of dark matter in the early universe on the primordial abundances of light elements. The predicted abundances of light elements are compared with observed constraints to obtain bounds on the number of dark sectors, NDM. Comparison of the obtained results with the observations during the BBN epoch shows that the number of dark matter sectors is only loosely constrained, and the dark matter sectors are colder than the ordinary matter sectors. Also, we verified that the existence of parallel dark matter sectors with colder temperatures does not affect the constraints set by observations on the number of neutrino families, Nnu.
Updated RICE Bounds on Ultrahigh Energy Neutrino fluxes and interactions
NASA Astrophysics Data System (ADS)
Hussain, Shahid; McKay, Douglas
2006-04-01
We explore limits on low-scale gravity models set by results from the Radio Ice Cherenkov Experiment's (RICE) ongoing search for cosmic ray neutrinos in the cosmogenic, or GZK, energy range. The bound on M_D, the fundamental scale of gravity, depends upon the cosmogenic flux model, the black hole formation and decay treatments, the inclusion of graviton-mediated elastic neutrino processes, and the number of large extra dimensions, d. We find bounds in the interval 0.9 TeV < M_D < 10 TeV. Values d = 5, 6, and 7, for which laboratory and astrophysical bounds on LSG models are less restrictive, lead to essentially the same limits on M_D.
Explicit formula for the Holevo bound for two-parameter qubit-state estimation problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suzuki, Jun, E-mail: junsuzuki@uec.ac.jp
The main contribution of this paper is to derive an explicit expression for the fundamental precision bound, the Holevo bound, for estimating any two-parameter family of qubit mixed states in terms of quantum versions of Fisher information. The obtained formula depends solely on the symmetric logarithmic derivative (SLD) Fisher information, the right logarithmic derivative (RLD) Fisher information, and a given weight matrix. This result immediately provides necessary and sufficient conditions for the following two important classes of quantum statistical models: those for which the Holevo bound coincides with the SLD Cramér-Rao bound, and those for which it coincides with the RLD Cramér-Rao bound. One of the important results of this paper is that a general model other than these two special cases exhibits an unexpected property: the structure of the Holevo bound changes smoothly when the weight matrix varies. In particular, it always coincides with the RLD Cramér-Rao bound for a certain choice of the weight matrix. Several examples illustrate these findings.
Zonal methods for the parallel execution of range-limited N-body simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, Kevin J.; Dror, Ron O.; Shaw, David E.
2007-01-20
Particle simulations in fields ranging from biochemistry to astrophysics require the evaluation of interactions between all pairs of particles separated by less than some fixed interaction radius. The applicability of such simulations is often limited by the time required for calculation, but the use of massive parallelism to accelerate these computations is typically limited by inter-processor communication requirements. Recently, Snir [M. Snir, A note on N-body computations with cutoffs, Theor. Comput. Syst. 37 (2004) 295-318] and Shaw [D.E. Shaw, A fast, scalable method for the parallel evaluation of distance-limited pairwise particle interactions, J. Comput. Chem. 26 (2005) 1318-1328] independently introduced two distinct methods that offer asymptotic reductions in the amount of data transferred between processors. In the present paper, we show that these schemes represent special cases of a more general class of methods, and introduce several new algorithms in this class that offer practical advantages over all previously described methods for a wide range of problem parameters. We also show that several of these algorithms approach an approximate lower bound on inter-processor data transfer.
Empirical study of parallel LRU simulation algorithms
NASA Technical Reports Server (NTRS)
Carr, Eric; Nicol, David M.
1994-01-01
This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. Two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The other two SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third SIMD algorithm presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
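For context, the quantity all of these algorithms compute is the LRU stack distance of each reference. A minimal serial sketch (our own illustration, not one of the five algorithms above) is:

```python
# Minimal serial sketch of LRU stack-distance computation (the baseline the
# parallel algorithms accelerate); a reference hits in a cache of size C
# exactly when its stack distance is <= C, so one pass covers all cache sizes.
def lru_stack_distances(trace):
    stack = []          # most-recently-used item at the front
    distances = []
    for ref in trace:
        if ref in stack:
            d = stack.index(ref) + 1   # stack distance (1 = hit at the top)
            stack.remove(ref)
        else:
            d = float("inf")           # cold miss
        distances.append(d)
        stack.insert(0, ref)
    return distances

print(lru_stack_distances(list("abcabdda")))
```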
NASA Astrophysics Data System (ADS)
Liu, Jiping; Kang, Xiaochen; Dong, Chun; Xu, Shenghua
2017-12-01
Surface area estimation is a widely used tool for resource evaluation in the physical world. When processing large-scale spatial data, input/output (I/O) can easily become the bottleneck in parallelizing the algorithm due to the limited physical memory resources and the very slow disk transfer rate. In this paper, we proposed a stream tiling approach to surface area estimation that first decomposed a spatial data set into tiles with topological expansions. With these tiles, the one-to-one mapping relationship between the input and the computing process was broken. Then, we implemented a streaming framework for scheduling the I/O processes and computing units. Herein, each computing unit encapsulated an identical copy of the estimation algorithm, and multiple asynchronous computing units could work individually in parallel. Finally, the experiments demonstrated that our stream tiling estimation can efficiently alleviate the heavy pressure from the I/O-bound work, and the measured speedups after optimization greatly outperformed those of the directly parallelized versions on shared-memory systems with multi-core processors.
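A minimal sketch of the tile-and-stream idea follows, assuming hypothetical read_tile and estimate_tile_area helpers; it only illustrates how lazily streamed tiles can keep several identical computing units busy, not the actual framework.

```python
# Toy sketch: lazily streamed tiles feed a pool of identical per-tile estimators,
# so reading and computing overlap; read_tile/estimate_tile_area are stand-ins.
from multiprocessing import Pool

def read_tile(tile_id):
    # stand-in for loading one tile (plus its topological expansion) from disk
    return {"id": tile_id, "cells": 1000}

def estimate_tile_area(tile):
    # stand-in for the per-tile surface-area estimator
    return 1.0 * tile["cells"]

def streamed_surface_area(tile_ids, workers=4):
    total = 0.0
    with Pool(workers) as pool:
        tiles = (read_tile(t) for t in tile_ids)          # lazy I/O stream
        for area in pool.imap_unordered(estimate_tile_area, tiles):
            total += area                                  # reduce as results arrive
    return total

if __name__ == "__main__":
    print(streamed_surface_area(range(32)))
```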
Influence of the extrinsic curvature on two-dimensional nematic films.
Napoli, Gaetano; Vergori, Luigi
2018-05-01
Nematic films are thin fluid structures, ideally two dimensional, endowed with an in-plane degenerate nematic order. In this paper we examine a generalization of the classical Plateau problem to an axisymmetric nematic film bounded by two coaxial parallel rings. At equilibrium, the shape of the nematic film results from the competition between surface tension, which favors the minimization of the area, and the nematic elasticity, which instead promotes the alignment of the molecules along a common direction. We find two classes of equilibrium solutions in which the molecules are uniformly aligned along the meridians or parallels. Depending on two dimensionless parameters, one related to the geometry of the film and the other to the constitutive moduli, the Gaussian curvature of the equilibrium shape may be everywhere negative, vanishing, or positive. The stability of these equilibrium configurations is investigated.
Efficient parallel algorithms for string editing and related problems
NASA Technical Reports Server (NTRS)
Apostolico, Alberto; Atallah, Mikhail J.; Larmore, Lawrence; Mcfaddin, H. S.
1988-01-01
The string editing problem for input strings x and y consists of transforming x into y by performing a series of weighted edit operations on x of overall minimum cost. An edit operation on x can be the deletion of a symbol from x, the insertion of a symbol in x, or the substitution of a symbol of x with another symbol. This problem has a well-known O(|x| |y|) time sequential solution (25). Efficient Parallel Random Access Machine (PRAM) algorithms for the string editing problem are given. If m = min(|x|, |y|) and n = max(|x|, |y|), then the CREW bound is O(log m log n) time with O(mn/log m) processors. In all algorithms, space is O(mn).
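For reference, the well-known O(|x| |y|) sequential dynamic program that these PRAM algorithms parallelize can be written as follows (unit edit costs are used here for illustration; the problem statement allows arbitrary weights).

```python
# Sequential edit-distance dynamic program with unit costs (illustrative only).
def edit_distance(x, y):
    m, n = len(x), len(y)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        D[i][0] = i                     # delete all of x[:i]
    for j in range(n + 1):
        D[0][j] = j                     # insert all of y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if x[i - 1] == y[j - 1] else 1
            D[i][j] = min(D[i - 1][j] + 1,        # deletion
                          D[i][j - 1] + 1,        # insertion
                          D[i - 1][j - 1] + sub)  # substitution / match
    return D[m][n]

print(edit_distance("parallel", "partial"))  # 4
```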
Hybrid Optimization Parallel Search PACKage
DOE Office of Scientific and Technical Information (OSTI.GOV)
2009-11-10
HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
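As a hedged illustration of the kind of derivative-free, parallel-evaluation loop a generating-set-search solver performs, the sketch below evaluates coordinate-direction trial points in parallel and contracts the step on failure; it is a toy under made-up names, not the HOPSPACK implementation.

```python
# Toy generating-set-search loop with parallel trial-point evaluation and
# bound clipping; this is an illustration, not the HOPSPACK GSS solver.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def objective(x):
    x = np.asarray(x, dtype=float)
    return float(np.sum((x - 1.0) ** 2))          # toy objective

def gss_minimize(x0, lower, upper, step=0.5, tol=1e-4, max_iter=200):
    x = np.clip(np.asarray(x0, dtype=float), lower, upper)
    fx = objective(x)
    n = x.size
    with ProcessPoolExecutor() as pool:
        for _ in range(max_iter):
            if step < tol:
                break
            # trial points along +/- coordinate directions, kept feasible
            trials = [np.clip(x + s * step * e, lower, upper)
                      for e in np.eye(n) for s in (1.0, -1.0)]
            values = list(pool.map(objective, trials))    # parallel evaluations
            best = int(np.argmin(values))
            if values[best] < fx:                         # success: move
                x, fx = trials[best], values[best]
            else:                                         # failure: contract step
                step *= 0.5
    return x, fx

if __name__ == "__main__":
    print(gss_minimize([0.0, 0.0, 0.0], lower=[-2.0] * 3, upper=[2.0] * 3))
```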
Influence of the extrinsic curvature on two-dimensional nematic films
NASA Astrophysics Data System (ADS)
Napoli, Gaetano; Vergori, Luigi
2018-05-01
Nematic films are thin fluid structures, ideally two dimensional, endowed with an in-plane degenerate nematic order. In this paper we examine a generalization of the classical Plateau problem to an axisymmetric nematic film bounded by two coaxial parallel rings. At equilibrium, the shape of the nematic film results from the competition between surface tension, which favors the minimization of the area, and the nematic elasticity, which instead promotes the alignment of the molecules along a common direction. We find two classes of equilibrium solutions in which the molecules are uniformly aligned along the meridians or parallels. Depending on two dimensionless parameters, one related to the geometry of the film and the other to the constitutive moduli, the Gaussian curvature of the equilibrium shape may be everywhere negative, vanishing, or positive. The stability of these equilibrium configurations is investigated.
Scott, Andrea Michalkova; Burns, Elizabeth A; Hill, Frances C
2014-08-01
The adsorption of nitrogen-containing compounds (NCCs) including 2,4,6-trinitrotoluene (TNT), 2,4-dinitrotoluene (DNT), 2,4-dinitroanisole (DNAN), and 3-nitro-1,2,4-triazol-5-one (NTO) on kaolinite surfaces was investigated. The M06-2X and M06-2X-D3 density functionals were applied with the cluster approximation. Several different positions of NCCs relative to the adsorption sites of kaolinite were examined, including NCCs in perpendicular and parallel orientation toward both surface models of kaolinite. The binding between the target molecules and kaolinite surfaces was analyzed and bond energies were calculated applying the atoms in molecules (AIM) method. All NCCs were found to prefer a parallel orientation toward both kaolinite surfaces, and were bound more strongly to the octahedral than to the tetrahedral site. TNT exhibited the strongest interaction with the octahedral surface and DNAN with the tetrahedral surface of kaolinite. Hydrogen bonding was shown to be the dominant non-covalent interaction for NCCs interacting with the octahedral surface of kaolinite, with a small stabilizing effect of dispersion interactions. In the case of adsorption on the tetrahedral surface, kaolinite-NCC binding was shown to be governed by the balance between hydrogen bonds and dispersion forces. The presence of water as a solvent leads to a significant decrease in the adsorption strength for all studied NCCs interacting with both kaolinite surfaces.
LHC phenomenology of SO(10) models with Yukawa unification
NASA Astrophysics Data System (ADS)
Anandakrishnan, Archana; Bryant, B. Charles; Raby, Stuart; Wingerter, Akın
2013-10-01
In this paper we study an SO(10) SUSY GUT with Yukawa unification for the third generation. We perform a global χ2 analysis to obtain GUT boundary conditions consistent with 11 low-energy observables, including the top, bottom and tau masses. We assume a universal mass, m16, for squarks and sleptons and a universal gaugino mass, M1/2. We then analyze the phenomenological consequences for the LHC for 15 benchmark models with fixed m16 = 20 TeV and with varying values of the gluino mass. The goal of the present work is (i) to evaluate the lower bound on the gluino mass in our model coming from the most recent published data of CMS and (ii) to compare this bound with similar bounds obtained by CMS using simplified models. The bottom line is that the bounds coming from the same-sign dilepton analysis are comparable for our model and the simplified model studied assuming B(g̃ → t t̄ χ̃1⁰) = 100%. However, the bounds coming from the purely hadronic analyses for our model are 10%-20% lower than those obtained for the simplified models. This is due to the fact that for our models the branching ratio for the decay g̃ → g χ̃1,2⁰ is significant. Thus there are significantly fewer b-jets. We find a lower bound on the gluino mass in our models of Mg̃ ≳ 1000 GeV. Finally, there is a theoretical upper bound on the gluino mass which increases with the value of m16. For m16 ≤ 30 TeV, the gluino mass satisfies Mg̃ ≤ 2.8 TeV at 90% C.L. Thus, unless we further increase the amount of fine-tuning, we expect gluinos to be discovered at LHC 14.
NASA Astrophysics Data System (ADS)
Graymer, R. W.; Simpson, R. W.
2014-12-01
Graymer and Simpson (2013, AGU Fall Meeting) showed that in a simple 2D multi-fault system (vertical, parallel, strike-slip faults bounding blocks without strong material property contrasts) slip rate on block-bounding faults can be reasonably estimated by the difference between the mean velocity of adjacent blocks if the ratio of the effective locking depth to the distance between the faults is 1/3 or less ("effective" locking depth is a synthetic parameter taking into account actual locking depth, fault creep, and material properties of the fault zone). To check the validity of that observation for a more complex 3D fault system and a realistic distribution of observation stations, we developed a synthetic suite of GPS velocities from a dislocation model, with station location and fault parameters based on the San Francisco Bay region. Initial results show that if the effective locking depth is set at the base of the seismogenic zone (about 12-15 km), about 1/2 the interfault distance, the resulting synthetic velocity observations, when clustered, do a poor job of returning the input fault slip rates. However, if the apparent locking depth is set at 1/2 the distance to the base of the seismogenic zone, or about 1/4 the interfault distance, the synthetic velocity field does a good job of returning the input slip rates except where the fault is in a strong restraining orientation relative to block motion or where block velocity is not well defined (for example west of the northern San Andreas Fault where there are no observations to the west in the ocean). The question remains as to where in the real world a low effective locking depth could usefully model fault behavior. Further tests are planned to define the conditions where average cluster-defined block velocities can be used to reliably estimate slip rates on block-bounding faults. These rates are an important ingredient in earthquake hazard estimation, and another tool to provide them should be useful.
Silicon ribbon growth by a capillary action shaping technique
NASA Technical Reports Server (NTRS)
Schwuttke, G. H.; Ciszek, T. F.; Kran, A.
1976-01-01
The crystal growth method described is a capillary action shaping technique. Meniscus shaping for the desired ribbon geometry occurs at the vertex of a wettable die. As ribbon growth depletes the melt meniscus, capillary action supplies replacement material. A capillary die is so designed that the bounding edges of the die top are not parallel or concentric with the growing ribbon. The new dies allow a higher melt meniscus with concomitant improvements in surface smoothness and freedom from SiC surface particles, which can degrade perfection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oberkampf, William Louis; Tucker, W. Troy; Zhang, Jianzhong
This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
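One of the techniques reviewed, simulating correlated variates under a chosen dependence model, can be sketched with a Gaussian (normal) copula using NumPy and SciPy; the marginals and the target rank correlation below are purely illustrative.

```python
# Illustrative Gaussian-copula sampler: draw two variates with chosen marginals
# and a target Spearman rank correlation (one of several dependence models).
import numpy as np
from scipy import stats

def correlated_variates(n, spearman_rho, marginal_ppfs, seed=0):
    # convert the target Spearman correlation to the normal-copula parameter
    pearson_r = 2.0 * np.sin(np.pi * spearman_rho / 6.0)
    cov = [[1.0, pearson_r], [pearson_r, 1.0]]
    z = np.random.default_rng(seed).multivariate_normal([0.0, 0.0], cov, size=n)
    u = stats.norm.cdf(z)                     # uniforms carrying the dependence
    return np.column_stack([ppf(u[:, k]) for k, ppf in enumerate(marginal_ppfs)])

samples = correlated_variates(10000, 0.7,
                              [stats.lognorm(0.5).ppf, stats.uniform(0, 2).ppf])
print(stats.spearmanr(samples[:, 0], samples[:, 1])[0])   # close to 0.7
```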
Constructions for finite-state codes
NASA Technical Reports Server (NTRS)
Pollara, F.; Mceliece, R. J.; Abdel-Ghaffar, K.
1987-01-01
A class of codes called finite-state (FS) codes is defined and investigated. These codes, which generalize both block and convolutional codes, are defined by their encoders, which are finite-state machines with parallel inputs and outputs. A family of upper bounds on the free distance of a given FS code is derived from known upper bounds on the minimum distance of block codes. A general construction for FS codes is then given, based on the idea of partitioning a given linear block code into cosets of one of its subcodes, and it is shown that in many cases the FS codes constructed in this way have a free distance d_free which is as large as possible. These codes are found without the need for lengthy computer searches, and have potential applications for future deep-space coding systems. The issue of catastrophic error propagation (CEP) for FS codes is also investigated.
Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology
NASA Astrophysics Data System (ADS)
Macioł, Piotr; Michalik, Kazimierz
2016-10-01
Nowadays, multiscale modelling of material behavior is an extensively developed area. An important obstacle to its wide application is its high computational demands. Among other approaches, the parallelization of multiscale computations is a promising solution. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelizing multiscale models based on the Agile Multiscale Methodology framework is discussed. A sequential, FEM-based macroscopic model has been combined with concurrently computed fine-scale models, employing a MatCalc thermodynamic simulator. The main issues investigated in this work are: (i) the speed-up of multiscale models, with special focus on fine-scale computations, and (ii) the decrease in the quality of computations caused by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law equations. The problem of `delay error', arising from the parallel execution of fine-scale sub-models controlled by the sequential macroscopic sub-model, is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and an MPI library are also discussed.
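For a rough sense of the Amdahl-type evaluation mentioned above, a back-of-the-envelope estimate can be coded as follows; the fixed overhead term is our own illustrative addition, not the paper's model.

```python
# Back-of-the-envelope Amdahl-style speedup estimate; the overhead term is an
# illustrative addition standing in for communication/synchronization costs.
def amdahl_speedup(serial_fraction, n_procs, overhead_fraction=0.0):
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_procs + overhead_fraction)

for p in (2, 8, 32, 128):
    print(p, round(amdahl_speedup(0.05, p, overhead_fraction=0.01), 2))
```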
A numerical differentiation library exploiting parallel architectures
NASA Astrophysics Data System (ADS)
Voglis, C.; Hadjidoukas, P. E.; Lagaris, I. E.; Papageorgiou, D. G.
2009-08-01
We present a software library for numerically estimating first and second order partial derivatives of a function by finite differencing. Various truncation schemes are offered, resulting in corresponding formulas that are accurate to order O(h), O(h^2), and O(h^4), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used in the case where bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared memory architectures and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters, as well as modern multi-core systems, and due to the independent character of the derivative computation, the speedup scales almost linearly with the number of available processors/cores. Program summary: Program title: NDL (Numerical Differentiation Library). Catalogue identifier: AEDG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 73 030. No. of bytes in distributed program, including test data, etc.: 630 876. Distribution format: tar.gz. Programming language: ANSI FORTRAN-77, ANSI C, MPI, OPENMP. Computer: Distributed systems (clusters), shared memory systems. Operating system: Linux, Solaris. Has the code been vectorised or parallelized?: Yes. RAM: The library uses O(N) internal storage, N being the dimension of the problem. Classification: 4.9, 4.14, 6.5. Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, etc. The parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems. Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries. Restrictions: The library uses only double precision arithmetic. Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed. Running time: Running time depends on the function's complexity. The test run took 15 ms for the serial distribution, 0.6 s for the OpenMP and 4.2 s for the MPI parallel distribution on 2 processors.
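A hedged sketch of bound-respecting finite differencing in the spirit described above (not the NDL code itself): central differences are used where both neighbouring points are feasible, otherwise a one-sided formula toward the interior is used.

```python
# Bound-respecting finite-difference gradient sketch (illustrative, not NDL):
# central difference when both neighbours are feasible, one-sided otherwise.
import numpy as np

def gradient_fd(f, x, lower, upper, h=1e-6):
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        can_plus  = x[i] + h <= upper[i]
        can_minus = x[i] - h >= lower[i]
        if can_plus and can_minus:
            g[i] = (f(x + e) - f(x - e)) / (2 * h)     # O(h^2) central difference
        elif can_plus:
            g[i] = (f(x + e) - f(x)) / h               # O(h) forward difference
        elif can_minus:
            g[i] = (f(x) - f(x - e)) / h               # O(h) backward difference
    return g

print(gradient_fd(lambda v: float(np.sum(v ** 2)), [0.0, 2.0], [0.0, 0.0], [1.0, 2.0]))
```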
NASA Technical Reports Server (NTRS)
Nash, Stephen G.; Polyak, R.; Sofer, Ariela
1994-01-01
When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.
Baltimore, Barbara G.; Malkin, Richard
1977-01-01
Dark-grown barley (Hordeum vulgare) etioplasts were examined for their content of membrane-bound iron-sulfur centers by electron paramagnetic resonance spectroscopy at 15K. They were found to contain the high potential iron-sulfur center characterized (in the reduced state) by an electron paramagnetic resonance g value of 1.89 (the “Rieske” center) but did not contain any low potential iron-sulfur centers. Per mole of cytochrome f, dark-grown etioplasts and fully developed chloroplasts had the same content of the Rieske center. During greening of etioplasts under continuous light, low potential bound iron-sulfur centers appear. In addition, the photosystem I reaction center, as measured by the photooxidation of P700 at 15K, also became functional; during greening the appearance of a photoreducible low potential iron-sulfur center paralleled the appearance of P700 photoactivity. These findings indicate the close association of the low potential iron-sulfur centers with the photosystem I reaction center; they also support the concept that the development of stable charge separation in the photosystem I reaction center requires, in addition to P700, a low potential iron-sulfur center. PMID:16660048
NASA Astrophysics Data System (ADS)
Bhattacharya, P.; Viesca, R. C.
2017-12-01
In the absence of in situ field-scale observations of quantities such as fault slip, shear stress and pore pressure, observational constraints on models of fault slip have mostly been limited to laboratory and/or remote observations. Recent controlled fluid-injection experiments on well-instrumented faults fill this gap by simultaneously monitoring fault slip and pore pressure evolution in situ [Guglielmi et al., 2015]. Such experiments can reveal interesting fault behavior, e.g., Guglielmi et al. report fluid-activated aseismic slip followed only subsequently by the onset of micro-seismicity. We show that the Guglielmi et al. dataset can be used to constrain the hydro-mechanical model parameters of a fluid-activated expanding shear rupture within a Bayesian framework. We assume that (1) pore pressure diffuses radially outward (from the injection well) within a permeable pathway along the fault bounded by a narrow damage zone about the principal slip surface; (2) the pore-pressure increase activates slip on a pre-stressed planar fault due to a reduction in frictional strength (expressed as a constant friction coefficient times the effective normal stress). Owing to efficient, parallel, numerical solutions to the axisymmetric fluid-diffusion and crack problems (under the imposed history of injection), we are able to jointly fit the observed history of pore pressure and slip using an adaptive Monte Carlo technique. Our hydrological model provides an excellent fit to the pore-pressure data without requiring any statistically significant permeability enhancement due to the onset of slip. Further, for realistic elastic properties of the fault, the crack model fits both the onset of slip and its early-time evolution reasonably well. However, our model requires unrealistic fault properties to fit the marked acceleration of slip observed later in the experiment (coinciding with the triggering of microseismicity). Therefore, besides producing meaningful and internally consistent bounds on in-situ fault properties like permeability, storage coefficient, resolved stresses, friction and the shear modulus, our results also show that fitting the complete observed time history of slip requires alternative model considerations, such as variations in fault mechanical properties or friction coefficient with slip.
Numerical Implementation of the Cohesive Soil Bounding Surface Plasticity Model. Volume I.
1983-02-01
California Univ., Davis, Dept. of Civil Engineering. A study of various numerical means for implementing the bounding surface plasticity model for cohesive soils is presented. A comparison is made of several solution methods.
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry
1998-01-01
This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
Nadkarni, P M; Miller, P L
1991-01-01
A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.
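A minimal master/worker decomposition of such an all-pairs inter-database comparison can be sketched with Python multiprocessing; this is only an illustration (the paper itself compares Hypercube message passing with Linda), and the scoring function is a toy stand-in for a real alignment score.

```python
# Toy master/worker decomposition of an all-pairs database comparison using
# Python multiprocessing; score() stands in for a real sequence-comparison score.
from multiprocessing import Pool
from itertools import product

def score(pair):
    a, b = pair
    return sum(1 for x, y in zip(a, b) if x == y)   # naive positional matches

def compare_databases(db1, db2, workers=4):
    pairs = list(product(db1, db2))
    with Pool(workers) as pool:
        scores = pool.map(score, pairs, chunksize=max(1, len(pairs) // (4 * workers)))
    return dict(zip(pairs, scores))

if __name__ == "__main__":
    print(compare_databases(["ACGTAC", "TTGACA"], ["ACGTTC", "GGGACA", "ACGACA"],
                            workers=2))
```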
NASA Astrophysics Data System (ADS)
Bonaccorso, A.; Charity, R. J.; Kumar, R.; Salvioni, G.
2015-02-01
In this contribution, we will describe neutron and proton removal from 9C and 7Be, two particularly interesting nuclei entering the nucleosynthesis pp-chain [1, 2]. Neutron and proton removal reactions have been used in the past twenty years to probe the single-particle structure of exotic nuclei. The core parallel-momentum distribution can give information on the angular momentum and spin of the nucleon initial state, while the total removal cross section is sensitive to the asymptotic part of the initial wave function and also to the reaction mechanism. Because knockout is a peripheral reaction from which the Asymptotic Normalization Constant (ANC) of the single-particle wave function can be extracted, it has been used as an indirect method to obtain the rate of reactions like 8B(p,γ)9C or 7Be(p,γ)8B. Nucleon removal has recently been applied by the HiRA collaboration [3] to situations in which the remaining "core" is beyond the drip line, such as 8C and 6Be, unbound by one or more protons, and whose excitation-energy spectrum can be obtained by the invariant-mass method. By gating on the ground-state peak, "core" parallel-momentum distributions and total knockout cross sections have been obtained similar to previous studies with well-bound "cores". In addition, for each projectile, knockout to bound final states has also been obtained in several cases. We will report on the theoretical description and comparison to these experimental data for a few cases in which advances in the accuracy of the transfer-to-the-continuum model [4, 5] have been made [6]. These include the use, when available, of "ab initio" overlaps for the initial state [7] and in particular their ANC values [8], as well as the construction of a nucleus-target folding potential for the treatment of the core-target S-matrix [9] using "ab initio" densities for the cores [10] and state-of-the-art n-9Be optical potentials [11]. Preliminary results and open problems will be discussed.
Anatomically constrained neural network models for the categorization of facial expression
NASA Astrophysics Data System (ADS)
McMenamin, Brenton W.; Assadi, Amir H.
2004-12-01
The ability to recognize facial expression in humans is performed with the amygdala which uses parallel processing streams to identify the expressions quickly and accurately. Additionally, it is possible that a feedback mechanism may play a role in this process as well. Implementing a model with similar parallel structure and feedback mechanisms could be used to improve current facial recognition algorithms for which varied expressions are a source for error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with parallel architecture. However the use of parallel processing streams significantly improved accuracy over a similar network that did not have parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.
Anatomically constrained neural network models for the categorization of facial expression
NASA Astrophysics Data System (ADS)
McMenamin, Brenton W.; Assadi, Amir H.
2005-01-01
The ability to recognize facial expression in humans is performed with the amygdala which uses parallel processing streams to identify the expressions quickly and accurately. Additionally, it is possible that a feedback mechanism may play a role in this process as well. Implementing a model with similar parallel structure and feedback mechanisms could be used to improve current facial recognition algorithms for which varied expressions are a source for error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with parallel architecture. However the use of parallel processing streams significantly improved accuracy over a similar network that did not have parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.
Eavesdropping on spin waves inside the domain-wall nanochannel via three-magnon processes
NASA Astrophysics Data System (ADS)
Zhang, Beining; Wang, Zhenyu; Cao, Yunshan; Yan, Peng; Wang, X. R.
2018-03-01
One recent breakthrough in the field of magnonics is the experimental realization of reconfigurable spin-wave nanochannels formed by a magnetic domain wall with a width of 10-100 nm [Wagner et al., Nat. Nano. 11, 432 (2016), 10.1038/nnano.2015.339]. This remarkable progress enables energy-efficient spin-wave propagation with a well-defined wave vector along the propagation path inside the wall. In the mentioned experiment, microfocus Brillouin light scattering spectroscopy was used in a line-scan manner to measure the frequency of the bound spin wave. Due to their localized nature, the confined spin waves can hardly be detected from outside the wall channel, which guarantees information security to some extent. In this work, we theoretically propose a scheme to detect/eavesdrop on the spin waves inside the domain-wall nanochannel via nonlinear three-magnon processes. We send a spin wave (ω_i, k_i) in one magnetic domain to interact with the bound mode (ω_b, k_b) in the wall, where k_b is parallel to the domain-wall channel, defined as the ẑ axis. Two kinds of three-magnon processes, i.e., confluence and splitting, are expected to occur. The confluence process is conventional: conservation of energy and of momentum parallel to the wall indicates a transmitted wave in the opposite domain with ω(k) = ω_i + ω_b and (k_i + k_b - k)·ẑ = 0, while the momentum perpendicular to the domain wall need not be conserved due to the nonuniform internal field near the wall. We predict a stimulated three-magnon splitting (or "magnon laser") effect: the presence of a bound magnon propagating along the domain-wall channel assists the splitting of the incident wave into two modes, one with ω_1 = ω_b, k_1 = k_b, identical to the bound mode in the channel, and the other with ω_2 = ω_i - ω_b and (k_i - k_b - k_2)·ẑ = 0, propagating in the opposite magnetic domain. Micromagnetic simulations confirm our theoretical analysis. These results demonstrate that one is able to uniquely infer the spectrum of the spin wave in the domain-wall nanochannel once both the injected and the transmitted waves are known.
Stationary and oscillatory bound states of dissipative solitons created by third-order dispersion
NASA Astrophysics Data System (ADS)
Sakaguchi, Hidetsugu; Skryabin, Dmitry V.; Malomed, Boris A.
2018-06-01
We consider the model of fiber-laser cavities near the zero-dispersion point, based on the complex Ginzburg-Landau equation with the cubic-quintic nonlinearity, including the third-order dispersion (TOD) term. It is well known that this model supports stable dissipative solitons. We demonstrate that the same model gives rise to several families of robust bound states of the solitons, which exist only in the presence of the TOD. There are both stationary and dynamical bound states, with oscillating separation between the bound solitons. Stationary states are multistable, corresponding to different values of the separation. With the increase of the TOD coefficient, the bound state with the smallest separation gives rise to the oscillatory state through a Hopf bifurcation. Further growth of the TOD leads to a bifurcation transforming the oscillatory limit cycle into a strange attractor, which represents a chaotically oscillating dynamical bound state. Families of multistable three- and four-soliton complexes are found too, the ones with the smallest separation between the solitons again ending with a transition to oscillatory states through the Hopf bifurcation.
NASA Astrophysics Data System (ADS)
Tolson, B.; Matott, L. S.; Gaffoor, T. A.; Asadzadeh, M.; Shafii, M.; Pomorski, P.; Xu, X.; Jahanpour, M.; Razavi, S.; Haghnegahdar, A.; Craig, J. R.
2015-12-01
We introduce asynchronous parallel implementations of the Dynamically Dimensioned Search (DDS) family of algorithms, including DDS, discrete DDS, PA-DDS and DDS-AU. These parallel algorithms are unusual among parallel optimization algorithms in the water resources field in that parallel DDS is asynchronous and does not require an entire population (set of candidate solutions) to be evaluated before generating and then sending a new candidate solution for evaluation. One key advance in this study is developing the first parallel PA-DDS multi-objective optimization algorithm. The other key advance is enhancing the computational efficiency of solving optimization problems (such as model calibration) by combining a parallel optimization algorithm with the deterministic model pre-emption concept. These two efficiency techniques can only be combined because of the asynchronous nature of parallel DDS. Model pre-emption terminates simulation model runs early, for example prior to completely simulating the model calibration period, when intermediate results indicate the candidate solution is so poor that it will definitely have no influence on the generation of further candidate solutions. The computational savings of deterministic model pre-emption available in serial implementations of population-based algorithms (e.g., PSO) disappear in synchronous parallel implementations of these algorithms. In addition to the key advances above, we implement the algorithms across a range of computation platforms (Windows and Unix-based operating systems, from multi-core desktops to a supercomputer system) and package them for future modellers within a model-independent calibration software package called Ostrich, as well as in MATLAB versions. Results across multiple platforms and multiple case studies (from 4 to 64 processors) demonstrate the vast improvement over serial DDS-based algorithms and highlight the important role model pre-emption plays in the performance of parallel, pre-emptable DDS algorithms. Case studies include single- and multiple-objective optimization problems in water resources model calibration, and in many cases linear or near-linear speedups are observed.
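A compact sketch of an asynchronous DDS loop is given below; the neighbourhood rule follows the usual DDS description, while the function names and toy objective are our own, and model pre-emption is omitted. It is a sketch under stated assumptions, not the Ostrich or MATLAB implementation.

```python
# Hypothetical asynchronous DDS loop: new candidates are generated from the
# current best as soon as any worker frees up (no generation barrier).
from concurrent.futures import ProcessPoolExecutor, FIRST_COMPLETED, wait
import numpy as np

def objective(x):
    return float(np.sum(np.asarray(x) ** 2))   # toy stand-in for a model run

def perturb(x_best, lower, upper, prob, rng, r=0.2):
    # DDS neighbourhood: each variable is perturbed with probability `prob`
    x = x_best.copy()
    mask = rng.random(x.size) < prob
    if not mask.any():
        mask[rng.integers(x.size)] = True
    x[mask] += rng.normal(0.0, r * (upper - lower))[mask]
    return np.clip(x, lower, upper)

def async_dds(lower, upper, budget=100, workers=4, seed=1):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x_best = lower + rng.random(lower.size) * (upper - lower)
    f_best, used = objective(x_best), 1
    with ProcessPoolExecutor(workers) as pool:
        pending = {}
        while used < budget or pending:
            # keep every worker busy with a candidate built from the current best
            while used < budget and len(pending) < workers:
                prob = 1.0 - np.log(used) / np.log(budget) if used > 1 else 1.0
                cand = perturb(x_best, lower, upper, prob, rng)
                pending[pool.submit(objective, cand)] = cand
                used += 1
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                cand, val = pending.pop(fut), fut.result()
                if val < f_best:                 # greedy acceptance, as in DDS
                    x_best, f_best = cand, val
    return x_best, f_best

if __name__ == "__main__":
    print(async_dds(lower=[-5.0] * 4, upper=[5.0] * 4, budget=80, workers=4))
```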
Serial vs. parallel models of attention in visual search: accounting for benchmark RT-distributions.
Moran, Rani; Zehetleitner, Michael; Liesefeld, Heinrich René; Müller, Hermann J; Usher, Marius
2016-10-01
Visual search is central to the investigation of selective visual attention. Classical theories propose that items are identified by serially deploying focal attention to their locations. While this accounts for set-size effects over a continuum of task difficulties, it has been suggested that parallel models can account for such effects equally well. We compared the serial Competitive Guided Search model with a parallel model in their ability to account for RT distributions and error rates from a large visual search data-set featuring three classical search tasks: 1) a spatial configuration search (2 vs. 5); 2) a feature-conjunction search; and 3) a unique feature search (Wolfe, Palmer & Horowitz Vision Research, 50(14), 1304-1311, 2010). In the parallel model, each item is represented by a diffusion to two boundaries (target-present/absent); the search corresponds to a parallel race between these diffusors. The parallel model was highly flexible in that it allowed both for a parametric range of capacity-limitation and for set-size adjustments of identification boundaries. Furthermore, a quit unit allowed for a continuum of search-quitting policies when the target is not found, with "single-item inspection" and exhaustive searches comprising its extremes. The serial model was found to be superior to the parallel model, even before penalizing the parallel model for its increased complexity. We discuss the implications of the results and the need for future studies to resolve the debate.
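A toy simulation of the parallel-race idea is sketched below, with each item modelled as a diffusion to two boundaries and an exhaustive quitting rule assumed; all parameter values and names are illustrative, not fitted values from the study.

```python
# Toy parallel-race simulation: each item is a diffusion to two boundaries;
# an exhaustive quitting rule is assumed and all parameter values are made up.
import numpy as np

def parallel_race_trial(n_items, target_present, rng, dt=0.001,
                        drift_target=2.0, drift_dist=-1.5, noise=1.0, thresh=1.0):
    drifts = np.full(n_items, drift_dist)
    if target_present:
        drifts[0] = drift_target
    x = np.zeros(n_items)
    alive = np.ones(n_items, dtype=bool)
    t = 0.0
    while alive.any() and t < 10.0:              # time-out guarantees termination
        t += dt
        x[alive] += drifts[alive] * dt + noise * np.sqrt(dt) * rng.standard_normal(int(alive.sum()))
        if (x[alive] >= thresh).any():           # an item hit the "target" boundary
            return t, "present"
        alive &= x > -thresh                     # items hitting the lower boundary are rejected
    return t, "absent"

rng = np.random.default_rng(0)
rts = [parallel_race_trial(8, True, rng)[0] for _ in range(200)]
print(round(float(np.mean(rts)), 3))
```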
Deciphering the nonlocal entanglement entropy of fracton topological orders
NASA Astrophysics Data System (ADS)
Shi, Bowen; Lu, Yuan-Ming
2018-04-01
The ground states of topological orders condense extended objects and support topological excitations. This nontrivial property leads to nonzero topological entanglement entropy Stopo for conventional topological orders. Fracton topological order is an exotic class of models which is beyond the description of TQFT. With some assumptions about the condensates and the topological excitations, we derive a lower bound of the nonlocal entanglement entropy Snonlocal (a generalization of Stopo). The lower bound applies to Abelian stabilizer models including conventional topological orders as well as type-I and type-II fracton models, and it could be used to distinguish them. For fracton models, the lower bound shows that Snonlocal could obtain geometry-dependent values, and Snonlocal is extensive for certain choices of subsystems, including some choices which always give zero for TQFT. The stability of the lower bound under local perturbations is discussed.
The dynamics of aloof baby Skyrmions
Salmi, Petja; Sutcliffe, Paul
2016-01-25
The aloof baby Skyrme model is a (2+1)-dimensional theory with solitons that are lightly bound. It is a low-dimensional analogue of a similar Skyrme model in (3+1)-dimensions, where the lightly bound solitons have binding energies comparable to nuclei. A previous study of static solitons in the aloof baby Skyrme model revealed that multi-soliton bound states have a cluster structure, with constituents that preserve their individual identities due to the short-range repulsion and long-range attraction between solitons. Furthermore, there are many different local energy minima that are all well-described by a simple binary species particle model. In this paper we present the first results on soliton dynamics in the aloof baby Skyrme model. Numerical field theory simulations reveal that the lightly bound cluster structure results in a variety of exotic soliton scattering events that are novel in comparison to standard Skyrmion scattering. A dynamical version of the binary species point particle model is shown to provide a good qualitative description of the dynamics.
The dynamics of aloof baby Skyrmions
NASA Astrophysics Data System (ADS)
Salmi, Petja; Sutcliffe, Paul
2016-01-01
The aloof baby Skyrme model is a (2+1)-dimensional theory with solitons that are lightly bound. It is a low-dimensional analogue of a similar Skyrme model in (3+1)-dimensions, where the lightly bound solitons have binding energies comparable to nuclei. A previous study of static solitons in the aloof baby Skyrme model revealed that multi-soliton bound states have a cluster structure, with constituents that preserve their individual identities due to the short-range repulsion and long-range attraction between solitons. Furthermore, there are many different local energy minima that are all well-described by a simple binary species particle model. In this paper we present the first results on soliton dynamics in the aloof baby Skyrme model. Numerical field theory simulations reveal that the lightly bound cluster structure results in a variety of exotic soliton scattering events that are novel in comparison to standard Skyrmion scattering. A dynamical version of the binary species point particle model is shown to provide a good qualitative description of the dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salmi, Petja; Sutcliffe, Paul
The aloof baby Skyrme model is a (2+1)-dimensional theory with solitons that are lightly bound. It is a low-dimensional analogue of a similar Skyrme model in (3+1)-dimensions, where the lightly bound solitons have binding energies comparable to nuclei. A previous study of static solitons in the aloof baby Skyrme model revealed that multi-soliton bound states have a cluster structure, with constituents that preserve their individual identities due to the short-range repulsion and long-range attraction between solitons. Furthermore, there are many different local energy minima that are all well-described by a simple binary species particle model. In this paper we present the first results on soliton dynamics in the aloof baby Skyrme model. Numerical field theory simulations reveal that the lightly bound cluster structure results in a variety of exotic soliton scattering events that are novel in comparison to standard Skyrmion scattering. A dynamical version of the binary species point particle model is shown to provide a good qualitative description of the dynamics.
Development of a Wake Vortex Spacing System for Airport Capacity Enhancement and Delay Reduction
NASA Technical Reports Server (NTRS)
Hinton, David A.; O'Connor, Cornelius J.
2000-01-01
The Terminal Area Productivity project has developed the technologies required (weather measurement, wake prediction, and wake measurement) to determine the aircraft spacing needed to prevent wake vortex encounters in various weather conditions. The system performs weather measurements, predicts bounds on wake vortex behavior in those conditions, derives safe wake spacing criteria, and validates the wake predictions with wake vortex measurements. System performance to date indicates that the potential runway arrival rate increase with the Aircraft VOrtex Spacing System (AVOSS), considering common path effects and ATC delivery variance, is 5% to 12%, depending on the ratio of large and heavy aircraft. The concept demonstration system, using early-generation algorithms and minimal optimization, is performing the wake predictions with adequate robustness such that only 4 hard exceedances have been observed in 1235 wake validation cases. This performance demonstrates the feasibility of predicting wake behavior bounds with multiple uncertainties present, including the unknown aircraft weight and speed, weather persistence between the wake prediction and the observations, and the location of the weather sensors several kilometers from the approach location. A concept for the use of the AVOSS system for parallel runway operations has been suggested, and an initial study at JFK International Airport suggests that a simplified AVOSS system can be successfully operated using only a single lidar as both the weather sensor and the wake validation instrument. Such a self-contained AVOSS would be suitable for wake separation close to the airport, as is required for parallel approach concepts such as SOIA.
Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units
USDA-ARS?s Scientific Manuscript database
This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...
Examining Parallelism of Sets of Psychometric Measures Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Patelis, Thanos; Marcoulides, George A.
2011-01-01
A latent variable modeling approach that can be used to examine whether several psychometric tests are parallel is discussed. The method consists of sequentially testing the properties of parallel measures via a corresponding relaxation of parameter constraints in a saturated model or an appropriately constructed latent variable model. The…
Multilayer insulation blanket, fabricating apparatus and method
Gonczy, John D.; Niemann, Ralph C.; Boroski, William N.
1992-01-01
An improved multilayer insulation blanket for insulating cryogenic structures operating at very low temperatures is disclosed. An apparatus and method for fabricating the improved blanket are also disclosed. In the improved blanket, each successive layer of insulating material is greater in length and width than the preceding layer so as to accommodate thermal contraction of the layers closest to the cryogenic structure. The fabricating apparatus has a rotatable cylindrical mandrel having an outer surface of fixed radius that is substantially arcuate, preferably convex, in cross-section. The method of fabricating the improved blanket comprises (a) winding a continuous sheet of thermally reflective material around the circumference of the mandrel to form multiple layers, (b) binding the layers along two lines substantially parallel to the edges of the circumference of the mandrel, (c) cutting the layers along a line parallel to the axle of the mandrel, and (d) removing the bound layers from the mandrel.
Method of fabricating a multilayer insulation blanket
Gonczy, John D.; Niemann, Ralph C.; Boroski, William N.
1993-01-01
An improved multilayer insulation blanket for insulating cryogenic structures operating at very low temperatures is disclosed. An apparatus and method for fabricating the improved blanket are also disclosed. In the improved blanket, each successive layer of insulating material is greater in length and width than the preceding layer so as to accommodate thermal contraction of the layers closest to the cryogenic structure. The fabricating apparatus has a rotatable cylindrical mandrel having an outer surface of fixed radius that is substantially arcuate, preferably convex, in cross-section. The method of fabricating the improved blanket comprises (a) winding a continuous sheet of thermally reflective material around the circumference of the mandrel to form multiple layers, (b) binding the layers along two lines substantially parallel to the edges of the circumference of the mandrel, (c) cutting the layers along a line parallel to the axle of the mandrel, and (d) removing the bound layers from the mandrel.
Method of fabricating a multilayer insulation blanket
Gonczy, J.D.; Niemann, R.C.; Boroski, W.N.
1993-07-06
An improved multilayer insulation blanket for insulating cryogenic structures operating at very low temperatures is disclosed. An apparatus and method for fabricating the improved blanket are also disclosed. In the improved blanket, each successive layer of insulating material is greater in length and width than the preceding layer so as to accommodate thermal contraction of the layers closest to the cryogenic structure. The fabricating apparatus has a rotatable cylindrical mandrel having an outer surface of fixed radius that is substantially arcuate, preferably convex, in cross-section. The method of fabricating the improved blanket comprises (a) winding a continuous sheet of thermally reflective material around the circumference of the mandrel to form multiple layers, (b) binding the layers along two lines substantially parallel to the edges of the circumference of the mandrel, (c) cutting the layers along a line parallel to the axle of the mandrel, and (d) removing the bound layers from the mandrel.
Multilayer insulation blanket, fabricating apparatus and method
Gonczy, J.D.; Niemann, R.C.; Boroski, W.N.
1992-09-01
An improved multilayer insulation blanket for insulating cryogenic structures operating at very low temperatures is disclosed. An apparatus and method for fabricating the improved blanket are also disclosed. In the improved blanket, each successive layer of insulating material is greater in length and width than the preceding layer so as to accommodate thermal contraction of the layers closest to the cryogenic structure. The fabricating apparatus has a rotatable cylindrical mandrel having an outer surface of fixed radius that is substantially arcuate, preferably convex, in cross-section. The method of fabricating the improved blanket comprises (a) winding a continuous sheet of thermally reflective material around the circumference of the mandrel to form multiple layers, (b) binding the layers along two lines substantially parallel to the edges of the circumference of the mandrel, (c) cutting the layers along a line parallel to the axle of the mandrel, and (d) removing the bound layers from the mandrel. 7 figs.
QoS support for end users of I/O-intensive applications using shared storage systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Marion Kei; Zhang, Xuechen; Jiang, Song
2011-01-19
I/O-intensive applications are becoming increasingly common on today's high-performance computing systems. While performance of compute-bound applications can be effectively guaranteed with techniques such as space sharing or QoS-aware process scheduling, it remains a challenge to meet QoS requirements for end users of I/O-intensive applications using shared storage systems because it is difficult to differentiate I/O services for different applications with individual quality requirements. Furthermore, it is difficult for end users to accurately specify performance goals to the storage system using I/O-related metrics such as request latency or throughput. As access patterns, request rates, and the system workload change in time, a fixed I/O performance goal, such as bounds on throughput or latency, can be expensive to achieve and may not lead to meaningful performance guarantees such as bounded program execution time. We propose a scheme supporting end users' QoS goals, specified in terms of program execution time, in shared storage environments. We automatically translate the users' performance goals into instantaneous I/O throughput bounds using a machine learning technique, and use dynamically determined service time windows to efficiently meet the throughput bounds. We have implemented this scheme in the PVFS2 parallel file system and have conducted an extensive evaluation. Our results show that this scheme can satisfy realistic end-user QoS requirements by making highly efficient use of the I/O resources. The scheme seeks to balance programs' attainment of QoS requirements, and saves as much of the remaining I/O capacity as possible for best-effort programs.
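The translation step described above can be illustrated with a minimal sketch, assuming a hypothetical history of (allotted throughput, observed runtime) samples and a simple fitted relation that is inverted to yield a throughput bound for a target execution time; the data, function names, and the linear model are assumptions for illustration, not the authors' machine-learning implementation.

import numpy as np

# Hypothetical history of (allotted throughput in MB/s, observed runtime in s) pairs.
throughput = np.array([20.0, 40.0, 80.0, 160.0])
runtime = np.array([310.0, 170.0, 95.0, 60.0])

# Runtime is roughly inversely proportional to throughput for I/O-bound phases,
# so fit runtime against 1/throughput with a constant (compute-bound) offset.
a, b = np.polyfit(1.0 / throughput, runtime, 1)   # runtime ~= a/throughput + b

def throughput_bound(target_runtime):
    """Minimum throughput needed to meet a runtime goal under the fitted model."""
    if target_runtime <= b:
        raise ValueError("goal below the compute-bound floor of the fitted model")
    return a / (target_runtime - b)

print(throughput_bound(120.0))  # MB/s needed to finish within 120 s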
Bailey, D S; Burke, J; Sinclair, R; Mukherjee, B B
1981-01-01
Glycoprotein biosynthesis was studied with mouse L-cells grown in suspension culture. Glucose-deprived cells incorporated [3H]mannose into 'high-mannose' protein-bound oligosaccharides and a few relatively high-molecular-weight lipid-linked oligosaccharides. The latter were retained by DEAE-cellulose and turned over quite slowly during pulse-chase experiments. Increased heterogeneity in size of lipid-linked oligosaccharides developed during prolonged glucose deprivation. Sequential elongation of lipid-linked oligosaccharides was also observed, and conditions that prevented the assembly of the higher lipid-linked oligosaccharides also prevented the formation of the larger protein-bound 'high-mannose' oligosaccharides. In parallel experiments, [3H]mannose was incorporated into a total polyribosome fraction, suggesting that mannose residues were transferred co-translationally to nascent protein. Membrane preparations from these cells catalysed the assembly from UDP-N-acetyl-D-[6-3H]glucosamine and GDP-D-[U-14C]mannose of polyisoprenyl diphosphate derivatives whose oligosaccharide moieties were heterogeneous in size. Elongation of the N-acetyl-D-[6-3H]glucosamine-initiated glycolipids with mannose residues produced several higher lipid-linked oligosaccharides similar to those seen during glucose deprivation in vivo. Glucosylation of these mannose-containing oligosaccharides from UDP-D-[6-3H]glucose was restricted to those of a relatively high molecular weight. Protein-bound saccharides formed in vitro were mainly smaller in size than those assembled on the lipid acceptors. These results support the involvement of lipid-linked saccharides in the synthesis of asparagine-linked glycoproteins, but show both in vivo and in vitro that protein-bound 'high-mannose' oligosaccharide formation can occur independently of higher lipid-linked oligosaccharide synthesis. PMID:7306042
NASA Astrophysics Data System (ADS)
Yang, Bo; Wu, R. R.; Rodgers, M. T.
2015-09-01
(CCG)n•(CGG)n trinucleotide repeats have been found to be associated with fragile X syndrome, the most widespread inherited cause of mental retardation in humans. The (CCG)n•(CGG)n repeats adopt i-motif conformations that are preferentially stabilized by base-pairing interactions of noncanonical proton-bound dimers of cytosine (C+•C). Halogenated cytosine residues are one form of DNA damage that may be important in altering the structure and stability of DNA or DNA-protein interactions and, hence, regulate gene expression. Previously, we investigated the effects of 5-halogenation and 1-methylation of cytosine on the base-pairing energies (BPEs) using threshold collision-induced dissociation (TCID) techniques. In the present study, we extend our work to include proton-bound homo- and heterodimers of cytosine, 1-methyl-5-fluorocytosine, and 1-methyl-5-bromocytosine. All modifications examined here are found to produce a decrease in the BPEs. However, the BPEs of all of the proton-bound dimers examined significantly exceed those of Watson-Crick G•C, neutral C•C base pairs, and various methylated variants such that DNA i-motif conformations should still be preserved in the presence of these modifications. The proton affinities (PAs) of the halogenated cytosines are also obtained from the experimental data by competitive analysis of the primary dissociation pathways that occur in parallel for the proton-bound heterodimers. 5-Halogenation leads to a decrease in the N3 PA of cytosine, whereas 1-methylation leads to an increase in the N3 PA. Thus, the 1-methyl-5-halocytosines exhibit PAs that are intermediate.
Plasma DNA aberrations in systemic lupus erythematosus revealed by genomic and methylomic sequencing
Chan, Rebecca W. Y.; Jiang, Peiyong; Peng, Xianlu; Tam, Lai-Shan; Liao, Gary J. W.; Li, Edmund K. M.; Wong, Priscilla C. H.; Sun, Hao; Chan, K. C. Allen; Chiu, Rossa W. K.; Lo, Y. M. Dennis
2014-01-01
We performed a high-resolution analysis of the biological characteristics of plasma DNA in systemic lupus erythematosus (SLE) patients using massively parallel genomic and methylomic sequencing. A number of plasma DNA abnormalities were found. First, aberrations in measured genomic representations (MGRs) were identified in the plasma DNA of SLE patients. The extent of the aberrations in MGRs correlated with anti-double–stranded DNA (anti-dsDNA) antibody level. Second, the plasma DNA of active SLE patients exhibited skewed molecular size-distribution profiles with a significantly increased proportion of short DNA fragments. The extent of plasma DNA shortening in SLE patients correlated with the SLE disease activity index (SLEDAI) and anti-dsDNA antibody level. Third, the plasma DNA of active SLE patients showed decreased methylation densities. The extent of hypomethylation correlated with SLEDAI and anti-dsDNA antibody level. To explore the impact of anti-dsDNA antibody on plasma DNA in SLE, a column-based protein G capture approach was used to fractionate the IgG-bound and non–IgG-bound DNA in plasma. Compared with healthy individuals, SLE patients had higher concentrations of IgG-bound DNA in plasma. More IgG binding occurs at genomic locations showing increased MGRs. Furthermore, the IgG-bound plasma DNA was shorter in size and more hypomethylated than the non–IgG-bound plasma DNA. These observations have enhanced our understanding of the spectrum of plasma DNA aberrations in SLE and may provide new molecular markers for SLE. Our results also suggest that caution should be exercised when interpreting plasma DNA-based noninvasive prenatal testing and cancer testing conducted for SLE patients. PMID:25427797
Complexity Bounds for Quantum Computation
2007-06-22
Trustees of Boston University, Boston, MA 02215. This project focused on upper and lower bounds for quantum computability using constant...classical computation models, particularly emphasizing new examples of where quantum circuits are more powerful than their classical counterparts. A second
Nadkarni, P. M.; Miller, P. L.
1991-01-01
A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations. PMID:1807632
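Linda's tuple-space primitives are not reproduced here, but the master/worker decomposition used for inter-database comparison can be sketched with a shared work pool; the toy sequences and scoring function below are placeholders, not the original Hypercube or Linda code.

from multiprocessing import Pool

# Placeholder query and database sequences (the real program compared sequence databases).
query = "ACDEFGHIK"
database = ["ACDEFGHIR", "MNPQRSTVW", "ACDQFGHIK"]

def score(pair):
    """Toy similarity score: number of matching positions (stand-in for a real comparison)."""
    q, s = pair
    return sum(a == b for a, b in zip(q, s)), s

if __name__ == "__main__":
    # Each (query, subject) pair plays the role of a task tuple pulled by a worker.
    with Pool(processes=4) as pool:
        results = pool.map(score, [(query, s) for s in database])
    for sc, seq in sorted(results, reverse=True):
        print(sc, seq)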
Error assessment of biogeochemical models by lower bound methods (NOMMA-1.0)
NASA Astrophysics Data System (ADS)
Sauerland, Volkmar; Löptien, Ulrike; Leonhard, Claudine; Oschlies, Andreas; Srivastav, Anand
2018-03-01
Biogeochemical models, capturing the major feedbacks of the pelagic ecosystem of the world ocean, are today often embedded into Earth system models which are increasingly used for decision making regarding climate policies. These models contain poorly constrained parameters (e.g., maximum phytoplankton growth rate), which are typically adjusted until the model shows reasonable behavior. Systematic approaches determine these parameters by minimizing the misfit between the model and observational data. In most common model approaches, however, the underlying functions mimicking the biogeochemical processes are nonlinear and non-convex. Thus, systematic optimization algorithms are likely to get trapped in local minima and might lead to non-optimal results. To judge the quality of an obtained parameter estimate, we propose determining a preferably large lower bound for the global optimum that is relatively easy to obtain and that will help to assess the quality of an optimum, generated by an optimization algorithm. Due to the unavoidable noise component in all observations, such a lower bound is typically larger than zero. We suggest deriving such lower bounds based on typical properties of biogeochemical models (e.g., a limited number of extremes and a bounded time derivative). We illustrate the applicability of the method with two real-world examples. The first example uses real-world observations of the Baltic Sea in a box model setup. The second example considers a three-dimensional coupled ocean circulation model in combination with satellite chlorophyll a.
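The idea of a data-derived lower bound can be illustrated with a minimal sketch: if every admissible model trajectory is assumed to have a bounded time derivative, then the smallest misfit attainable over that entire class is a lower bound on the misfit of any calibrated model. The derivative bound, the synthetic observations, and the use of SciPy's SLSQP solver are assumptions for this sketch, not the NOMMA-1.0 implementation.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 40)
obs = np.sin(0.5 * t) + 0.2 * rng.standard_normal(t.size)   # synthetic noisy observations

L = 0.6            # assumed bound on |dy/dt| for any admissible model trajectory
dt = t[1] - t[0]

def misfit(y):
    return float(np.sum((y - obs) ** 2))

# Constraints |y[i+1] - y[i]| <= L*dt, written as two smooth inequalities per step.
cons = []
for i in range(t.size - 1):
    cons.append({"type": "ineq", "fun": (lambda y, i=i: L * dt - (y[i + 1] - y[i]))})
    cons.append({"type": "ineq", "fun": (lambda y, i=i: L * dt + (y[i + 1] - y[i]))})

res = minimize(misfit, obs.copy(), constraints=cons, method="SLSQP")
print("lower bound on attainable misfit:", res.fun)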
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chrisochoides, N.; Sukup, F.
In this paper we present a parallel implementation of the Bowyer-Watson (BW) algorithm using the task-parallel programming model. The BW algorithm constitutes an ideal mesh refinement strategy for implementing a large class of unstructured mesh generation techniques on both sequential and parallel computers, by preventing the need for global mesh refinement. Its implementation on distributed-memory multicomputers using the traditional data-parallel model has been proven very inefficient due to excessive synchronization needed among processors. In this paper we demonstrate that with the task-parallel model we can tolerate synchronization costs inherent to data-parallel methods by exploring concurrency at the processor level. Our preliminary performance data indicate that the task-parallel approach: (i) is almost four times faster than the existing data-parallel methods, (ii) scales linearly, and (iii) introduces minimum overheads compared to the "best" sequential implementation of the BW algorithm.
Hydrate morphology: Physical properties of sands with patchy hydrate saturation
Dai, S.; Santamarina, J.C.; Waite, William F.; Kneafsey, T.J.
2012-01-01
The physical properties of gas hydrate-bearing sediments depend on the volume fraction and spatial distribution of the hydrate phase. The host sediment grain size and the state of effective stress determine the hydrate morphology in sediments; this information can be used to significantly constrain estimates of the physical properties of hydrate-bearing sediments, including the coarse-grained sands subjected to high effective stress that are of interest as potential energy resources. Reported data and physical analyses suggest hydrate-bearing sands contain a heterogeneous, patchy hydrate distribution, whereby zones with 100% pore-space hydrate saturation are embedded in hydrate-free sand. Accounting for patchy rather than homogeneous hydrate distribution yields more tightly constrained estimates of physical properties in hydrate-bearing sands and captures observed physical-property dependencies on hydrate saturation. For example, numerical modeling results of sands with patchy saturation agree with experimental observation, showing a transition in stiffness starting near the series bound at low hydrate saturations but moving toward the parallel bound at high hydrate saturations. The hydrate-patch size itself impacts the physical properties of hydrate-bearing sediments; for example, at constant hydrate saturation, we find that conductivity (electrical, hydraulic and thermal) increases as the number of hydrate-saturated patches increases. This increase reflects the larger number of conductive flow paths that exist in specimens with many small hydrate-saturated patches in comparison to specimens in which a few large hydrate saturated patches can block flow over a significant cross-section of the specimen.
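The series and parallel bounds mentioned above correspond to the classical Reuss (iso-stress) and Voigt (iso-strain) mixing limits; a minimal worked example for a two-phase patchy mixture is sketched below, with illustrative modulus values that are not measurements from this study.

def voigt_parallel(m_patch, m_host, s):
    """Iso-strain (parallel) upper bound on the effective modulus at patch fraction s."""
    return s * m_patch + (1.0 - s) * m_host

def reuss_series(m_patch, m_host, s):
    """Iso-stress (series) lower bound on the effective modulus at patch fraction s."""
    return 1.0 / (s / m_patch + (1.0 - s) / m_host)

# Illustrative moduli (GPa): stiff hydrate-saturated patches vs. soft hydrate-free sand.
M_PATCH, M_SAND = 8.0, 0.5
for s in (0.1, 0.5, 0.9):
    print(s, reuss_series(M_PATCH, M_SAND, s), voigt_parallel(M_PATCH, M_SAND, s))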
The Nature of Arsenic-Phytochelatin Complexes in Holcus lanatus and Pteris cretica
Raab, Andrea; Feldmann, Jörg; Meharg, Andrew A.
2004-01-01
We have developed a method to extract and separate phytochelatins (PCs)—metal(loid) complexes using parallel metal(loid)-specific (inductively coupled plasma-mass spectrometry) and organic-specific (electrospray ionization-mass spectrometry) detection systems—and use it here to ascertain the nature of arsenic (As)-PC complexes in plant extracts. This study is the first unequivocal report, to our knowledge, of PC complex coordination chemistry in plant extracts for any metal or metalloid ion. The As-tolerant grass Holcus lanatus and the As hyperaccumulator Pteris cretica were used as model plants. In an in vitro experiment using a mixture of reduced glutathione (GS), PC2, and PC3, As preferred the formation of the arsenite [As(III)]-PC3 complex over GS-As(III)-PC2, As(III)-(GS)3, As(III)-PC2, or As(III)-(PC2)2 (GS: glutathione bound to arsenic via sulphur of cysteine). In H. lanatus, the As(III)-PC3 complex was the dominant complex, although reduced glutathione, PC2, and PC3 were found in the extract. P. cretica only synthesizes PC2 and forms dominantly the GS-As(III)-PC2 complex. This is the first evidence, to our knowledge, for the existence of mixed glutathione-PC-metal(loid) complexes in plant tissues or in vitro. In both plant species, As is dominantly in non-bound inorganic forms, with 13% being present in PC complexes for H. lanatus and 1% in P. cretica. PMID:15001701
Modelling parallel programs and multiprocessor architectures with AXE
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Fineman, Charles E.
1991-01-01
AXE, An Experimental Environment for Parallel Systems, was designed to model and simulate parallel systems at the process level. It provides an integrated environment for specifying computation models, multiprocessor architectures, data collection, and performance visualization. AXE is being used at NASA-Ames for developing resource management strategies, parallel problem formulation, multiprocessor architectures, and operating system issues related to the High Performance Computing and Communications Program. AXE's simple, structured user-interface enables the user to model parallel programs and machines precisely and efficiently. Its quick turn-around time keeps the user interested and productive. AXE models multicomputers. The user may easily modify various architectural parameters including the number of sites, connection topologies, and overhead for operating system activities. Parallel computations in AXE are represented as collections of autonomous computing objects known as players. Their use and behavior are described. Performance data of the multiprocessor model can be observed on a color screen. These include CPU and message routing bottlenecks, and the dynamic status of the software.
Quantum vacua of 2d maximally supersymmetric Yang-Mills theory
NASA Astrophysics Data System (ADS)
Koloğlu, Murat
2017-11-01
We analyze the classical and quantum vacua of 2d N = (8,8) supersymmetric Yang-Mills theory with SU(N) and U(N) gauge group, describing the worldvolume interactions of N parallel D1-branes with flat transverse directions R^8. We claim that the IR limit of the SU(N) theory in the superselection sector labeled M (mod N) — identified with the internal dynamics of (M, N)-string bound states of the Type IIB string theory — is described by the symmetric orbifold N = (8,8) sigma model into (R^8)^(D-1)/S_D when D = gcd(M, N) > 1, and by a single massive vacuum when D = 1, generalizing the conjectures of E. Witten and others. The full worldvolume theory of the D1-branes is the U(N) theory with an additional U(1) 2-form gauge field B coming from the string theory Kalb-Ramond field. This U(N) + B theory has generalized field configurations, labeled by the Z-valued generalized electric flux and an independent Z_N-valued 't Hooft flux. We argue that in the quantum mechanical theory, the (M, N)-string sector with M units of electric flux has a Z_N-valued discrete θ angle specified by M (mod N) dual to the 't Hooft flux. Adding the brane center-of-mass degrees of freedom to the SU(N) theory, we claim that the IR limit of the U(N) + B theory in the sector with M bound F-strings is described by the N = (8,8) sigma model into Sym^D(R^8). We provide strong evidence for these claims by computing an N = (8,8) analog of the elliptic genus of the UV gauge theories and of their conjectured IR limit sigma models, and showing they agree. Agreement is established by noting that the elliptic genera are modular-invariant Abelian (multi-periodic and meromorphic) functions, which turns out to be very restrictive.
Spread of entanglement and causality
NASA Astrophysics Data System (ADS)
Casini, Horacio; Liu, Hong; Mezei, Márk
2016-07-01
We investigate causality constraints on the time evolution of entanglement entropy after a global quench in relativistic theories. We first provide a general proof that the so-called tsunami velocity is bounded by the speed of light. We then generalize the free particle streaming model of [1] to general dimensions and to an arbitrary entanglement pattern of the initial state. In more than two spacetime dimensions the spread of entanglement in these models is highly sensitive to the initial entanglement pattern, but we are able to prove an upper bound on the normalized rate of growth of entanglement entropy, and hence the tsunami velocity. The bound is smaller than what one gets for quenches in holographic theories, which highlights the importance of interactions in the spread of entanglement in many-body systems. We propose an interacting model which we believe provides an upper bound on the spread of entanglement for interacting relativistic theories. In two spacetime dimensions with multiple intervals, this model and its variations are able to reproduce intricate results exhibited by holographic theories for a significant part of the parameter space. For higher dimensions, the model bounds the tsunami velocity at the speed of light. Finally, we construct a geometric model for entanglement propagation based on a tensor network construction for global quenches.
NASA Astrophysics Data System (ADS)
Moritz, J.; Faudot, E.; Devaux, S.; Heuraux, S.
2018-01-01
The plasma-wall transition is studied by means of a particle-in-cell (PIC) simulation in the configuration of a magnetic field (B) parallel to the wall, with collisions between charged particles and neutral atoms taken into account. The investigated system consists of a plasma bounded by two absorbing walls separated by 200 electron Debye lengths (λ_d). The strength of the magnetic field is chosen such that the ratio λ_d/r_l, with r_l being the electron Larmor radius, is smaller or larger than unity. Collisions are modelled with a simple operator that randomly reorients the ion or electron velocity, keeping constant the total kinetic energy of both the neutral atom (target) and the incident charged particle. The PIC simulations show that the plasma-wall transition consists of a quasi-neutral region (pre-sheath), from the center of the plasma towards the walls, where the electric potential or electric field profiles are well described by an ambipolar diffusion model, and of a second region in the vicinity of the walls, called the sheath, where the quasi-neutrality breaks down. In this peculiar geometry of B and for a certain range of the mean free path, the sheath is found to be composed of two charged layers: a positive one, close to the walls, and a negative one, towards the plasma and before the neutral pre-sheath. Depending on the amplitude of B, the spatial variation of the electric potential can be non-monotonic and presents a maximum within the sheath region. More generally, the sheath extent as well as the potential drop within the sheath and the pre-sheath is studied with respect to B, the mean free path, and the ion and electron temperatures.
Forbrig, Enrico; Staffa, Jana K; Salewski, Johannes; Mroginski, Maria Andrea; Hildebrandt, Peter; Kozuch, Jacek
2018-02-13
Antimicrobial peptides (AMPs) are the first line of defense after contact of an infectious invader, for example, bacterium or virus, with a host and an integral part of the innate immune system of humans. Their broad spectrum of biological functions ranges from cell membrane disruption over facilitation of chemotaxis to interaction with membrane-bound or intracellular receptors, thus providing novel strategies to overcome bacterial resistances. Especially, the clarification of the mechanisms and dynamics of AMP incorporation into bacterial membranes is of high interest, and different mechanistic models are still under discussion. In this work, we studied the incorporation of the peptaibol alamethicin (ALM) into tethered bilayer lipid membranes on electrodes in combination with surface-enhanced infrared absorption (SEIRA) spectroscopy. This approach allows monitoring the spontaneous and potential-induced ion channel formation of ALM in situ. The complex incorporation kinetics revealed a multistep mechanism that points to peptide-peptide interactions prior to penetrating the membrane and adopting the transmembrane configuration. On the basis of the anisotropy of the backbone amide I and II infrared absorptions determined by density functional theory calculations, we employed a mathematical model to evaluate ALM reorientations monitored by SEIRA spectroscopy. Accordingly, ALM was found to adopt inclination angles of ca. 69°-78° and 21° in its interfacially adsorbed and transmembrane incorporated states, respectively. These orientations can be stabilized efficiently by the dipolar interaction with lipid head groups or by the application of a potential gradient. The presented potential-controlled mechanistic study suggests an N-terminal integration of ALM into membranes as monomers or parallel oligomers to form ion channels composed of parallel-oriented helices, whereas antiparallel oligomers are barred from intrusion.
Curcumin Binding to Beta Amyloid: A Computational Study.
Rao, Praveen P N; Mohamed, Tarek; Teckwani, Karan; Tin, Gary
2015-10-01
Curcumin, a chemical constituent present in the spice turmeric, is known to prevent the aggregation of amyloid peptide implicated in the pathophysiology of Alzheimer's disease. While curcumin is known to bind directly to various amyloid aggregates, no systematic investigations have been carried out to understand its ability to bind to the amyloid aggregates including oligomers and fibrils. In this study, we constructed computational models of (i) an Aβ hexapeptide (16)KLVFFA(21) octamer steric-zipper β-sheet assembly and (ii) a full-length Aβ fibril β-sheet assembly. Curcumin binding in these models was evaluated by molecular docking and molecular dynamics (MD) simulation studies. In both models, curcumin was oriented in a linear extended conformation parallel to the fiber axis and exhibited better stability in the Aβ hexapeptide (16)KLVFFA(21) octamer steric-zipper model (E_binding = -10.05 kcal/mol) compared to the full-length Aβ fibril model (E_binding = -3.47 kcal/mol). Analysis of MD trajectories of curcumin bound to the full-length Aβ fibril shows good stability with minimum Cα-atom RMSD shifts. Interestingly, curcumin binding led to marked fluctuations in the (14)HQKLVFFA(21) region that constitutes the fibril spine, with RMSF values ranging from 1.4 to 3.6 Å. These results show that curcumin binding to Aβ shifts the equilibrium in the aggregation pathway by promoting the formation of non-toxic aggregates. © 2015 John Wiley & Sons A/S.
Advanced compilation techniques in the PARADIGM compiler for distributed-memory multicomputers
NASA Technical Reports Server (NTRS)
Su, Ernesto; Lain, Antonio; Ramaswamy, Shankar; Palermo, Daniel J.; Hodges, Eugene W., IV; Banerjee, Prithviraj
1995-01-01
The PARADIGM compiler project provides an automated means to parallelize programs, written in a serial programming model, for efficient execution on distributed-memory multicomputers. A previous implementation of the compiler based on the PTD representation allowed symbolic array sizes, affine loop bounds and array subscripts, and a variable number of processors, provided that arrays were single- or multi-dimensionally block distributed. The techniques presented here extend the compiler to also accept multidimensional cyclic and block-cyclic distributions within a uniform symbolic framework. These extensions demand more sophisticated symbolic manipulation capabilities. A novel aspect of our approach is to meet this demand by interfacing PARADIGM with a powerful off-the-shelf symbolic package, Mathematica. This paper describes some of the Mathematica routines that perform various transformations, shows how they are invoked and used by the compiler to overcome the new challenges, and presents experimental results for code involving cyclic and block-cyclic arrays as evidence of the feasibility of the approach.
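For reference, the block-cyclic distribution handled by these extensions follows the standard owner/local-index mapping used in HPF- and ScaLAPACK-style layouts; the sketch below is a generic illustration of that mapping, not code from the PARADIGM compiler.

def block_cyclic_owner(i, b, P):
    """Processor owning global index i under a block-cyclic(b) distribution over P processors."""
    return (i // b) % P

def block_cyclic_local(i, b, P):
    """Local index of global index i on its owning processor."""
    return (i // (b * P)) * b + (i % b)

# Example: 16 elements, block size 2, 4 processors.
for i in range(16):
    print(i, block_cyclic_owner(i, 2, 4), block_cyclic_local(i, 2, 4))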
Structure of suicide-inactivated β-hydroxydecanoyl-thioester dehydrase
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwab, J.M.; Ho, C.K.; Li, W.B.
β-Hydroxydecanoyl-thioester dehydrase, the key enzyme in biosynthesis of unsaturated fatty acids under anaerobic conditions, equilibrates thioesters of (R)-3-hydroxydecanoic acid, E-2-decenoic acid, and Z-3-decenoic acid. Dehydrase is irreversibly inactivated by the N-acetylcysteamine thioester of 3-decynoic acid (3-decynoyl-NAC), via dehydrase-catalyzed isomerization to 2,3-decadienoyl-NAC. To probe the relationship between normal catalysis and suicide inactivation, the structure of the inactivated enzyme has been studied. 3-(2-13C)Decynoyl-NAC was synthesized and incubated with dehydrase. 13C NMR showed that attack of 2,3-decadienoyl-NAC by the active-site histidine gives 3-histidinyl-3-decenoyl-NAC, which slowly rearranges to the more stable Δ2 isomer. Model histidine-allene adducts have been made and characterized. Analysis of NMR data shows that the C=C configuration of the decenoyl moiety of the enzyme-bound inactivator is E. The suggestion that the mechanism of dehydrase inactivation parallels its normal mechanism of action is supported by these findings.
Effects of Planetary Gear Ratio on Mean Service Life
NASA Technical Reports Server (NTRS)
Savage, M.; Rubadeux, K. L.; Coe, H. H.
1996-01-01
Planetary gear transmissions are compact, high-power speed reductions which use parallel load paths. The range of possible reduction ratios is bounded from below and above by limits on the relative size of the planet gears. For a single-plane transmission, the planet gear has no size at a ratio of two. As the ratio increases, so does the size of the planets relative to the sizes of the sun and ring. Which ratio is best for a planetary reduction can be resolved by studying a series of optimal designs. In this series, each design is obtained by maximizing the service life for a planetary with a fixed size, gear ratio, input speed, power, and materials. The planetary gear reduction service life is modeled as a function of the two-parameter Weibull distributed service lives of the bearings and gears in the reduction. Planet bearing life strongly influences the optimal reduction lives, which point to an optimal planetary reduction ratio in the neighborhood of four to five.
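How component lives combine into a reduction service life can be illustrated with the standard strict-series Weibull relation (1/L_sys)^e = sum_i (1/L_i)^e, valid when all lives are quoted at the same reliability and share the Weibull slope e; the numbers below are placeholders and the sketch is not the paper's optimization code.

def system_life(component_lives, weibull_slope):
    """Life of a strict-series system whose components have two-parameter Weibull lives
    (all quoted at the same reliability level) and share the Weibull slope e."""
    e = weibull_slope
    return sum(L ** (-e) for L in component_lives) ** (-1.0 / e)

# Illustrative 90%-reliability lives (hours) for sun, planets, ring, and planet bearings.
lives = [12000.0, 9000.0, 20000.0, 6000.0, 6000.0, 6000.0]
print(system_life(lives, weibull_slope=1.5))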
Beall, Gary W.; Sowersby, Drew S.; Roberts, Rachel D.; Robson, Michael H.; Lewis, L. Kevin
2009-01-01
Smectite clays such as montmorillonite form complexes with a variety of biomolecules, including the nucleic acids DNA and RNA. Most previous studies of DNA adsorption onto clay have relied upon spectrophotometric analysis after separation of free nucleic acids from bound complexes by centrifugation. In the current work we demonstrate that such studies produce a consistent error due to (a) incomplete sedimentation of montmorillonite and (b) strong absorbance of the remaining clay at 260 nm. Clay sedimentation efficiency was strongly dependent upon cation concentration (Na+ or Mg2+) and on the level of dispersion of the original suspension. An improved clay:DNA adsorption assay was developed and utilized to assess the impact of metal counterions on binding of single-stranded DNA to montmorillonite. X-ray diffraction demonstrated, for the first time, formation of intercalated structures consistent with orientation of the DNA strands parallel to the clay surface. Observed gallery spacings were found to closely match values calculated utilizing atomistic modeling techniques. PMID:19061334
A hierarchical exact accelerated stochastic simulation algorithm
NASA Astrophysics Data System (ADS)
Orendorff, David; Mjolsness, Eric
2012-12-01
A new algorithm, "HiER-leap" (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled "blocks" and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms.
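The accept/reject step based on propensity bounds can be sketched as follows: a candidate reaction is drawn using an inexpensive upper bound on its propensity and accepted with probability equal to the ratio of the true propensity to that bound, which leaves the sampled distribution exact. This is a generic single-level illustration, not the hierarchical HiER-leap code.

import random

def sample_reaction(upper_bounds, true_propensity):
    """Draw a reaction index exactly, using cheap upper bounds and thinning (rejection)."""
    total_upper = sum(upper_bounds)
    while True:
        # Pick a candidate channel proportionally to its upper bound.
        r = random.uniform(0.0, total_upper)
        acc, idx = 0.0, 0
        for idx, ub in enumerate(upper_bounds):
            acc += ub
            if r <= acc:
                break
        # Accept with probability a_true/a_upper; otherwise reject and retry.
        if random.random() <= true_propensity(idx) / upper_bounds[idx]:
            return idx

# Toy usage: the bounds overestimate the (here, constant) true propensities.
bounds = [2.0, 1.0, 4.0]
print(sample_reaction(bounds, lambda i: 0.8 * bounds[i]))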
Coupling between diffusion and orientation of pentacene molecules on an organic surface.
Rotter, Paul; Lechner, Barbara A J; Morherr, Antonia; Chisnall, David M; Ward, David J; Jardine, Andrew P; Ellis, John; Allison, William; Eckhardt, Bruno; Witte, Gregor
2016-04-01
The realization of efficient organic electronic devices requires the controlled preparation of molecular thin films and heterostructures. As top-down structuring methods such as lithography cannot be applied to van der Waals bound materials, surface diffusion becomes a structure-determining factor that requires microscopic understanding. Scanning probe techniques provide atomic resolution, but are limited to observations of slow movements, and therefore constrained to low temperatures. In contrast, the helium-3 spin-echo (HeSE) technique achieves spatial and time resolution on the nm and ps scale, respectively, thus enabling measurements at elevated temperatures. Here we use HeSE to unveil the intricate motion of pentacene admolecules diffusing on a chemisorbed monolayer of pentacene on Cu(110) that serves as a stable, well-ordered organic model surface. We find that pentacene moves along rails parallel and perpendicular to the surface molecules. The experimental data are explained by admolecule rotation that enables a switching between diffusion directions, which extends our molecular level understanding of diffusion in complex organic systems.
Discrete ordinates solutions of nongray radiative transfer with diffusely reflecting walls
NASA Technical Reports Server (NTRS)
Menart, J. A.; Lee, Haeok S.; Kim, Tae-Kuk
1993-01-01
Nongray gas radiation in a plane-parallel slab bounded by gray, diffusely reflecting walls is studied using the discrete ordinates method. The spectral equation of transfer is averaged over a narrow wavenumber interval preserving the spectral correlation effect. The governing equations are derived by considering the history of multiple reflections between the two reflecting walls. A closure approximation is applied so that only a finite number of reflections have to be explicitly included. The closure solutions express the physics of the problem to a very high degree and show relatively little error. Numerical solutions are obtained by applying a statistical narrow-band model for gas properties and a discrete ordinates code. The net radiative wall heat fluxes and the radiative source distributions are obtained for different temperature profiles. A zeroth-degree formulation, where no wall reflection is handled explicitly, is sufficient to predict the radiative transfer accurately for most cases considered, when compared with increasingly accurate solutions based on explicitly tracing a larger number of wall reflections without any closure approximation applied.
The Lag Model, a Turbulence Model for Wall Bounded Flows Including Separation
NASA Technical Reports Server (NTRS)
Olsen, Michael E.; Coakley, Thomas J.; Kwak, Dochan (Technical Monitor)
2001-01-01
A new class of turbulence model is described for wall-bounded, high Reynolds number flows. A specific turbulence model is demonstrated, with results for favorable and adverse pressure gradient flowfields. Separation predictions are as good as or better than those of either the Spalart-Allmaras or SST models; the new model does not require specification of wall distance and has similar or reduced computational effort compared with these models.
Nearly Supersymmetric Dark Atoms
Behbahani, Siavosh R.; Jankowiak, Martin; Rube, Tomas; ...
2011-01-01
Theories of dark matter that support bound states are an intriguing possibility for the identity of the missing mass of the Universe. This article proposes a class of models of supersymmetric composite dark matter where the interactions with the Standard Model communicate supersymmetry breaking to the dark sector. In these models, supersymmetry breaking can be treated as a perturbation on the spectrum of bound states. Using a general formalism, the spectrum with leading supersymmetry effects is computed without specifying the details of the binding dynamics. The interactions of the composite states with the Standard Model are computed, and several benchmark models are described.
Parallel computing of a climate model on the dawn 1000 by domain decomposition method
NASA Astrophysics Data System (ADS)
Bi, Xunqiang
1997-12-01
In this paper the parallel computing of a grid-point nine-level atmospheric general circulation model on the Dawn 1000 is introduced. The model was developed by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences (CAS). The Dawn 1000 is a MIMD massively parallel computer made by the National Research Center for Intelligent Computer (NCIC), CAS. A two-dimensional domain decomposition method is adopted to perform the parallel computing. The potential ways to increase the speed-up ratio and exploit more resources of future massively parallel supercomputers are also discussed.
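A two-dimensional domain decomposition of a latitude-longitude grid can be sketched as follows: the global grid is split over a px-by-py processor mesh and each task computes the index ranges it owns. The grid sizes and helper names are placeholders for illustration, not the IAP model's code.

def decompose_1d(n, parts, rank):
    """Contiguous index range [start, end) owned by `rank` when n points are split into `parts`."""
    base, rem = divmod(n, parts)
    start = rank * base + min(rank, rem)
    end = start + base + (1 if rank < rem else 0)
    return start, end

def decompose_2d(nlat, nlon, px, py, rank):
    """Map a linear rank onto a px-by-py processor mesh and return its (lat, lon) index ranges."""
    ix, iy = rank % px, rank // px
    return decompose_1d(nlat, px, ix), decompose_1d(nlon, py, iy)

# Example: a 64 x 128 grid over a 4-by-2 processor mesh (8 tasks).
for r in range(8):
    print(r, decompose_2d(64, 128, 4, 2, r))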
On the likelihood of single-peaked preferences.
Lackner, Marie-Louise; Lackner, Martin
2017-01-01
This paper contains an extensive combinatorial analysis of the single-peaked domain restriction and investigates the likelihood that an election is single-peaked. We provide a very general upper bound result for domain restrictions that can be defined by certain forbidden configurations. This upper bound implies that many domain restrictions (including the single-peaked restriction) are very unlikely to appear in a random election chosen according to the Impartial Culture assumption. For single-peaked elections, this upper bound can be refined and complemented by a lower bound that is asymptotically tight. In addition, we provide exact results for elections with few voters or candidates. Moreover, we consider the Pólya urn model and the Mallows model and obtain lower bounds showing that single-peakedness is considerably more likely to appear for certain parameterizations.
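To make the single-peaked restriction concrete, the following sketch (a standard characterization written for illustration, not the paper's counting machinery) checks that a ranking is single-peaked with respect to a given left-to-right axis: for every k, the voter's k most-preferred candidates must occupy a contiguous interval of the axis.

def is_single_peaked(ranking, axis):
    """True if `ranking` (best to worst) is single-peaked on `axis` (left-to-right order)."""
    pos = {cand: i for i, cand in enumerate(axis)}
    chosen = []
    for cand in ranking:
        chosen.append(pos[cand])
        if max(chosen) - min(chosen) != len(chosen) - 1:   # prefix must be a contiguous interval
            return False
    return True

axis = ["a", "b", "c", "d", "e"]
print(is_single_peaked(["c", "b", "d", "a", "e"], axis))  # True: preferences fall off on both sides of peak c
print(is_single_peaked(["c", "a", "b", "d", "e"], axis))  # False: prefix {c, a} is not contiguous on the axis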
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.
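As a minimal concrete example of pairing a sampling method with a computable error bound on an output statistic, the sketch below estimates the mean of an output quantity by Monte Carlo and reports a confidence-interval half-width from the sample variance; it is illustrative only and is not the NASA package described above.

import numpy as np

rng = np.random.default_rng(1)

def output_quantity(xi):
    """Stand-in for a CFD output evaluated at a realization of the uncertain parameter xi."""
    return np.sin(xi) + 0.1 * xi ** 2

samples = output_quantity(rng.normal(0.0, 1.0, size=4096))
mean = samples.mean()
# Central-limit-theorem error bound (approximately 99% confidence) on the estimated mean.
half_width = 2.58 * samples.std(ddof=1) / np.sqrt(samples.size)
print(f"mean = {mean:.4f} +/- {half_width:.4f}")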
Shu, Zhengyu; Lin, Hong; Shi, Shaolei; Mu, Xiangduo; Liu, Yanru; Huang, Jianzhong
2016-05-03
The whole-cell lipase from Burkholderia cepacia has been used as a biocatalyst in organic synthesis. However, there is no report in the literature on the component or the gene sequence of the cell-bound lipase from this species. Qualitative analysis of the cell-bound lipase would help to illuminate the regulation mechanism of gene expression and further improve the yield of the cell-bound lipase by gene engineering. Three predicted cell-bound lipase genes, lipA, lipC21 and lipC24, from Burkholderia sp. ZYB002 were cloned and expressed in E. coli. Both LipA and LipC24 displayed lipase activity. LipC24 was a novel mesophilic enzyme and displayed a preference for medium-chain-length acyl groups (C10-C14). The 3D structural model of LipC24 revealed an open Y-type active site. LipA displayed 96 % amino acid sequence identity with the known extracellular lipase. Inactivation of lipA and lipC24 decreased the total cell-bound lipase activity of Burkholderia sp. ZYB002 by 42 % and 14 %, respectively. The cell-bound lipase activity from Burkholderia sp. ZYB002 originated from a multi-enzyme mixture with LipA as the main component. LipC24 was a novel lipase and displayed enzymatic characteristics and a structural model different from those of LipA. Besides LipA and LipC24, other types of cell-bound lipases (or esterases) should exist.
ERIC Educational Resources Information Center
Fific, Mario; Little, Daniel R.; Nosofsky, Robert M.
2010-01-01
We formalize and provide tests of a set of logical-rule models for predicting perceptual classification response times (RTs) and choice probabilities. The models are developed by synthesizing mental-architecture, random-walk, and decision-bound approaches. According to the models, people make independent decisions about the locations of stimuli…
Sprenger, K G; Pfaendtner, Jim
2016-06-07
Thermodynamic analyses can provide key insights into the origins of protein self-assembly on surfaces, protein function, and protein stability. However, obtaining quantitative measurements of thermodynamic observables from unbiased classical simulations of peptide or protein adsorption is challenging because of sampling limitations brought on by strong biomolecule/surface binding forces as well as time scale limitations. We used the parallel tempering metadynamics in the well-tempered ensemble (PTMetaD-WTE) enhanced sampling method to study the adsorption behavior and thermodynamics of several explicitly solvated model peptide adsorption systems, providing new molecular-level insight into the biomolecule adsorption process. Specifically studied were peptides LKα14 and LKβ15 and trpcage miniprotein adsorbing onto a charged, hydrophilic self-assembled monolayer surface functionalized with a carboxylic acid/carboxylate headgroup and a neutral, hydrophobic methyl-terminated self-assembled monolayer surface. Binding free energies were calculated as a function of temperature for each system and decomposed into their respective energetic and entropic contributions. We investigated how specific interfacial features such as peptide/surface electrostatic interactions and surface-bound ion content affect the thermodynamic landscape of adsorption and lead to differences in surface-bound conformations of the peptides. Results show that upon adsorption to the charged surface, configurational entropy gains of the released solvent molecules dominate the configurational entropy losses of the bound peptide. This behavior leads to an apparent increase in overall system entropy upon binding and therefore to the surprising and seemingly nonphysical result of an apparent increased binding free energy at elevated temperatures. Opposite effects and conclusions are found for the neutral surface. Additional simulations demonstrate that by adjusting the ionic strength of the solution, results that show the expected physical behavior, i.e., peptide binding strength that decreases with increasing temperature or is independent of temperature altogether, can be recovered on the charged surface. On the basis of this analysis, an overall free energy for the entire thermodynamic cycle for peptide adsorption on charged surfaces is constructed and validated with independent simulations.
NASA Astrophysics Data System (ADS)
Heath, B.; Hooft, E. E. E.; Toomey, D. R.; Papazachos, C. V.; Walls, K.; Paulatto, M.; Morgan, J. V.; Nomikou, P.; Warner, M.
2017-12-01
To investigate magmatic-tectonic interactions at an arc volcano, we collected a dense, active-source seismic dataset across the Santorini Volcano, Greece, with 90 ocean bottom seismometers, 65 land seismometers, and 14,300 marine sound sources. We use over 140,000 travel-time picks to obtain a P-wave tomography model of the upper crustal structure of the Santorini volcano and surrounding tectonically extended region. Regionally, the shallow (<2 km) velocity structure is dominated by low- and high-velocity anomalies of several sediment-filled grabens and horsts of Attico-Cycladic metamorphic basement, which correlate well with Bouguer gravity anomalies and preliminary shallow attenuation results (using waveform amplitudes and t* values). We find regional Pliocene and younger faults bounding basement grabens and horsts to be predominantly oriented in a NE-SW direction, with Santorini itself located in a graben bounded by faults striking in this direction. In contrast, volcanic vents and dikes expressed at the surface seem to strike about 20° clockwise relative to these regional faults. In the northern caldera of Santorini, a 4-km-wide region of anomalously low velocities and high attenuation directly overlies an inferred source of 2011-2012 inflation (4-4.5 km depth); however, it is located at shallower depths (1-2 km). The imaged low-velocity anomaly may correspond to hydrothermal activity (due to increased porosity and alteration) and/or brecciation from a prior episode of caldera collapse. It is bounded by anomalously fast velocities (at 1-2 km depth) that parallel the regional fault orientation and are correspondingly rotated 20° relative to surface dikes. At 4-5 km depth beneath the northern caldera basin, low-velocity anomalies and attenuated seismic arrivals provide preliminary evidence for a magma body; the low-velocity anomaly is elongated in the same direction as regional volcanic vents. The difference in strike of volcanic and tectonic features indicates oblique extension and potential time variation in the minimum stress direction.
Dry Juan de Fuca slab revealed by quantification of water entering Cascadia subduction zone
NASA Astrophysics Data System (ADS)
Canales, J. P.; Carbotte, S. M.; Nedimovic, M. R.; Carton, H. D.
2017-12-01
Water is carried by subducting slabs as a pore fluid and in structurally bound minerals, yet no comprehensive quantification of water content and how it is stored and distributed at depth within incoming plates exists for any segment of the global subduction system. Here we use controlled-source seismic data collected in 2012 as part of the Ridge-to-Trench seismic experiment to quantify the amount of pore and structurally bound water in the Juan de Fuca plate entering the Cascadia subduction zone. We use wide-angle OBS seismic data along a 400-km-long margin-parallel profile 10-15 km seaward from the Cascadia deformation front to obtain P-wave tomography models of the sediments, crust, and uppermost mantle, and effective medium theory combined with a stochastic description of crustal properties (e.g., temperature, alteration assemblages, porosity, pore aspect ratio), to analyze the pore fluid and structurally bound water reservoirs in the sediments, crust and lithospheric mantle, and their variations along the Cascadia margin. Our results demonstrate that the Juan de Fuca lower crust and mantle are much drier than at any other subducting plate, with most of the water stored in the sediments and upper crust. Previously documented, variable but limited bend faulting along the margin, which correlates with degree of plate locking, limits slab access to water, and a warm thermal structure resulting from a thick sediment cover and young plate age prevents significant serpentinization of the mantle. Our results have important implications for a number of subduction processes at Cascadia, such as: (1) the dryness of the lower crust and mantle indicates that fluids that facilitate episodic tremor and slip must be sourced from the subducted upper crust; (2) decompression rather than hydrous melting must dominate arc magmatism in northern-central Cascadia; and (3) dry subducted lower crust and mantle can explain the low levels of intermediate-depth seismicity in the Juan de Fuca slab.
NASA Astrophysics Data System (ADS)
Averkin, Sergey N.; Gatsonis, Nikolaos A.
2018-06-01
An unstructured electrostatic Particle-In-Cell (EUPIC) method is developed on arbitrary tetrahedral grids for simulation of plasmas bounded by arbitrary geometries. The electric potential in EUPIC is obtained on cell vertices from a finite volume Multi-Point Flux Approximation of Gauss' law using the indirect dual cell with Dirichlet, Neumann and external circuit boundary conditions. The resulting matrix equation for the nodal potential is solved with a restarted generalized minimal residual method (GMRES) and an ILU(0) preconditioner algorithm, parallelized using a combination of node coloring and level scheduling approaches. The electric field on vertices is obtained using the gradient theorem applied to the indirect dual cell. The algorithms for injection, particle loading, particle motion, and particle tracking are parallelized for unstructured tetrahedral grids. The algorithms for the potential solver, electric field evaluation, loading, and scatter-gather operations are verified using analytic solutions for test cases subject to Laplace and Poisson equations. Grid sensitivity analysis examines the L2 and L∞ norms of the relative error in potential, field, and charge density as a function of edge-averaged and volume-averaged cell size. Analysis shows second order of convergence for the potential and first order of convergence for the electric field and charge density. Temporal sensitivity analysis is performed and the momentum and energy conservation properties of the particle integrators in EUPIC are examined. The effects of cell size and timestep on the heating, slowing-down, and deflection times are quantified. The heating, slowing-down, and deflection times are found to be almost linearly dependent on the number of particles per cell. EUPIC simulations of current collection by cylindrical Langmuir probes in collisionless plasmas show good comparison with previous experimentally validated numerical results. These simulations were also used in a parallelization efficiency investigation. Results show that the EUPIC has an efficiency of more than 80% when the simulation is performed on a single CPU from a non-uniform memory access node, and that the efficiency decreases as the number of threads increases further. The EUPIC is applied to the simulation of the multi-species plasma flow over a geometrically complex CubeSat in Low Earth Orbit. The EUPIC potential and flowfield distribution around the CubeSat exhibit features that are consistent with previous simulations over simpler geometrical bodies.
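The potential solve described above (restarted GMRES with an ILU(0)-type preconditioner) can be reproduced in miniature with SciPy's sparse solvers, as sketched below on a small Poisson-like system; the matrix, sizes, and options are placeholders, not the EUPIC implementation.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# Small 1-D Poisson-like system standing in for the nodal-potential matrix.
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization used as a preconditioner (an ILU(0)-style approximation;
# fill_factor=1 roughly preserves the original sparsity pattern).
ilu = spilu(A, fill_factor=1, drop_tol=0.0)
M = LinearOperator(A.shape, ilu.solve)

x, info = gmres(A, b, M=M, restart=30)   # restarted GMRES with the ILU preconditioner
print("converged" if info == 0 else f"gmres info = {info}", np.linalg.norm(A @ x - b))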
Pebay, Philippe; Terriberry, Timothy B.; Kolla, Hemanth; ...
2016-03-29
Formulas for incremental or parallel computation of second-order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Such formulas are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results and improve them with arbitrary-order, numerically stable one-pass formulas, which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four, as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate among the above-mentioned formulas, with the utilization of the compound moments, for a practical large-scale scientific application.
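The second-order case of the pairwise update formulas surveyed above can be stated compactly: when two partitions with counts n_a and n_b, means m_a and m_b, and centered sums of squares M2_a and M2_b are merged, the combined statistics follow from delta = m_b - m_a as n = n_a + n_b, mean = m_a + delta*n_b/n, and M2 = M2_a + M2_b + delta^2*n_a*n_b/n. The sketch below implements that standard rule; the higher-order, weighted, and compound variants of the paper are not reproduced.

def combine(na, ma, M2a, nb, mb, M2b):
    """Merge (count, mean, centered sum of squares) of two partitions in one pass."""
    n = na + nb
    delta = mb - ma
    mean = ma + delta * nb / n
    M2 = M2a + M2b + delta * delta * na * nb / n
    return n, mean, M2

def chunk_stats(xs):
    """One-pass (Welford) statistics for a single chunk of data."""
    n, mean, M2 = 0, 0.0, 0.0
    for x in xs:
        n += 1
        d = x - mean
        mean += d / n
        M2 += d * (x - mean)
    return n, mean, M2

# Usage: compute partial results for two chunks, then merge them.
a, b = [1.0, 2.0, 3.0], [10.0, 20.0]
n, mean, M2 = combine(*chunk_stats(a), *chunk_stats(b))
print(n, mean, M2 / (n - 1))   # count, mean, sample variance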
Thermal Destruction Of CB Contaminants Bound On Building ...
Symposium paper. An experimental and theoretical program has been initiated by the U.S. EPA to investigate issues of chemical/biological agent destruction in incineration systems when the agent in question is bound on common porous building interior materials. This program includes 3-dimensional computational fluid dynamics modeling with matrix-bound agent destruction kinetics, bench-scale experiments to determine agent destruction kinetics while bound on various matrices, and pilot-scale experiments to scale up the bench-scale experiments to a more practical scale. Finally, model predictions are made of agent destruction and combustion conditions in two full-scale incineration systems that are typical of modern combustor design.
Conformational phases of membrane bound cytoskeletal filaments
NASA Astrophysics Data System (ADS)
Quint, David A.; Grason, Gregory; Gopinathan, Ajay
2013-03-01
Membrane-bound cytoskeletal filaments found in living cells are employed to carry out many types of activities including cellular division, rigidity, and transport. When these biopolymers are bound to a membrane surface they may take on highly non-trivial conformations as compared to when they are not bound. This leads to the natural question: What are the important interactions which drive these polymers to particular conformations when they are bound to a surface? Assuming that there are binding domains along the polymer which follow a periodic helical structure set by the natural monomeric handedness, these bound conformations must arise from the interplay of the intrinsic monomeric helicity and membrane binding. To probe this question, we study a continuous model of an elastic filament with intrinsic helicity and map out the conformational phases of this filament for various mechanical and structural parameters in our model, such as elastic stiffness and intrinsic twist of the filament. Our model allows us to gain insight into the possible mechanisms which drive real biopolymers such as actin and tubulin in eukaryotes and their prokaryotic cousins MreB and FtsZ to take on their functional conformations within living cells.
Mapping trace element distribution in fossil teeth and bone with LA-ICP-MS
NASA Astrophysics Data System (ADS)
Hinz, E. A.; Kohn, M. J.
2009-12-01
Trace element profiles were measured in fossil bones and teeth from the late Pleistocene (c. 25 ka) Merrell locality, Montana, USA, by using laser-ablation ICP-MS. Laser-ablation ICP-MS can collect element counts along predefined tracks on a sample’s surface using a constant ablation speed allowing for rapid spatial sampling of element distribution. Key elements analyzed included common divalent cations (e.g. Sr, Zn, Ba), a suite of REE (La, Ce, Nd, Sm, Eu, Yb), and U, in addition to Ca for composition normalization and standardization. In teeth, characteristic diffusion penetration distances for all trace elements are at least a factor of 4 greater in traverses parallel to the dentine-enamel interface (parallel to the growth axis of the tooth) than perpendicular to the interface. Multiple parallel traverses in sections parallel and perpendicular to the tooth growth axis were transformed into trace element maps, and illustrate greater uptake of all trace elements along the central axis of dentine compared to areas closer to enamel, or within the enamel itself. Traverses in bone extending from the external surface, through the thickness of cortical bone and several mm into trabecular bone show major differences in trace element uptake compared to teeth: U and Sr are homogeneous, whereas all REE show a kinked profile with high concentrations on outer surfaces that decrease by several orders of magnitude within a few mm inward. The Eu anomaly increases uniformly from the outer edge of bone inward, whereas the Ce anomaly decreases slightly. These observations point to major structural anisotropies in trace element transport and uptake during fossilization, yet transport and uptake of U and REE are not resolvably different. In contrast, transport and uptake of U in bone must proceed orders of magnitude faster than REE as U is homogeneous whereas REE exhibit strong gradients. The kinked REE profiles in bone unequivocally indicate differential transport rates, consistent with a double-medium diffusion model in which microdomains with slow diffusivities are bounded by fast-diffusing pathways.
NASA Technical Reports Server (NTRS)
Elston, W. E.
1984-01-01
Voyager 1 images show 14 volcanic centers wholly or partly within the Kane Patera quadrangle of Io, which are divided into four major classes: (1) shield with parallel flows; (2) shield with early radial fan-shaped flows; (3) shield with radial fan-shaped flows, the surfaces of the flows textured with longitudinal ridges; and (4) depression surrounded by plateau-forming, scarp-bounded, untextured deposits. The interpretation attempted here hinges largely on the ability to distinguish lava flows from pyroclastic flows by remote sensing.
From Genes to Protein Mechanics on a Chip
Milles, Lukas F.; Verdorfer, Tobias; Pippig, Diana A.; Nash, Michael A.; Gaub, Hermann E.
2014-01-01
Single-molecule force spectroscopy enables mechanical testing of individual proteins; however, low experimental throughput limits the ability to screen constructs in parallel. We describe a microfluidic platform for on-chip protein expression and measurement of single-molecule mechanical properties. We constructed microarrays of proteins covalently attached to a chip surface, and found that a single cohesin-modified cantilever that bound to the terminal dockerin-tag of each protein remained stable over thousands of pulling cycles. The ability to synthesize and mechanically probe protein libraries presents new opportunities for high-throughput mechanical phenotyping. PMID:25194847
Structure of a short-chain dehydrogenase/reductase from Bacillus anthracis
Hou, Jing; Wojciechowska, Kamila; Zheng, Heping; Chruszcz, Maksymilian; Cooper, David R.; Cymborowski, Marcin; Skarina, Tatiana; Gordon, Elena; Luo, Haibin; Savchenko, Alexei; Minor, Wladek
2012-01-01
The crystal structure of a short-chain dehydrogenase/reductase from Bacillus anthracis strain ‘Ames Ancestor’ complexed with NADP has been determined and refined to 1.87 Å resolution. The structure of the enzyme consists of a Rossmann fold composed of seven parallel β-strands sandwiched by three α-helices on each side. An NADP molecule from an endogenous source is bound in the conserved binding pocket in the syn conformation. The loop region responsible for binding another substrate forms two perpendicular short helices connected by a sharp turn. PMID:22684058
Ferrucci, Filomena; Salza, Pasquale; Sarro, Federica
2017-06-29
The need to improve the scalability of Genetic Algorithms (GAs) has motivated the research on Parallel Genetic Algorithms (PGAs), and different technologies and approaches have been used. Hadoop MapReduce represents one of the most mature technologies to develop parallel algorithms. Based on the fact that parallel algorithms introduce communication overhead, the aim of the present work is to understand if, and possibly when, parallel GA solutions using Hadoop MapReduce show better performance than sequential versions in terms of execution time. Moreover, we are interested in understanding which PGA model can be most effective among the global, grid, and island models. We empirically assessed the performance of these three parallel models with respect to a sequential GA on a software engineering problem, evaluating the execution time and the achieved speedup. We also analysed the behaviour of the parallel models in relation to the overhead produced by the use of Hadoop MapReduce and the GAs' computational effort, which gives a more machine-independent measure of these algorithms. We exploited three problem instances to differentiate the computation load and three cluster configurations based on 2, 4, and 8 parallel nodes. Moreover, we estimated the costs of the execution of the experimentation on a potential cloud infrastructure, based on the pricing of the major commercial cloud providers. The empirical study revealed that the use of a PGA based on the island model outperforms the other parallel models and the sequential GA for all the considered instances and clusters. Using 2, 4, and 8 nodes, the island model achieves an average speedup over the three datasets of 1.8, 3.4, and 7.0 times, respectively. Hadoop MapReduce has a set of different constraints that need to be considered during the design and the implementation of parallel algorithms. The overhead of data store (i.e., HDFS) accesses, communication, and latency requires solutions that reduce data store operations. For this reason, the island model is more suitable for PGAs than the global and grid models, also in terms of costs when executed on a commercial cloud provider.
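As a quick illustration of what those speedup figures imply (our arithmetic, not from the paper), the corresponding parallel efficiencies are:

```python
# Parallel efficiency implied by the reported island-model speedups
# (efficiency = speedup / number of nodes); illustrative arithmetic only.
nodes = [2, 4, 8]
speedups = [1.8, 3.4, 7.0]   # average speedups quoted in the abstract
for p, s in zip(nodes, speedups):
    print(f"{p} nodes: speedup {s:.1f}, efficiency {s / p:.2f}")
# -> roughly 0.90, 0.85 and 0.88, i.e. the island model scales nearly linearly
```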
Implementing Multidisciplinary and Multi-Zonal Applications Using MPI
NASA Technical Reports Server (NTRS)
Fineberg, Samuel A.
1995-01-01
Multidisciplinary and multi-zonal applications are an important class of applications in the area of Computational Aerosciences. In these codes, two or more distinct parallel programs or copies of a single program are utilized to model a single problem. To support such applications, it is common to use a programming model where a program is divided into several single program multiple data stream (SPMD) applications, each of which solves the equations for a single physical discipline or grid zone. These SPMD applications are then bound together to form a single multidisciplinary or multi-zonal program in which the constituent parts communicate via point-to-point message passing routines. Unfortunately, simple message passing models, like Intel's NX library, only allow point-to-point and global communication within a single system-defined partition. This makes implementation of these applications quite difficult, if not impossible. In this report it is shown that the new Message Passing Interface (MPI) standard is a viable portable library for implementing the message passing portion of multidisciplinary applications. Further, with the extension of a portable loader, fully portable multidisciplinary application programs can be developed. Finally, the performance of MPI is compared to that of some native message passing libraries. This comparison shows that MPI can be implemented to deliver performance commensurate with native message libraries.
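A minimal sketch of the communicator pattern described above, written here with mpi4py rather than the report's setting: MPI_COMM_WORLD is split into two SPMD groups and the two lead ranks exchange interface data point-to-point. Script and variable names are illustrative.

```python
# Hypothetical sketch of two SPMD "disciplines" bound into one MPI job.
# Run with e.g.: mpiexec -n 4 python coupled.py
from mpi4py import MPI

world = MPI.COMM_WORLD
rank, size = world.Get_rank(), world.Get_size()

# Split the world communicator into two halves, one per discipline/zone.
color = 0 if rank < size // 2 else 1
local = world.Split(color, key=rank)

# Each discipline works with its own local communicator ...
local_result = local.allreduce(rank, op=MPI.SUM)

# ... and the two lead ranks couple the disciplines with point-to-point messages.
if local.Get_rank() == 0:
    partner = size // 2 if color == 0 else 0
    remote_result = world.sendrecv(local_result, dest=partner, sendtag=0,
                                   source=partner, recvtag=0)
    print(f"discipline {color}: local={local_result}, remote={remote_result}")
```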
Raman spectroscopy: in vivo quick response code of skin physiological status
NASA Astrophysics Data System (ADS)
Vyumvuhore, Raoul; Tfayli, Ali; Piot, Olivier; Le Guillou, Maud; Guichard, Nathalie; Manfait, Michel; Baillet-Guffroy, Arlette
2014-11-01
Dermatologists need to combine different clinically relevant characteristics for a better understanding of skin health. These characteristics are usually measured by different techniques, and some of them are highly time consuming. Therefore, a predictive model based on Raman spectroscopy and partial least squares (PLS) regression was developed as a rapid multiparametric method. The Raman spectra collected from the five uppermost micrometers of 11 healthy volunteers were fitted to different skin characteristics measured by independent appropriate methods (transepidermal water loss, hydration, pH, relative amount of ceramides, fatty acids, and cholesterol). For each parameter, the obtained PLS model presented correlation coefficients higher than R2=0.9. This model enables us to obtain all the aforementioned parameters directly from the unique Raman signature. In addition, in-depth Raman analyses down to 20 μm showed different balances between partially bound water and unbound water with depth. In parallel, the increase of depth was followed by an unfolding process of the proteins. The combination of all this information led to a multiparametric investigation, which better characterizes the skin status. The Raman signal can thus be used as a quick response code (QR code). This could help dermatologic diagnosis of physiological variations and presents a possible extension to pathological characterization.
Karasick, Michael S.; Strip, David R.
1996-01-01
A parallel computing system is described that comprises a plurality of uniquely labeled, parallel processors, each processor capable of modelling a three-dimensional object that includes a plurality of vertices, faces and edges. The system comprises a front-end processor for issuing a modelling command to the parallel processors, relating to a three-dimensional object. Each parallel processor, in response to the command and through the use of its own unique label, creates a directed-edge (d-edge) data structure that uniquely relates an edge of the three-dimensional object to one face of the object. Each d-edge data structure at least includes vertex descriptions of the edge and a description of the one face. As a result, each processor, in response to the modelling command, operates upon a small component of the model and generates results, in parallel with all other processors, without the need for processor-to-processor intercommunication.
NASA Astrophysics Data System (ADS)
Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Hong, Yang; Zuo, Depeng; Ren, Minglei; Lei, Tianjie; Liang, Ke
2018-01-01
Hydrological model calibration has been a hot issue for decades. The shuffled complex evolution method developed at the University of Arizona (SCE-UA) has been proved to be an effective and robust optimization approach. However, its computational efficiency deteriorates significantly when the amount of hydrometeorological data increases. In recent years, the rise of heterogeneous parallel computing has brought hope for the acceleration of hydrological model calibration. This study proposed a parallel SCE-UA method and applied it to the calibration of a watershed rainfall-runoff model, the Xinanjiang model. The parallel method was implemented on heterogeneous computing systems using OpenMP and CUDA. Performance testing and sensitivity analysis were carried out to verify its correctness and efficiency. Comparison results indicated that heterogeneous parallel computing-accelerated SCE-UA converged much more quickly than the original serial version and possessed satisfactory accuracy and stability for the task of fast hydrological model calibration.
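The expensive inner step that such a parallelization targets is the repeated evaluation of the calibration objective for many candidate parameter sets. The study's code uses OpenMP and CUDA; the sketch below only illustrates the same idea with Python multiprocessing and a dummy objective standing in for the Xinanjiang model run.

```python
# Illustrative only: evaluate a calibration objective for many parameter sets
# in parallel, the step that dominates SCE-UA run time. The objective here is
# a stand-in for running and scoring the rainfall-runoff model.
from multiprocessing import Pool
import random

def objective(params):
    return sum((p - 0.3) ** 2 for p in params)   # dummy "model error"

def random_candidate(n_params=5):
    return [random.random() for _ in range(n_params)]

if __name__ == "__main__":
    population = [random_candidate() for _ in range(64)]
    with Pool(processes=4) as pool:
        scores = pool.map(objective, population)   # evaluated concurrently
    best_score, best_params = min(zip(scores, population))
    print("best score:", round(best_score, 4))
```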
Iterative algorithms for large sparse linear systems on parallel computers
NASA Technical Reports Server (NTRS)
Adams, L. M.
1982-01-01
Algorithms are developed for assembling in parallel the sparse systems of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed and results of this model for the algorithms are given.
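A minimal serial sketch of one such stationary iteration (Jacobi) is given below; it is not the report's code, but it shows why the method maps naturally onto parallel hardware: each component update depends only on the previous iterate.

```python
# Jacobi iteration for a sparse system A x = b. Every component update uses
# only the previous iterate, so the sweep is embarrassingly parallel.
import numpy as np
from scipy.sparse import diags

n = 100
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # test matrix
b = np.ones(n)

d_inv = 1.0 / A.diagonal()
x = np.zeros(n)
for _ in range(500):
    x_new = x + d_inv * (b - A @ x)      # the matvec is the parallel kernel
    if np.linalg.norm(x_new - x, np.inf) < 1e-12:
        x = x_new
        break
    x = x_new
print("residual norm:", np.linalg.norm(b - A @ x))
```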
Two-polariton bound states in the Jaynes-Cummings-Hubbard model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Max T. C.; Law, C. K.
2011-05-15
We examine the eigenstates of the one-dimensional Jaynes-Cummings-Hubbard model in the two-excitation subspace. We discover that two-excitation bound states emerge when the ratio of vacuum Rabi frequency to the tunneling rate between cavities exceeds a critical value. We determine the critical value as a function of the quasimomentum quantum number, and indicate that the bound states carry a strong correlation in which the two polaritons appear to be spatially confined together.
NASA Astrophysics Data System (ADS)
Stefferson, Michael W.; Norris, Samantha L.; Vernerey, Franck J.; Betterton, Meredith D.; E Hough, Loren
2017-08-01
Crowded environments modify the diffusion of macromolecules, generally slowing their movement and inducing transient anomalous subdiffusion. The presence of obstacles also modifies the kinetics and equilibrium behavior of tracers. While previous theoretical studies of particle diffusion have typically assumed either impenetrable obstacles or binding interactions that immobilize the particle, in many cellular contexts bound particles remain mobile. Examples include membrane proteins or lipids with some entry and diffusion within lipid domains and proteins that can enter into membraneless organelles or compartments such as the nucleolus. Using a lattice model, we studied the diffusive movement of tracer particles which bind to soft obstacles, allowing tracers and obstacles to occupy the same lattice site. For sticky obstacles, bound tracer particles are immobile, while for slippery obstacles, bound tracers can hop without penalty to adjacent obstacles. In both models, binding significantly alters tracer motion. The type and degree of motion while bound is a key determinant of the tracer mobility: slippery obstacles can allow nearly unhindered diffusion, even at high obstacle filling fraction. To mimic compartmentalization in a cell, we examined how obstacle size and a range of bound diffusion coefficients affect tracer dynamics. The behavior of the model is similar in two and three spatial dimensions. Our work has implications for protein movement and interactions within cells.
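A toy two-dimensional version of this kind of lattice simulation is sketched below (our own illustration, not the authors' model or parameters): a tracer random-walks on a periodic lattice, becomes bound when it shares a site with an obstacle, and while bound is either immobile ("sticky") or may hop only onto adjacent obstacle sites ("slippery").

```python
# Toy lattice sketch of tracer diffusion with penetrable ("soft") obstacles.
# All parameter values are illustrative, not taken from the paper.
import random

L, fill, p_unbind, steps = 50, 0.3, 0.1, 20000
moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

random.seed(0)
obstacles = {(random.randrange(L), random.randrange(L))
             for _ in range(int(fill * L * L))}          # ~30% filling fraction

def walk(slippery):
    x = y = sx = sy = 0                 # lattice position and unwrapped displacement
    bound = (0, 0) in obstacles
    for _ in range(steps):
        if bound and random.random() > p_unbind:
            if not slippery:
                continue                # sticky: a bound tracer cannot move
            dx, dy = random.choice(moves)
            nx, ny = (x + dx) % L, (y + dy) % L
            if (nx, ny) not in obstacles:
                continue                # slippery: may only hop obstacle-to-obstacle
        else:
            dx, dy = random.choice(moves)   # free (or just-released) tracer moves anywhere
            nx, ny = (x + dx) % L, (y + dy) % L
        x, y, sx, sy = nx, ny, sx + dx, sy + dy
        bound = (x, y) in obstacles
    return (sx * sx + sy * sy) / steps  # mean-square displacement per step

print("sticky   :", walk(slippery=False))
print("slippery :", walk(slippery=True))
```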
Ficko, Bradley W; NDong, Christian; Giacometti, Paolo; Griswold, Karl E; Diamond, Solomon G
2017-05-01
Magnetic nanoparticles (MNPs) are an emerging platform for targeted diagnostics in cancer. An important component needed for translation of MNPs is the detection and quantification of targeted MNPs bound to tumor cells. This study explores the feasibility of a multifrequency nonlinear magnetic spectroscopic method that uses excitation and pickup coils and is capable of discriminating between quantities of bound and unbound MNPs in 0.5 ml samples of KB and Igrov human cancer cell lines. The method is tested over a range of five concentrations of MNPs from 0 to 80 μg/ml and five concentrations of cells from 50 to 400 000 cells per ml. A linear model applied to the magnetic spectroscopy data was able to simultaneously measure bound and unbound MNPs with agreement between the model-fit and lab assay measurements (p < 0.001). The method could detect < 2 μg of iron in bound and unbound MNPs in a 0.5 ml sample. The linear model parameters used to determine the quantities of bound and unbound nanoparticles in KB cells were also used to measure the bound and unbound MNPs in the Igrov cell line and vice versa. Nonlinear spectroscopic measurement of MNPs may be a useful method for studying targeted MNPs in oncology. Determining the quantity of bound and unbound MNPs in an unknown sample using a linear model represents an exciting opportunity to translate multifrequency nonlinear spectroscopy methods to in vivo applications where MNPs could be targeted to cancer cells.
Harris, Greg M.; Shazly, Tarek; Jabbarzadeh, Ehsan
2013-01-01
Significant effort has gone towards parsing out the effects of the surrounding microenvironment on the macroscopic behavior of stem cells. Many of the microenvironmental cues, however, are intertwined, and thus further studies are warranted to identify the intricate interplay among the conflicting downstream signaling pathways that ultimately guide a cell response. In this contribution, by patterning adhesive PEG (polyethylene glycol) hydrogels using Dip Pen Nanolithography (DPN), we demonstrate that substrate elasticity, subcellular elasticity, ligand density, and topography ultimately define mesenchymal stem cell (MSC) spreading and shape. Physical characteristics are parsed individually, with 7 kilopascal (kPa) hydrogel islands leading to smaller, spindle-shaped cells and 105 kPa hydrogel islands leading to larger, polygonal cell shapes. In a parallel effort, a finite element model was constructed to characterize and confirm experimental findings and aid as a predictive tool in modeling cell microenvironments. Signaling pathway inhibition studies suggested that RhoA is a key regulator of cell response to the cooperative effect of the tunable substrate variables. These results are significant for the engineering of cell-extracellular matrix interfaces and ultimately decoupling matrix-bound cues presented to cells in a tissue microenvironment for regenerative medicine. PMID:24282570
Load and Pi control flux through the branched kinetic cycle of myosin V.
Kad, Neil M; Trybus, Kathleen M; Warshaw, David M
2008-06-20
Myosin V is a processive actin-based motor protein that takes multiple 36-nm steps to deliver intracellular cargo to its destination. In the laser trap, applied load slows myosin V heavy meromyosin stepping and increases the probability of backsteps. In the presence of 40 mM phosphate (P(i)), both forward and backward steps become less load-dependent. From these data, we infer that P(i) release commits myosin V to undergo a highly load-dependent transition from a state in which ADP is bound to both heads and its lead head is trapped in a pre-powerstroke conformation. Increasing the residence time in this state by applying load increases the probability of backstepping or detachment. The kinetics of detachment indicate that myosin V can detach from actin at two distinct points in the cycle, one of which is turned off by the presence of P(i). We propose a branched kinetic model to explain these data. Our model includes P(i) release prior to the most load-dependent step in the cycle, implying that P(i) release and load both act as checkpoints that control the flux through two parallel pathways.
NASA Astrophysics Data System (ADS)
Ohdaira, Tetsushi
2014-07-01
Previous studies of cooperation employ the best decision, in which every player knows all information regarding the payoff matrix and selects the strategy with the highest payoff. Therefore, they do not discuss cooperation based on the altruistic decision with limited information (bounded rational altruistic decision). In addition, they do not cover the case where every player can submit his/her strategy several times in a match of the game. This paper is based on Ohdaira's reconsideration of the bounded rational altruistic decision, and also employs the framework of the prisoner's dilemma game (PDG) with sequential strategy. The distinction between this study and Ohdaira's reconsideration is that the former covers a model of multiple groups, whereas the latter deals with a model of only two groups. Ohdaira's reconsideration shows that the bounded rational altruistic decision facilitates much more cooperation in the PDG with sequential strategy than Ohdaira and Terano's bounded rational second-best decision does. However, the detail of cooperation of multiple groups based on the bounded rational altruistic decision has not been resolved yet. This study, therefore, shows how randomness in the network composed of multiple groups affects the increase of the average frequency of mutual cooperation (cooperation between groups) based on the bounded rational altruistic decision of multiple groups. We also discuss the results of the model in comparison with related studies which employ the best decision.
Determining relative error bounds for the CVBEM
Hromadka, T.V.
1985-01-01
The Complex Variable Boundary Element Method (CVBEM) provides a measure of relative error which can be utilized to subsequently reduce the error or provide information for further modeling analysis. By maximizing the relative error norm on each boundary element, a bound on the total relative error for each boundary element can be evaluated. This bound can be utilized to test CVBEM convergence, to analyze the effects of additional boundary nodal points in reducing the modeling error, and to evaluate the sensitivity of the resulting modeling error within a boundary element to the error produced in another boundary element as a function of geometric distance.
Wu, Xiao-Lin; Sun, Chuanyu; Beissinger, Timothy M; Rosa, Guilherme Jm; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel
2012-09-25
Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from serial computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Parallel Markov chain Monte Carlo algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of the parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that the use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs.
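Of the two approaches mentioned, the multiple-chains route is the easiest to sketch. The toy example below (not the paper's software) runs several independent Metropolis chains for a one-dimensional normal target in separate processes and pools the draws; the target and settings are purely illustrative.

```python
# Toy "multiple chains" parallel MCMC: independent Metropolis samplers for a
# standard-normal target, one chain per process. Illustrative only.
from multiprocessing import Pool
import math
import random

def log_target(theta):
    return -0.5 * theta * theta          # standard normal, up to a constant

def run_chain(seed, n_iter=20000, step=1.0):
    rng = random.Random(seed)
    theta, draws = 0.0, []
    for _ in range(n_iter):
        proposal = theta + rng.gauss(0.0, step)
        log_alpha = log_target(proposal) - log_target(theta)
        if log_alpha >= 0 or rng.random() < math.exp(log_alpha):
            theta = proposal
        draws.append(theta)
    return draws[n_iter // 2:]           # discard burn-in

if __name__ == "__main__":
    with Pool(4) as pool:
        chains = pool.map(run_chain, [1, 2, 3, 4])
    pooled = [x for chain in chains for x in chain]
    mean = sum(pooled) / len(pooled)
    var = sum((x - mean) ** 2 for x in pooled) / len(pooled)
    print(f"pooled mean ~ {mean:.2f}, variance ~ {var:.2f}")   # expect ~0 and ~1
```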
A model for optimizing file access patterns using spatio-temporal parallelism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boonthanome, Nouanesengsy; Patchett, John; Geveci, Berk
2013-01-01
For many years now, I/O read time has been recognized as the primary bottleneck for parallel visualization and analysis of large-scale data. In this paper, we introduce a model that can estimate the read time for a file stored in a parallel filesystem when given the file access pattern. Read times ultimately depend on how the file is stored and the access pattern used to read the file. The file access pattern will be dictated by the type of parallel decomposition used. We employ spatio-temporal parallelism, which combines both spatial and temporal parallelism, to provide greater flexibility to possible file access patterns. Using our model, we were able to configure the spatio-temporal parallelism to design optimized read access patterns that resulted in a speedup factor of approximately 400 over traditional file access patterns.
Optimisation of a parallel ocean general circulation model
NASA Astrophysics Data System (ADS)
Beare, M. I.; Stevens, D. P.
1997-10-01
This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.
Ceccon, Alberto; Schmidt, Thomas; Tugarinov, Vitali; Kotler, Samuel A; Schwieters, Charles D; Clore, G Marius
2018-05-23
Lipid-based micellar nanoparticles promote aggregation of huntingtin exon-1 peptides. Here we characterize the interaction of two such peptides, httNTQ7 and httNTQ10, comprising the N-terminal amphiphilic domain of huntingtin followed by 7 and 10 glutamine repeats, respectively, with 8 nm lipid micelles using NMR chemical exchange saturation transfer (CEST), circular dichroism and pulsed Q-band EPR. Exchange between free and micelle-bound httNTQn peptides occurs on the millisecond time scale with a KD ∼ 0.5-1 mM. Upon binding micelles, residues 1-15 adopt a helical conformation. Oxidation of Met7 to a sulfoxide reduces the binding affinity for micelles ∼3-4-fold and increases the length of the helix by a further two residues. A structure of the bound monomer unit is calculated from the backbone chemical shifts of the micelle-bound state obtained from CEST. Pulsed Q-band EPR shows that a monomer-dimer equilibrium exists on the surface of the micelles and that the two helices of the dimer adopt a parallel orientation, thereby bringing two disordered polyQ tails into close proximity, which may promote aggregation upon dissociation from the micelle surface.
NASA Astrophysics Data System (ADS)
Sun, Wen-Rong; Tian, Bo; Wang, Yu-Feng; Zhen, Hui-Ling
2015-06-01
Three-coupled fourth-order nonlinear Schrödinger equations describe the dynamics of alpha helical proteins with interspine coupling at higher order. Through symbolic computation and the binary Bell-polynomial approach, bilinear forms and N-soliton solutions for such equations are constructed. The key point lies in the introduction of auxiliary functions in the Bell-polynomial expression. Asymptotic analysis is applied to investigate the elastic interaction between two solitons: the two solitons keep their original amplitudes, energies and velocities invariant after the interaction except for the phase shifts. Soliton amplitudes are related to the energy distributed in the solitons of the three spines. Overtaking interaction, head-on interaction and bound-state solitons of two solitons are given. Bound states of three bright solitons arise when all of them propagate in parallel. Elastic interaction between the bound-state solitons and one bright soliton is shown. An increase of the lattice parameter can lead to an increase of the soliton velocity, that is, the interaction period becomes shorter. The solitons propagating along neighbouring spines are found to interact elastically. The solitons exhibited in this paper might be viewed as a possible carrier of bio-energy transport in protein molecules.
Auger mediated positron sticking on graphene and highly oriented pyrolytic graphite
NASA Astrophysics Data System (ADS)
Chirayath, V. A.; Chrysler, M.; McDonald, A.; Lim, Z.; Shastry, K.; Gladen, R.; Fairchild, A.; Koymen, A.; Weiss, A.
Positron annihilation induced Auger electron spectroscopy (PAES) measurements on 6-8 layer graphene grown on polycrystalline copper, and measurements on a highly oriented pyrolytic graphite (HOPG) sample, have indicated the presence of a bound surface state for positrons. Measurements carried out with positrons of kinetic energies lower than the electron work function of graphene or HOPG have shown emission of low energy electrons that is possible only through the Auger mediated positron sticking (AMPS) process. In this process the positron makes a transition from a positive energy scattering state to a bound surface state. The transition energy is coupled to a valence electron, which may then have enough energy to be ejected from the sample surface. Positrons bound in the surface state are highly localized in the direction perpendicular to the surface and delocalized parallel to it, which makes this process highly surface sensitive; it can thus be used to characterize graphene or graphite surfaces for open-volume defects and surface impurities. The measurements have also shown an extremely large low energy tail for the C KVV Auger transition at 263 eV, indicative of another physical process for low energy emission. This work was supported by NSF Grant No. DMR 1508719 and DMR 1338130.
NASA Technical Reports Server (NTRS)
Lou, John; Ferraro, Robert; Farrara, John; Mechoso, Carlos
1996-01-01
An analysis is presented of several factors influencing the performance of a parallel implementation of the UCLA atmospheric general circulation model (AGCM) on massively parallel computer systems. Several modifications to the original parallel AGCM code, aimed at improving its numerical efficiency, reducing interprocessor communication cost, improving load balance, and addressing issues affecting single-node code performance, are discussed.
Upper bounds on superpartner masses from upper bounds on the Higgs boson mass.
Cabrera, M E; Casas, J A; Delgado, A
2012-01-13
The LHC is putting bounds on the Higgs boson mass. In this Letter we use those bounds to constrain the minimal supersymmetric standard model (MSSM) parameter space using the fact that, in supersymmetry, the Higgs mass is a function of the masses of the sparticles, and therefore an upper bound on the Higgs mass translates into upper bounds on the masses of the superpartners. We show that, although current bounds do not constrain the MSSM parameter space from above, once the Higgs mass bound improves, large regions of this parameter space will be excluded, putting upper bounds on supersymmetry (SUSY) masses. On the other hand, for the case of split-SUSY we show that, for moderate or large tanβ, the present bounds on the Higgs mass imply that the common mass for scalars cannot be greater than 10^11 GeV. We show how these bounds will evolve as the LHC continues to improve the limits on the Higgs mass.
NASA Technical Reports Server (NTRS)
Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.
2001-01-01
Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to the distributed nature of the electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the χ2 per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius. Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
NASA Astrophysics Data System (ADS)
Bo, Zhang; Jinjiang, Zhang; Shuyu, Yan; Jiang, Liu; Jinhai, Zhang; Zhongpei, Zhang
2010-05-01
The phenomenon of kink banding is well known throughout the engineering and geophysical sciences. Associated with layered structures compressed in a layer-parallel direction, it arises for example in stratified geological systems under tectonic compression. Our work documents that it is also possible to develop super-large-scale kink-bands in sedimentary sequences. We interpret the Bachu fold uplift belt of the central Tarim basin in western China to be composed of detachment folds flanked by megascopic-scale kink-bands. Previous principal fold models for the Bachu uplift belt incorporated components of large-scale thrust faulting, such as the imbricate fault-related fold model and the high-angle, reverse-faulted detachment fold model. Based on our observations in the outcrops and on two-dimensional seismic profiles, we interpret the first-order structures in the region to be kink-band-style detachment folds that accommodate regional shortening, with thrust faulting as a second-order deformation style occurring on the limbs of the detachment folds or at the cores of some folds to accommodate the further strain of these folds. The belt mainly consists of detachment folds overlying a ductile decollement layer. The crests of the detachment folds are bounded by large-scale kink-bands, which are zones of angularly folded strata. These low-signal-to-noise, low-reflectivity zones observed on seismic profiles across the Bachu belt are poorly imaged sections, which result from steeply dipping bedding in the kink-bands. The substantial width (beyond 200 m) of these low-reflectivity zones, their sub-parallel edges in cross section, and their orientations at a high angle to layering, between 50 and 60 degrees, as well as their conjugate geometry, support a kink-band interpretation. The kink-band interpretation model is based on the Maximum Effective Moment Criterion for continuous deformation, rather than the Mohr-Coulomb criterion for brittle fracture. Seismic modeling is done to identify the characteristics and nature of seismic waves within the kink-band and its fold structure, which supplies further evidence for the kink-band interpretation in the region.
NASA Astrophysics Data System (ADS)
Georgiev, K.; Zlatev, Z.
2010-11-01
The Danish Eulerian Model (DEM) is an Eulerian model for studying the transport of air pollutants on a large scale. Originally, the model was developed at the National Environmental Research Institute of Denmark. The model computational domain covers Europe and some neighbouring parts of the Atlantic Ocean, Asia and Africa. If the DEM is to be applied using fine grids, its discretization leads to a huge computational problem. This implies that such a model as DEM must be run only on high-performance computer architectures. The implementation and tuning of such a complex large-scale model on each different computer is a non-trivial task. Here, some comparative results of running this model on different kinds of vector computers (CRAY C92A, Fujitsu, etc.), parallel computers with distributed memory (IBM SP, CRAY T3E, Beowulf clusters, Macintosh G4 clusters, etc.), parallel computers with shared memory (SGI Origin, SUN, etc.) and parallel computers with two levels of parallelism (IBM SMP, IBM BlueGene/P, clusters of multiprocessor nodes, etc.) will be presented. The main idea in the parallel version of DEM is a domain partitioning approach. The effective use of the cache and hierarchical memories of modern computers is discussed, as well as the performance, speed-ups and efficiency achieved. The parallel code of DEM, created by using the MPI standard library, appears to be highly portable and shows good efficiency and scalability on different kinds of vector and parallel computers. Some important applications of the computer model output are briefly presented.
RICE bounds on cosmogenic neutrino fluxes and interactions
NASA Astrophysics Data System (ADS)
Hussain, Shahid
2005-04-01
Assuming standard model interactions we calculate shower rates induced by cosmogenic neutrinos in ice, and we bound the cosmogenic neutrino fluxes using RICE 2000-2004 results. Next we assume new interactions due to extra-dimensional, low-scale gravity (i.e. black hole production and decay; graviton mediated deep inelastic scattering) and calculate enhanced shower rates induced by cosmogenic neutrinos in ice. With the help of RICE 2000-2004 results, we survey bounds on low scale gravity parameters for a range of cosmogenic neutrino flux models.
On bound-states of the Gross Neveu model with massive fundamental fermions
NASA Astrophysics Data System (ADS)
Frishman, Yitzhak; Sonnenschein, Jacob
2018-01-01
In the search for QFTs that admit bound states, we reinvestigate the two-dimensional Gross-Neveu model, but with massive fermions. By computing the self-energy for the auxiliary bound-state field and the effective potential, we show that there are no bound states around the lowest minimum, but there is a meta-stable bound state around the other, local, minimum. The latter decays by tunneling. We determine the dependence of its lifetime on the fermion mass and coupling constant.
Bounds on quantum confinement effects in metal nanoparticles
NASA Astrophysics Data System (ADS)
Blackman, G. Neal; Genov, Dentcho A.
2018-03-01
Quantum size effects on the permittivity of metal nanoparticles are investigated using the quantum box model. Explicit upper and lower bounds are derived for the permittivity and relaxation rates due to quantum confinement effects. These bounds are verified numerically, and the size dependence and frequency dependence of the empirical Drude size parameter are extracted from the model. Results suggest that the common practice of empirically modifying the dielectric function can lead to inaccurate predictions for highly uniform distributions of finite-sized particles.
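The empirical practice the bounds are meant to scrutinize is usually implemented by adding a size-dependent term to the Drude damping, γ(R) = γ_bulk + A·v_F/R. The sketch below illustrates that practice with rough, gold-like parameter values (all numbers are assumptions, not taken from the paper).

```python
# Drude permittivity with the common empirical size correction
# gamma(R) = gamma_bulk + A * v_F / R. Parameter values are rough assumptions.
omega_p = 1.37e16      # plasma frequency, rad/s
gamma_b = 1.05e14      # bulk relaxation rate, rad/s
v_f = 1.4e6            # Fermi velocity, m/s
A = 1.0                # empirical size parameter, order unity

def eps_drude(omega, radius):
    gamma = gamma_b + A * v_f / radius
    return 1.0 - omega_p**2 / (omega * (omega + 1j * gamma))

omega = 2.4e15         # optical frequency, rad/s
for radius in (2e-9, 5e-9, 20e-9):
    eps = eps_drude(omega, radius)
    print(f"R = {radius * 1e9:4.0f} nm: eps = {eps.real:7.2f} {eps.imag:+6.2f}i")
```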
NASA Astrophysics Data System (ADS)
Santos, Jander P.; Sá Barreto, F. C.
2016-01-01
Spin correlation identities for the Blume-Emery-Griffiths model on Kagomé lattice are derived and combined with rigorous correlation inequalities lead to upper bounds on the critical temperature. From the spin correlation identities the mean field approximation and the effective field approximation results for the magnetization, the critical frontiers and the tricritical points are obtained. The rigorous upper bounds on the critical temperature improve over those effective-field type theories results.
An OpenACC-Based Unified Programming Model for Multi-accelerator Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jungwon; Lee, Seyong; Vetter, Jeffrey S
2015-01-01
This paper proposes a novel SPMD programming model of OpenACC. Our model integrates the different granularities of parallelism from vector-level parallelism to node-level parallelism into a single, unified model based on OpenACC. It allows programmers to write programs for multiple accelerators using a uniform programming model whether they are in shared or distributed memory systems. We implement a prototype of our model and evaluate its performance with a GPU-based supercomputer using three benchmark applications.
NASA Astrophysics Data System (ADS)
Eslami, Ghiyam; Esmaeilzadeh, Esmaeil; Pérez, Alberto T.
2016-10-01
The up-and-down motion of a spherical conductive particle in a dielectric viscous fluid driven by a DC electric field between two parallel electrodes was investigated. A nonlinear differential equation governing the particle dynamics was derived, based on Newton's second law of mechanics, and solved numerically. All the pertinent dimensionless groups were extracted. In contrast to similar previous works, hydrodynamic interaction between the particle and the electrodes, as well as image electric forces, has been taken into account. Furthermore, the influence of the microdischarge produced between the electrodes and the approaching particle on the particle dynamics has been included in the model. The model results were compared with experimental data available in the literature, as well as with some additional experimental data obtained through the present study, showing very good agreement. The results indicate that the wall hydrodynamic effect and the dielectric liquid ionic conductivity are very dominant factors determining the particle trajectory. A lower bound is derived for the charge transferred to the particle while rebounding from an electrode. It is found that the time and length scales of the post-microdischarge motion of the particle can be as small as a microsecond and a micrometer, respectively. The model is able to predict the so-called settling/dwelling time phenomenon for the first time.
Aggregating quantum repeaters for the quantum internet
NASA Astrophysics Data System (ADS)
Azuma, Koji; Kato, Go
2017-09-01
The quantum internet holds promise for accomplishing quantum teleportation and unconditionally secure communication freely between arbitrary clients all over the globe, as well as the simulation of quantum many-body systems. For such a quantum internet protocol, a general fundamental upper bound on the obtainable entanglement or secret key has been derived [K. Azuma, A. Mizutani, and H.-K. Lo, Nat. Commun. 7, 13523 (2016), 10.1038/ncomms13523]. Here we consider its converse problem. In particular, we present a universal protocol constructible from any given quantum network, which is based on running quantum repeater schemes in parallel over the network. For arbitrary lossy optical channel networks, our protocol has no scaling gap with the upper bound, even based on existing quantum repeater schemes. In an asymptotic limit, our protocol works as an optimal entanglement or secret-key distribution over any quantum network composed of practical channels such as erasure channels, dephasing channels, bosonic quantum amplifier channels, and lossy optical channels.
New tetrameric forms of the rotavirus NSP4 with antiparallel helices.
Kumar, Sushant; Ramappa, Raghavendra; Pamidimukkala, Kiranmayee; Rao, C D; Suguna, K
2018-06-01
Rotavirus nonstructural protein 4, the first viral enterotoxin to be identified, is a multidomain, multifunctional glycoprotein. Earlier, we reported a Ca2+-bound coiled-coil tetrameric structure of the diarrhea-inducing region of NSP4 from the rotavirus strains SA11 and I321 and a Ca2+-free pentameric structure from the rotavirus strain ST3, all with a parallel arrangement of α-helices. pH was found to determine the oligomeric state: a basic pH favoured a tetramer, whereas an acidic pH favoured a pentamer. Here, we report two novel forms of the coiled-coil region of NSP4 from the bovine rotavirus strains MF66 and NCDV. These crystallized at acidic pH, forming antiparallel coiled-coil tetrameric structures without any bound Ca2+ ion. Structural and mutational studies of the coiled-coil regions of NSP4 revealed that the nature of the residue at position 131 (Tyr/His) plays an important role in the observed structural diversity.
Rovibrational bound states of SO2 isotopologues. II: Total angular momentum J = 11-20
NASA Astrophysics Data System (ADS)
Kumar, Praveen; Poirier, Bill
2015-11-01
In a two-part series, the rovibrational bound states of SO2 are investigated in comprehensive detail, for all four stable sulfur isotopes 32-34,36S. All low-lying rovibrational energy levels, both permutation-symmetry-allowed and not allowed, are computed for all values of total angular momentum in the range J = 0-20. The calculations have been carried out using the ScalIT suite of parallel codes. The present study (Paper II) examines the J = 11-20 rovibrational levels, providing symmetry and rovibrational labels for every computed state, relying on a new lambda-doublet splitting technique to make completely unambiguous assignments. Isotope shifts are analyzed, as is the validity of "J-shifting" as a predictor of rotational fine structure. Among other ramifications, this work will facilitate understanding of the mass-independent fractionation of sulfur isotopes (S-MIF) observed in the Archean rock record, particularly as this may have arisen from self-shielding. S-MIF, in turn, is highly relevant in the broader context of understanding the "oxygen revolution".
Rovibrational bound states of SO2 isotopologues. I: Total angular momentum J = 0-10
NASA Astrophysics Data System (ADS)
Kumar, Praveen; Ellis, Joseph; Poirier, Bill
2015-04-01
Isotopic variation of the rovibrational bound states of SO2 for the four stable sulfur isotopes 32-34,36S is investigated in comprehensive detail. In a two-part series, we compute the low-lying energy levels for all values of total angular momentum in the range J = 0-20. All rovibrational levels are computed, to an extremely high level of numerical convergence. The calculations have been carried out using the ScalIT suite of parallel codes. The present study (Paper I) examines the J = 0-10 rovibrational levels, providing unambiguous symmetry and rovibrational label assignments for each computed state. The calculated vibrational energy levels exhibit very good agreement with previously reported experimental and theoretical data. Rovibrational energy levels, calculated without any Coriolis approximations, are reported here for the first time. Among other potential ramifications, this data will facilitate understanding of the origin of mass-independent fractionation of sulfur isotopes in the Archean rock record-of great relevance for understanding the "oxygen revolution".
Han, Xiaomeng; Zhou, Zhen; Mei, Xiaojie; Ma, Yan; Xie, Zhenfang
2018-02-01
In order to investigate effects of waste activated sludge (WAS) fermentation liquid on anoxic/oxic-membrane bioreactor (A/O-MBR), two A/O-MBRs with and without WAS fermentation liquid addition were operated in parallel. Results show that addition of WAS fermentation liquid clearly improved denitrification efficiency without deterioration of nitrification, while severe membrane fouling occurred. WAS fermentation liquid resulted in an elevated production of proteins and humic acids in bound extracellular polymeric substance (EPS) and release of organic matter with high MW fractions in soluble microbial product (SMP) and loosely bound EPS (LB-EPS). Measurement of deposition rate and fluid structure confirmed increased fouling potential of SMP and LB-EPS. γ-Proteobacteria and Ferruginibacter, which can secrete and export EPS, were also found to be abundant in the MBR with WAS fermentation liquid. It is implied that when WAS fermentation liquid was applied, some operational steps to control membrane fouling should be employed.
Computing row and column counts for sparse QR and LU factorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, John R.; Li, Xiaoye S.; Ng, Esmond G.
2001-01-01
We present algorithms to determine the number of nonzeros in each row and column of the factors of a sparse matrix, for both the QR factorization and the LU factorization with partial pivoting. The algorithms use only the nonzero structure of the input matrix, and run in time nearly linear in the number of nonzeros in that matrix. They may be used to set up data structures or schedule parallel operations in advance of the numerical factorization. The row and column counts we compute are upper bounds on the actual counts. If the input matrix is strong Hall and there is no coincidental numerical cancellation, the counts are exact for QR factorization and are the tightest bounds possible for LU factorization. These algorithms are based on our earlier work on computing row and column counts for sparse Cholesky factorization, plus an efficient method to compute the column elimination tree of a sparse matrix without explicitly forming the product of the matrix and its transpose.
An embedded multi-core parallel model for real-time stereo imaging
NASA Astrophysics Data System (ADS)
He, Wenjing; Hu, Jian; Niu, Jingyu; Li, Chuanrong; Liu, Guangyu
2018-04-01
Real-time processing on embedded systems will enhance the applicability of stereo imaging for LiDAR and hyperspectral sensors. Research on task partitioning and scheduling strategies for embedded multiprocessor systems started relatively late compared with that for PC computers. In this paper, aimed at an embedded multi-core processing platform, a parallel model for stereo imaging is studied and verified. After analyzing the computational load, throughput capacity and buffering requirements, a two-stage pipeline parallel model based on message passing is established. This model can be applied to fast stereo imaging for airborne sensors with various characteristics. To demonstrate the feasibility and effectiveness of the parallel model, parallel software was designed using test flight data, based on the 8-core DSP processor TMS320C6678. The results indicate that the design performed well in workload distribution and achieved a speed-up ratio of up to 6.4.
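The two-stage pipeline idea itself is independent of the DSP hardware; the toy sketch below (not the embedded implementation) shows the pattern with two processes that overlap preprocessing and "imaging" through a bounded message queue.

```python
# Toy two-stage pipeline based on message passing: stage 1 preprocesses frames,
# stage 2 consumes them, and the two stages overlap in time. Illustrative only.
from multiprocessing import Process, Queue

def stage1(frames, channel):
    for frame in frames:
        channel.put([2 * x for x in frame])   # stand-in for preprocessing
    channel.put(None)                         # end-of-stream marker

def stage2(channel):
    while True:
        frame = channel.get()
        if frame is None:
            break
        print("imaged frame, checksum =", sum(frame))   # stand-in for stereo imaging

if __name__ == "__main__":
    frames = [[i, i + 1, i + 2] for i in range(5)]
    channel = Queue(maxsize=2)                # bounded buffer between the stages
    p1 = Process(target=stage1, args=(frames, channel))
    p2 = Process(target=stage2, args=(channel,))
    p1.start(); p2.start()
    p1.join(); p2.join()
```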
Fang, Guor-Cheng; Lin, Yen-Heng; Zheng, Yu-Cheng
2016-02-01
The main purpose of this study was to monitor ambient air particles and particulate-bound mercury Hg(p) in total suspended particulate (TSP) concentrations and dry deposition at the Hung Kuang (Traffic), Taichung airport and Westing Park sampling sites during the daytime and nighttime, from 2011 to 2012. In addition, the calculated/measured dry deposition flux ratios of ambient air particles and particulate-bound mercury Hg(p) were also studied with Baklanov & Sorensen and the Williams models. For a particle size of 10 μm, the Baklanov & Sorensen model yielded better predictions of dry deposition of ambient air particulates and particulate-bound mercury Hg(p) at the Hung Kuang (Traffic), Taichung airport and Westing Park sampling site during the daytime and nighttime sampling periods. However, for particulates with sizes 20-23 μm, the results obtained in the study reveal that the Williams model provided better prediction results for ambient air particulates and particulate-bound mercury Hg(p) at all sampling sites in this study.
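In generic terms (not the specific Baklanov & Sorensen or Williams formulations), a dry deposition flux is estimated as the ambient concentration times a modeled deposition velocity, and the comparison in the abstract is the ratio of that estimate to the measured flux. The numbers below are hypothetical.

```python
# Generic dry deposition estimate: flux = concentration * deposition velocity,
# compared with a measured flux via the calculated/measured ratio.
# All values are hypothetical and only illustrate the bookkeeping.
measured_flux = 150.0        # mg m^-2 day^-1
concentration = 120.0        # TSP concentration, ug m^-3
deposition_velocity = 1.5    # modeled deposition velocity, cm s^-1

# unit conversion: (ug m^-3)(cm s^-1) -> mg m^-2 day^-1
calculated_flux = concentration * (deposition_velocity / 100.0) * 86400.0 / 1000.0
print("calculated flux:", round(calculated_flux, 1), "mg m^-2 day^-1")
print("calculated/measured ratio:", round(calculated_flux / measured_flux, 2))
```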
Are stock prices too volatile to be justified by the dividend discount model?
NASA Astrophysics Data System (ADS)
Akdeniz, Levent; Salih, Aslıhan Altay; Ok, Süleyman Tuluğ
2007-03-01
This study investigates excess stock price volatility using the variance bound framework of LeRoy and Porter [The present-value relation: tests based on implied variance bounds, Econometrica 49 (1981) 555-574] and of Shiller [Do stock prices move too much to be justified by subsequent changes in dividends? Am. Econ. Rev. 71 (1981) 421-436.]. The conditional variance bound relationship is examined using cross-sectional data simulated from the general equilibrium asset pricing model of Brock [Asset prices in a production economy, in: J.J. McCall (Ed.), The Economics of Information and Uncertainty, University of Chicago Press, Chicago (for N.B.E.R.), 1982]. Results show that the conditional variance bounds hold, hence, our hypothesis of the validity of the dividend discount model cannot be rejected. Moreover, in our setting, markets are efficient and stock prices are neither affected by herd psychology nor by the outcome of noise trading by naive investors; thus, we are able to control for market efficiency. Consequently, we show that one cannot infer any conclusions about market efficiency from the unconditional variance bounds tests.
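The variance bound being tested can be checked numerically on simulated data. The sketch below (our toy example, not the paper's general-equilibrium setup) builds the ex post rational price p* from realized dividends at a constant discount rate and compares var(p) with var(p*); under the dividend discount model the former should not exceed the latter, although a single simulated path may deviate from the population inequality.

```python
# Toy check of the LeRoy-Porter/Shiller variance bound var(p) <= var(p*),
# where p* is the ex post rational price built from realized dividends.
# Dividend process, discount rate and price rule are all illustrative.
import random

random.seed(1)
r, T = 0.05, 300
dividends = [1.0]
for _ in range(T - 1):
    dividends.append(max(0.01, dividends[-1] + random.gauss(0.0, 0.05)))

p_star = [0.0] * T
p_star[-1] = dividends[-1] / r                      # terminal condition
for t in range(T - 2, -1, -1):
    p_star[t] = (dividends[t + 1] + p_star[t + 1]) / (1.0 + r)

prices = [d / r for d in dividends]                 # rational price for martingale dividends

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print("var(p)  =", round(var(prices), 2))
print("var(p*) =", round(var(p_star), 2))
print("bound var(p) <= var(p*) holds on this path:", var(prices) <= var(p_star))
```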
Sekerková, Gabriella; Zheng, Lili; Loomis, Patricia A.; Changyaleket, Benjarat; Whitlon, Donna S.; Mugnaini, Enrico; Bartles, James R.
2010-01-01
Espins are associated with the parallel actin bundles of hair cell stereocilia and are the target of mutations that cause deafness and vestibular dysfunction in mice and humans. Here, we report that espins are also concentrated in the microvilli of a number of other sensory cells: vomeronasal organ sensory neurons, solitary chemoreceptor cells, taste cells and Merkel cells. Moreover, we show that hair cells and these other sensory cells contain novel espin isoforms that arise from a different transcriptional start site and differ significantly from other espin isoforms in their complement of ligand-binding activities and their effects on actin polymerization. The novel espin isoforms of sensory cells bundled actin filaments with high affinity in a Ca2+-resistant fashion, bound actin monomer via a WASP homology 2 domain, bound profilin via a single proline-rich peptide, and caused a dramatic elongation of microvillus-type parallel actin bundles in transfected epithelial cells. In addition, the novel espin isoforms of sensory cells differed from other espin isoforms in that they potently inhibited actin polymerization in vitro, did not bind the Src homology 3 domain of the adapter protein insulin receptor substrate p53 and did not bind the acidic, signaling phospholipid phosphatidylinositol 4,5-bisphosphate. Thus, the espins constitute a family of multifunctional actin cytoskeletal regulatory proteins with the potential to differentially influence the organization, dimensions, dynamics and signaling capabilities of the actin filament-rich, microvillus-type specializations that mediate sensory transduction in a variety of mechanosensory and chemosensory cells. PMID:15190118
Abbas, Ash Mohammad
2012-01-01
In this paper, we describe some bounds and inequalities relating the h-index, g-index, e-index, and generalized impact factor. We derive the bounds and inequalities relating these indexing parameters from their basic definitions, without assuming any continuous model to be followed by any of them. We verify the theorems using citation data for five Price Medalists. We observe that the lower bound for the h-index given by Theorem 2, [formula: see text], g ≥ 1, is more accurate than the Schubert-Glanzel relation h ∝ C^(2/3) P^(-1/3) for a proportionality constant of 1, where C is the number of citations and P is the number of papers referenced. Also, the values of the h-index obtained using Theorem 2 outperform those obtained using the Egghe-Liang-Rousseau power law model for the given citation data of Price Medalists. Further, we computed the values of the upper bound on the g-index given by Theorem 3, g ≤ (h + e), where e denotes the value of the e-index. We observe that the upper bound on the g-index given by Theorem 3 is reasonably tight for the given citation records of Price Medalists.
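The index definitions involved in these bounds are easy to state in code; the sketch below computes h, g and e for a made-up citation record and checks the Theorem 3 upper bound g ≤ h + e. The citation counts are invented, and the Theorem 2 lower bound is not reproduced here because its formula is elided in the abstract above.

```python
# Hedged sketch of the standard h-, g- and e-index definitions, used only to
# illustrate the Theorem 3 bound g <= h + e on a synthetic citation record.
import math

def h_index(cites):
    c = sorted(cites, reverse=True)
    return sum(1 for i, x in enumerate(c, start=1) if x >= i)

def g_index(cites):
    c = sorted(cites, reverse=True)
    total, g = 0, 0
    for i, x in enumerate(c, start=1):
        total += x
        if total >= i * i:      # top g papers together have >= g^2 citations
            g = i
    return g

def e_index(cites):
    c = sorted(cites, reverse=True)
    h = h_index(cites)
    return math.sqrt(sum(c[:h]) - h * h)   # excess citations in the h-core

cites = [120, 80, 40, 22, 15, 9, 6, 3, 1]  # made-up citation record
h, g, e = h_index(cites), g_index(cites), e_index(cites)
print(h, g, e, g <= h + e)                 # checks the Theorem 3 bound
```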
Absolute Lower Bound on the Bounce Action
NASA Astrophysics Data System (ADS)
Sato, Ryosuke; Takimoto, Masahiro
2018-03-01
The decay rate of a false vacuum is determined by the minimal-action solution of the tunneling field: the bounce. In this Letter, we focus on models with scalar fields which have a canonical kinetic term in N (> 2)-dimensional Euclidean space, and derive an absolute lower bound on the bounce action. In the case of four-dimensional space, we show the bounce action is generically larger than 24/λ_cr, where λ_cr ≡ max[-4V(ϕ)/|ϕ|⁴] with the false vacuum being at ϕ = 0 and V(0) = 0. We derive this bound on the bounce action without solving the equation of motion explicitly. Our bound is derived by a quite simple argument, and it provides useful information even if it is difficult to obtain the explicit form of the bounce solution. Our bound offers a sufficient condition for the stability of a false vacuum, and it is useful as a quick check on vacuum stability for given models. Our bound can be applied to a broad class of scalar potentials with any number of scalar fields. We also discuss a necessary condition for the bounce action to take a value close to this lower bound.
A Lower Bound on Adiabatic Heating of Compressed Turbulence for Simulation and Model Validation
Davidovits, Seth; Fisch, Nathaniel J.
2017-03-31
The energy in turbulent flow can be amplified by compression, when the compression occurs on a timescale shorter than the turbulent dissipation time. This mechanism may play a part in sustaining turbulence in various astrophysical systems, including molecular clouds. The amount of turbulent amplification depends on the net effect of the compressive forcing and turbulent dissipation. By giving an argument for a bound on this dissipation, we give a lower bound for the scaling of the turbulent velocity with compression ratio in compressed turbulence. That is, turbulence undergoing compression will be enhanced at least as much as the bound given here, subject to a set of caveats that will be outlined. Used as a validation check, this lower bound suggests that some models of compressing astrophysical turbulence are too dissipative. As a result, the technique used highlights the relationship between compressed turbulence and decaying turbulence.
Do Reuss and Voigt Bounds Really Bound in High-Pressure Rheology Experiments?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen,J.; Li, L.; Yu, T.
2006-01-01
Energy dispersive synchrotron x-ray diffraction is carried out to measure differential lattice strains in polycrystalline Fe₂SiO₄ (fayalite) and MgO samples using a multi-element solid state detector during high-pressure deformation. The theory of elastic modeling with Reuss (iso-stress) and Voigt (iso-strain) bounds is used to evaluate the aggregate stress and the weight parameter, α (0 ≤ α ≤ 1), of the two bounds. Results under the elastic assumption quantitatively demonstrate that a highly stressed sample in high-pressure experiments reasonably approximates an iso-stress state. However, when the sample is plastically deformed, the Reuss and Voigt bounds are no longer valid (α exceeds 1). Instead, if the plastic slip systems of the sample are known (e.g. in the case of MgO), the aggregate property can be modeled using a visco-plastic self-consistent theory.
Systematic assignment of Feshbach resonances via an asymptotic bound state model
NASA Astrophysics Data System (ADS)
Goosen, Maikel; Kokkelmans, Servaas
2008-05-01
We present an Asymptotic Bound state Model (ABM), which is useful for predicting Feshbach resonances. The model utilizes asymptotic properties of the interaction potentials to represent coupled molecular wavefunctions. The bound states of this system give rise to Feshbach resonances, localized at the magnetic fields where these bound states intersect the scattering threshold. This model was very successful in assigning measured Feshbach resonances in an ultracold mixture of ⁶Li and ⁴⁰K atoms [E. Wille, F.M. Spiegelhalder, G. Kerner, D. Naik, A. Trenkwalder, G. Hendl, F. Schreck, R. Grimm, T.G. Tiecke, J.T.M. Walraven, S.J.J.M.F. Kokkelmans, E. Tiesinga, P.S. Julienne, arXiv:0711.2916]. For this system, the accuracy of the determined scattering lengths is comparable to full coupled-channels results. However, it was not possible to predict the widths of the resonances. We discuss how an incorporation of threshold effects will improve the model, and we apply it to a mixture of ⁸⁷Rb and ¹³³Cs atoms, where Feshbach resonances have recently been measured.
Minimizers with Bounded Action for the High-Dimensional Frenkel-Kontorova Model
NASA Astrophysics Data System (ADS)
Miao, Xue-Qing; Wang, Ya-Nan; Qin, Wen-Xin
In Aubry-Mather theory for monotone twist maps or for one-dimensional Frenkel-Kontorova (FK) model with nearest neighbor interactions, each global minimizer (minimal energy configuration) is naturally Birkhoff. However, this is not true for the one-dimensional FK model with non-nearest neighbor interactions or for the high-dimensional FK model. In this paper, we study the Birkhoff property of minimizers with bounded action for the high-dimensional FK model.
NASA Technical Reports Server (NTRS)
White, Allan L.; Palumbo, Daniel L.
1991-01-01
Semi-Markov processes have proved to be an effective and convenient tool for constructing models of systems that achieve reliability by redundancy and reconfiguration. These models are able to depict complex system architectures and to capture the dynamics of fault arrival and system recovery. A disadvantage of this approach is that the models can be extremely large, which poses both a modeling and a computational problem. Techniques are needed to reduce the model size. Because these systems are used in critical applications where failure can be expensive, there must be an analytically derived bound for the error produced by the model reduction technique. A model reduction technique called trimming is presented that can be applied to a popular class of systems. Automatic model generation programs were written to help the reliability analyst produce models of complex systems. Trimming is easy to implement and its error bound is easy to compute; hence, the method lends itself to inclusion in an automatic model generator.
Efficiency and its bounds for a quantum Einstein engine at maximum power.
Yan, H; Guo, Hao
2012-11-01
We study a quantum thermal engine model for which the heat transfer law is determined by Einstein's theory of radiation. The working substance of the quantum engine is assumed to be a two-level quantum system of which the constituent particles obey Maxwell-Boltzmann (MB), Fermi-Dirac (FD), or Bose-Einstein (BE) distributions, respectively, at equilibrium. The thermal efficiency and its bounds at maximum power of these models are derived and discussed in the long and short thermal contact time limits. The similarity and difference between these models are discussed. We also compare the efficiency bounds of this quantum thermal engine to those of its classical counterpart.
NASA Astrophysics Data System (ADS)
Lian, Yanping; Lin, Stephen; Yan, Wentao; Liu, Wing Kam; Wagner, Gregory J.
2018-05-01
In this paper, a parallelized 3D cellular automaton computational model is developed to predict grain morphology for solidification of metal during the additive manufacturing process. Solidification phenomena are characterized by highly localized events, such as the nucleation and growth of multiple grains. As a result, parallelization requires careful treatment of load balancing between processors as well as interprocess communication in order to maintain a high parallel efficiency. We give a detailed summary of the formulation of the model, as well as a description of the communication strategies implemented to ensure parallel efficiency. Scaling tests on a representative problem with about half a billion cells demonstrate parallel efficiency of more than 80% on 8 processors and around 50% on 64; loss of efficiency is attributable to load imbalance due to near-surface grain nucleation in this test problem. The model is further demonstrated through an additive manufacturing simulation with resulting grain structures showing reasonable agreement with those observed in experiments.
Karasick, M.S.; Strip, D.R.
1996-01-30
A parallel computing system is described that comprises a plurality of uniquely labeled, parallel processors, each processor capable of modeling a three-dimensional object that includes a plurality of vertices, faces and edges. The system comprises a front-end processor for issuing a modeling command to the parallel processors, relating to a three-dimensional object. Each parallel processor, in response to the command and through the use of its own unique label, creates a directed-edge (d-edge) data structure that uniquely relates an edge of the three-dimensional object to one face of the object. Each d-edge data structure at least includes vertex descriptions of the edge and a description of the one face. As a result, each processor, in response to the modeling command, operates upon a small component of the model and generates results, in parallel with all other processors, without the need for processor-to-processor intercommunication. 8 figs.
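A minimal sketch of a directed-edge record along the lines described above is given below; the field names and the helper that builds one d-edge per boundary edge of a face are assumptions for illustration, not the patent's actual data layout.

```python
# Hedged sketch of a directed-edge (d-edge) record: each record ties one edge
# of the solid to exactly one face, so a processor can operate on its own
# d-edges without processor-to-processor intercommunication.
from dataclasses import dataclass
from typing import List, Tuple

Vertex = Tuple[float, float, float]

@dataclass(frozen=True)
class DEdge:
    tail: Vertex          # vertex description of the edge's start point
    head: Vertex          # vertex description of the edge's end point
    face_id: int          # the one face this directed edge belongs to
    owner_label: int      # unique label of the processor that created it

def make_dedges_for_face(face_id: int, vertices: List[Vertex],
                         owner_label: int) -> List[DEdge]:
    """Create one d-edge per boundary edge of a polygonal face."""
    n = len(vertices)
    return [DEdge(vertices[i], vertices[(i + 1) % n], face_id, owner_label)
            for i in range(n)]

# A single triangular face handled by processor 3, independently of all others.
tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
for e in make_dedges_for_face(face_id=7, vertices=tri, owner_label=3):
    print(e)
```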
Efficiency and its bounds for thermal engines at maximum power using Newton's law of cooling.
Yan, H; Guo, Hao
2012-01-01
We study a thermal engine model for which Newton's law of cooling is obeyed during heat transfer processes. The thermal efficiency and its bounds at maximum output power are derived and discussed. This model, though quite simple, can be applied not only to Carnot engines but also to four other types of engines. In the long thermal contact time limit, new bounds, tighter than those previously known, are obtained. In this case, the model can simulate Otto, Joule-Brayton, Diesel, and Atkinson engines. In the short contact time limit, which corresponds to the Carnot cycle, the same efficiency bounds as those of Esposito et al. [Phys. Rev. Lett. 105, 150603 (2010)] are derived. In both cases, the thermal efficiency decreases as the ratio between the heat capacities of the working medium during the heating and cooling stages increases. This might provide guidance for designing real engines. © 2012 American Physical Society
Information models of software productivity - Limits on productivity growth
NASA Technical Reports Server (NTRS)
Tausworthe, Robert C.
1992-01-01
Research into generalized information-metric models of software process productivity establishes quantifiable behavior and theoretical bounds. The models establish a fundamental mathematical relationship between software productivity and the human capacity for information traffic, the software product yield (system size), information efficiency, and tool and process efficiencies. An upper bound is derived that quantifies average software productivity and the maximum rate at which it may grow. This bound reveals that ultimately, when tools, methodologies, and automated assistants have reached their maximum effective state, further improvement in productivity can only be achieved through increasing software reuse. The reuse advantage is shown not to increase faster than logarithmically in the number of reusable features available. The reuse bound is further shown to be somewhat dependent on the reuse policy: a general 'reuse everything' policy can lead to a somewhat slower productivity growth than a specialized reuse policy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
NASA Astrophysics Data System (ADS)
Akil, Mohamed
2017-05-01
Real-time processing is becoming more and more important in many image processing applications. Image segmentation is one of the most fundamental tasks in image analysis. As a consequence, many different approaches to image segmentation have been proposed. The watershed transform is a well-known image segmentation tool, and it is a very data-intensive task. To accelerate watershed algorithms and obtain real-time processing, parallel architectures and programming models for multicore computing have been developed. This paper surveys approaches for the parallel implementation of sequential watershed algorithms on multicore general-purpose CPUs: homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. In this paper, we compare various parallelizations of sequential watershed algorithms on shared-memory multicore architectures. We analyze the performance measurements of each parallel implementation and the impact of the different sources of overhead on the performance of the parallel implementations. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models. In particular, we compare OpenMP (an application programming interface for multiprocessing) with Pthreads (POSIX Threads) to illustrate the impact of each parallel programming model on the performance of the parallel implementations.
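As a rough illustration of the tile-based distribution strategy such surveys consider, the sketch below splits an image into row tiles and processes them with a thread pool. It uses Python's concurrent.futures rather than OpenMP or Pthreads, the per-tile function is a stand-in rather than a watershed implementation, and the cross-tile merge step that real parallel watershed algorithms require is omitted.

```python
# Hedged sketch of domain decomposition for shared-memory parallel image
# segmentation: tiles are processed independently and then reassembled.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def segment_tile(tile):
    # placeholder for a per-tile flooding/segmentation step (not a watershed)
    return (tile > tile.mean()).astype(np.uint8)

def parallel_segment(image, n_tiles=4, n_workers=4):
    tiles = np.array_split(image, n_tiles, axis=0)      # row-wise decomposition
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        labeled = list(pool.map(segment_tile, tiles))
    return np.concatenate(labeled, axis=0)              # border merge omitted

image = np.random.rand(512, 512)
print(parallel_segment(image).shape)
```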
Hierarchical Parallelism in Finite Difference Analysis of Heat Conduction
NASA Technical Reports Server (NTRS)
Padovan, Joseph; Krishna, Lala; Gute, Douglas
1997-01-01
Based on the concept of hierarchical parallelism, this research effort resulted in highly efficient parallel solution strategies for very large scale heat conduction problems. Overall, the method of hierarchical parallelism involves partitioning thermal models into several substructured levels wherein an optimal balance among the various associated bandwidths is achieved. The details are described in this report. The report is organized into two parts. Part 1 describes the parallel modelling methodology and the associated multilevel direct, iterative and mixed solution schemes. Part 2 establishes both the formal and computational properties of the scheme.
Turning Around along the Cosmic Web
NASA Astrophysics Data System (ADS)
Lee, Jounghun; Yepes, Gustavo
2016-12-01
A bound violation designates a case in which the turnaround radius of a bound object exceeds the upper limit imposed by the spherical collapse model based on the standard ΛCDM paradigm. Given that the turnaround radius of a bound object is a stochastic quantity and that the spherical model overly simplifies the true gravitational collapse, which actually proceeds anisotropically along the cosmic web, the rarity of the occurrence of a bound violation may depend on the web environment. Assuming a Planck cosmology, we numerically construct the bound-zone peculiar velocity profiles along the cosmic web (filaments and sheets) around the isolated groups with virial mass M_v ≥ 3 × 10¹³ h⁻¹ M_⊙ identified in the Small MultiDark Planck simulations and determine the radial distances at which their peculiar velocities equal the Hubble expansion speed as the turnaround radii of the groups. It is found that although the average turnaround radii of the isolated groups are well below the spherical bound limit on all mass scales, bound violations are not forbidden for individual groups, and the cosmic web has the effect of reducing the rarity of the occurrence of a bound violation. Explaining that the spherical bound limit on the turnaround radius in fact represents the threshold distance up to which the intervention of the external gravitational field in the bound-zone peculiar velocity profiles around the nonisolated groups stays negligible, we discuss the possibility of using the threshold distance scale to constrain locally the equation of state of dark energy.
The Incidence of Sixteenth Century Cosmic Models in Modern Texts
NASA Astrophysics Data System (ADS)
Maene, S. A.; Best, J. S.; Usher, P. D.
1999-12-01
In the sixteenth century, the bounded cosmological models of Copernicus (1543) and Tycho Brahe (1588), and the unbounded model of Thomas Digges (1576), vied with the bounded geocentric model of Ptolemy (c. 140 AD). The work of the philosopher Giordano Bruno in 1584 lent further support to the Digges model. Despite the eventual acceptance of the unbounded universe, analysis of over 100 modern introductory astronomy texts reveals that these early unbounded models are mentioned infrequently. The ratio of mentions of Digges' model to Copernicus' model has the surprisingly low value of R = 0.08. The philosophical speculation of Bruno receives mention more than twice as often (R = 0.17). The expectation that these early unbounded models warrant inclusion in astronomy texts is supported both by modern hindsight and by the literature of the time. In Shakespeare's "Hamlet" of c. 1601, Prince Hamlet suffers from two transformations. According to the cosmic allegorical model, one transformation changes the bounded geocentricism of Ptolemy to the bounded heliocentricism of Copernicus, while the other completes the change to Digges' model of the infinite universe of suns. This interpretation and the modern world view suggest that both transformations should receive equal mention and thus that the ratio R in introductory texts should be close to unity. This work was supported in part by the NASA West Virginia Space Grant Consortium.
Majarena, Ana C.; Santolaria, Jorge; Samper, David; Aguilar, Juan J.
2010-01-01
This paper presents an overview of the literature on kinematic and calibration models of parallel mechanisms, the influence of sensors in the mechanism accuracy and parallel mechanisms used as sensors. The most relevant classifications to obtain and solve kinematic models and to identify geometric and non-geometric parameters in the calibration of parallel robots are discussed, examining the advantages and disadvantages of each method, presenting new trends and identifying unsolved problems. This overview tries to answer and show the solutions developed by the most up-to-date research to some of the most frequent questions that appear in the modelling of a parallel mechanism, such as how to measure, the number of sensors and necessary configurations, the type and influence of errors or the number of necessary parameters. PMID:22163469
Paynter, Ian; Genest, Daniel; Peri, Francesco; Schaaf, Crystal
2018-04-06
Volumetric models with known biases are shown to provide bounds for the uncertainty in estimations of volume for ecologically interesting objects, observed with a terrestrial laser scanner (TLS) instrument. Bounding cuboids, three-dimensional convex hull polygons, voxels, the Outer Hull Model and Square Based Columns (SBCs) are considered for their ability to estimate the volume of temperate and tropical trees, as well as geomorphological features such as bluffs and saltmarsh creeks. For temperate trees, supplementary geometric models are evaluated for their ability to bound the uncertainty in cylinder-based reconstructions, finding that coarser volumetric methods do not currently constrain volume meaningfully, but may be helpful with further refinement, or in hybridized models. Three-dimensional convex hull polygons consistently overestimate object volume, and SBCs consistently underestimate volume. Voxel estimations vary in their bias, due to the point density of the TLS data, and occlusion, particularly in trees. The response of the models to parametrization is analysed, observing unexpected trends in the SBC estimates for the drumlin dataset. Establishing that this result is due to the resolution of the TLS observations being insufficient to support the resolution of the geometric model, it is suggested that geometric models with predictable outcomes can also highlight data quality issues when they produce illogical results.
Bounding uncertainty in volumetric geometric models for terrestrial lidar observations of ecosystems
Genest, Daniel; Peri, Francesco; Schaaf, Crystal
2018-01-01
Volumetric models with known biases are shown to provide bounds for the uncertainty in estimations of volume for ecologically interesting objects, observed with a terrestrial laser scanner (TLS) instrument. Bounding cuboids, three-dimensional convex hull polygons, voxels, the Outer Hull Model and Square Based Columns (SBCs) are considered for their ability to estimate the volume of temperate and tropical trees, as well as geomorphological features such as bluffs and saltmarsh creeks. For temperate trees, supplementary geometric models are evaluated for their ability to bound the uncertainty in cylinder-based reconstructions, finding that coarser volumetric methods do not currently constrain volume meaningfully, but may be helpful with further refinement, or in hybridized models. Three-dimensional convex hull polygons consistently overestimate object volume, and SBCs consistently underestimate volume. Voxel estimations vary in their bias, due to the point density of the TLS data, and occlusion, particularly in trees. The response of the models to parametrization is analysed, observing unexpected trends in the SBC estimates for the drumlin dataset. Establishing that this result is due to the resolution of the TLS observations being insufficient to support the resolution of the geometric model, it is suggested that geometric models with predictable outcomes can also highlight data quality issues when they produce illogical results. PMID:29503722
Retargeting of existing FORTRAN program and development of parallel compilers
NASA Technical Reports Server (NTRS)
Agrawal, Dharma P.
1988-01-01
The software models used in implementing the parallelizing compiler for the B-HIVE multiprocessor system are described. The various models and strategies used in the compiler development are: a flexible granularity model, which allows a compromise between two extreme granularity models; a communication model, which is capable of precisely describing interprocessor communication timings and patterns; a loop type detection strategy, which identifies different types of loops; a critical path with coloring scheme, which is a versatile scheduling strategy for any multicomputer with some associated communication costs; and a loop allocation strategy, which realizes optimum overlapped operation between computation and communication in the system. Using these models, several sample routines of the AIR3D package are examined and tested. It may be noted that the automatically generated codes are highly parallelized to provide the maximum degree of parallelism, obtaining speedups for systems of up to 28 to 32 processors. A comparison of parallel codes for both the existing and the proposed communication model is performed and the corresponding expected speedup factors are obtained. The experimentation shows that the B-HIVE compiler produces more efficient codes than existing techniques. Work is progressing well in completing the final phase of the compiler. Numerous enhancements are needed to improve the capabilities of the parallelizing compiler.
The interplay of intrinsic and extrinsic bounded noises in biomolecular networks.
Caravagna, Giulio; Mauri, Giancarlo; d'Onofrio, Alberto
2013-01-01
After being considered a nuisance to be filtered out, it has recently become clear that biochemical noise plays a complex, often fully functional, role in a biomolecular network. The influence of intrinsic and extrinsic noises on biomolecular networks has been intensively investigated in the last ten years, though contributions on the co-presence of both are sparse. Extrinsic noise is usually modeled as an unbounded white or colored Gaussian stochastic process, even though realistic stochastic perturbations are clearly bounded. In this paper we consider Gillespie-like stochastic models of nonlinear networks, i.e. the intrinsic noise, where the model jump rates are affected by colored bounded extrinsic noises synthesized by a suitable biochemical state-dependent Langevin system. These systems are described by a master equation, and a simulation algorithm to analyze them is derived. This new modeling paradigm should enlarge the class of systems amenable to modeling. We investigate the influence of both the amplitude and the autocorrelation time of an extrinsic Sine-Wiener noise on: (i) the Michaelis-Menten approximation of noisy enzymatic reactions, which we show to be applicable also in the co-presence of both intrinsic and extrinsic noise, (ii) a model of an enzymatic futile cycle and (iii) a genetic toggle switch. In (ii) and (iii) we show that the presence of a bounded extrinsic noise induces qualitative modifications in the probability densities of the involved chemicals, where new modes emerge, thus suggesting a possible functional role of bounded noises.
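The bounded extrinsic noise referred to above is, in this line of work, commonly taken to be the Sine-Wiener process ζ(t) = B sin(√(2/τ) W(t)), which stays within [-B, B] and whose autocorrelation time is set by τ. The sketch below generates such a path; parameter values are illustrative, and the coupling of the path to Gillespie jump rates is not shown.

```python
# Hedged sketch of a bounded Sine-Wiener noise path:
#   zeta(t) = B * sin(sqrt(2/tau) * W(t)), bounded in [-B, B].
import numpy as np

def sine_wiener(T=10.0, dt=1e-3, B=0.5, tau=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    w = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n))   # Wiener path W(t)
    return B * np.sin(np.sqrt(2.0 / tau) * w)              # bounded noise path

zeta = sine_wiener()
print(zeta.min(), zeta.max())   # always within [-B, B] by construction
```

In a Gillespie-like scheme, a path of this kind would modulate one or more reaction propensities between stochastic jump events.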
ERIC Educational Resources Information Center
Kessler, Lawrence M.
2013-01-01
In this paper I propose Bayesian estimation of a nonlinear panel data model with a fractional dependent variable (bounded between 0 and 1). Specifically, I estimate a panel data fractional probit model which takes into account the bounded nature of the fractional response variable. I outline estimation under the assumption of strict exogeneity as…
Research on Multi - Person Parallel Modeling Method Based on Integrated Model Persistent Storage
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying
2018-03-01
This paper studies a multi-person parallel modeling method based on persistent storage of an integrated model. The integrated model refers to a set of MDDT modeling graphics systems, which can carry out multi-angle, multi-level and multi-stage description of general aerospace embedded software. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model is the object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-person collaboration, separation of roles, and even real-time remote synchronized modeling.
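A minimal sketch of the object-model-to-binary-stream round trip described above is shown below, with Python's pickle standing in for the paper's storage format and a hypothetical ModelElement class standing in for the MDDT object model.

```python
# Hedged sketch of persistent storage: in-memory object model -> binary stream
# (storage model) and back. Class and field names are illustrative assumptions.
import pickle
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelElement:
    name: str
    kind: str                       # e.g. "task", "interface", "state"
    children: List["ModelElement"] = field(default_factory=list)

root = ModelElement("flight_sw", "system",
                    [ModelElement("scheduler", "task"),
                     ModelElement("telemetry", "interface")])

blob = pickle.dumps(root)           # object model -> binary stream
restored = pickle.loads(blob)       # binary stream -> object model
assert restored == root
print(len(blob), "bytes")
```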
Structured Uncertainty Bound Determination From Data for Control and Performance Validation
NASA Technical Reports Server (NTRS)
Lim, Kyong B.
2003-01-01
This report attempts to document the broad scope of issues that must be satisfactorily resolved before one can expect to methodically obtain, with a reasonable confidence, a near-optimal robust closed loop performance in physical applications. These include elements of signal processing, noise identification, system identification, model validation, and uncertainty modeling. Based on a recently developed methodology involving a parameterization of all model validating uncertainty sets for a given linear fractional transformation (LFT) structure and noise allowance, a new software, Uncertainty Bound Identification (UBID) toolbox, which conveniently executes model validation tests and determine uncertainty bounds from data, has been designed and is currently available. This toolbox also serves to benchmark the current state-of-the-art in uncertainty bound determination and in turn facilitate benchmarking of robust control technology. To help clarify the methodology and use of the new software, two tutorial examples are provided. The first involves the uncertainty characterization of a flexible structure dynamics, and the second example involves a closed loop performance validation of a ducted fan based on an uncertainty bound from data. These examples, along with other simulation and experimental results, also help describe the many factors and assumptions that determine the degree of success in applying robust control theory to practical problems.
Equivalence principle and bound kinetic energy.
Hohensee, Michael A; Müller, Holger; Wiringa, R B
2013-10-11
We consider the role of the internal kinetic energy of bound systems of matter in tests of the Einstein equivalence principle. Using the gravitational sector of the standard model extension, we show that stringent limits on equivalence principle violations in antimatter can be indirectly obtained from tests using bound systems of normal matter. We estimate the bound kinetic energy of nucleons in a range of light atomic species using Green's function Monte Carlo calculations, and for heavier species using a Woods-Saxon model. We survey the sensitivities of existing and planned experimental tests of the equivalence principle, and report new constraints at the level of between a few parts in 10⁶ and parts in 10⁸ on violations of the equivalence principle for matter and antimatter.
The Quantum Measurement Problem and Physical Reality: A Computation Theoretic Perspective
NASA Astrophysics Data System (ADS)
Srikanth, R.
2006-11-01
Is the universe computable? If yes, is it computationally a polynomial place? In standard quantum mechanics, which permits infinite parallelism and the infinitely precise specification of states, a negative answer to both questions is not ruled out. On the other hand, empirical evidence suggests that NP-complete problems are intractable in the physical world. Likewise, computational problems known to be algorithmically uncomputable do not seem to be computable by any physical means. We suggest that this close correspondence between the efficiency and power of abstract algorithms on the one hand, and physical computers on the other, finds a natural explanation if the universe is assumed to be algorithmic; that is, that physical reality is the product of discrete sub-physical information processing equivalent to the actions of a probabilistic Turing machine. This assumption can be reconciled with the observed exponentiality of quantum systems at microscopic scales, and the consequent possibility of implementing Shor's quantum polynomial time algorithm at that scale, provided the degree of superposition is intrinsically, finitely upper-bounded. If this bound is associated with the quantum-classical divide (the Heisenberg cut), a natural resolution to the quantum measurement problem arises. From this viewpoint, macroscopic classicality is an evidence that the universe is in BPP, and both questions raised above receive affirmative answers. A recently proposed computational model of quantum measurement, which relates the Heisenberg cut to the discreteness of Hilbert space, is briefly discussed. A connection to quantum gravity is noted. Our results are compatible with the philosophy that mathematical truths are independent of the laws of physics.
Huynh, Duong L; Tripathy, Srimant P; Bedell, Harold E; Ögmen, Haluk
2015-01-01
Human memory is content addressable-i.e., contents of the memory can be accessed using partial information about the bound features of a stored item. In this study, we used a cross-feature cuing technique to examine how the human visual system encodes, binds, and retains information about multiple stimulus features within a set of moving objects. We sought to characterize the roles of three different features (position, color, and direction of motion, the latter two of which are processed preferentially within the ventral and dorsal visual streams, respectively) in the construction and maintenance of object representations. We investigated the extent to which these features are bound together across the following processing stages: during stimulus encoding, sensory (iconic) memory, and visual short-term memory. Whereas all features examined here can serve as cues for addressing content, their effectiveness shows asymmetries and varies according to cue-report pairings and the stage of information processing and storage. Position-based indexing theories predict that position should be more effective as a cue compared to other features. While we found a privileged role for position as a cue at the stimulus-encoding stage, position was not the privileged cue at the sensory and visual short-term memory stages. Instead, the pattern that emerged from our findings is one that mirrors the parallel processing streams in the visual system. This stream-specific binding and cuing effectiveness manifests itself in all three stages of information processing examined here. Finally, we find that the Leaky Flask model proposed in our previous study is applicable to all three features.
Fully Parallel MHD Stability Analysis Tool
NASA Astrophysics Data System (ADS)
Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang
2014-10-01
Progress on the full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and is widely used by the fusion community. The parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for simulating MHD instabilities with low, intermediate and high toroidal mode numbers within both the fluid and kinetic plasma models already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse-iteration algorithm implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is achieved by repeating the steps of the present MARS algorithm using parallel libraries and procedures. Initial results of the code parallelization will be reported. Work is supported by the U.S. DOE SBIR program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, John R.; Brooks, Dusty Marie
In pressurized water reactors, the prevention, detection, and repair of cracks within dissimilar metal welds is essential to ensure proper plant functionality and safety. Weld residual stresses, which are difficult to model and cannot be directly measured, contribute to the formation and growth of cracks due to primary water stress corrosion cracking. Additionally, the uncertainty in weld residual stress measurements and modeling predictions is not well understood, further complicating the prediction of crack evolution. The purpose of this document is to develop methodology to quantify the uncertainty associated with weld residual stress that can be applied to modeling predictions and experimental measurements. Ultimately, the results can be used to assess the current state of uncertainty and to build confidence in both modeling and experimental procedures. The methodology consists of statistically modeling the variation in the weld residual stress profiles using functional data analysis techniques. Uncertainty is quantified using statistical bounds (e.g. confidence and tolerance bounds) constructed with a semi-parametric bootstrap procedure. Such bounds describe the range in which quantities of interest, such as means, are expected to lie as evidenced by the data. The methodology is extended to provide direct comparisons between experimental measurements and modeling predictions by constructing statistical confidence bounds for the average difference between the two quantities. The statistical bounds on the average difference can be used to assess the level of agreement between measurements and predictions. The methodology is applied to experimental measurements of residual stress obtained using two strain relief measurement methods and predictions from seven finite element models developed by different organizations during a round robin study.
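A stripped-down version of the kind of bootstrap bound described above might look as follows: a pointwise 95% confidence band for the mean residual-stress profile obtained by resampling whole measured profiles. The profiles here are synthetic, and the paper's semi-parametric procedure and tolerance bounds are more involved than this sketch.

```python
# Hedged sketch of a bootstrap confidence band for a mean stress profile,
# built from synthetic through-wall profiles standing in for real weld data.
import numpy as np

rng = np.random.default_rng(1)
depth = np.linspace(0.0, 1.0, 50)                     # normalized through-wall depth
profiles = np.array([300*np.cos(3*depth) + rng.normal(0, 40, depth.size)
                     for _ in range(12)])             # 12 synthetic profiles (MPa)

def bootstrap_mean_band(samples, n_boot=2000, alpha=0.05, rng=rng):
    n = samples.shape[0]
    means = np.array([samples[rng.integers(0, n, n)].mean(axis=0)
                      for _ in range(n_boot)])        # resample whole profiles
    lo, hi = np.percentile(means, [100*alpha/2, 100*(1-alpha/2)], axis=0)
    return lo, hi

lo, hi = bootstrap_mean_band(profiles)
print(float(lo[0]), float(hi[0]))                     # band at the surface point
```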
A Model-Free No-arbitrage Price Bound for Variance Options
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonnans, J. Frederic, E-mail: frederic.bonnans@inria.fr; Tan Xiaolu, E-mail: xiaolu.tan@polytechnique.edu
2013-08-01
We suggest a numerical approximation for an optimization problem, motivated by its applications in finance to find the model-free no-arbitrage bound of variance options given the marginal distributions of the underlying asset. A first approximation restricts the computation to a bounded domain. Then we propose a gradient projection algorithm together with the finite difference scheme to solve the optimization problem. We prove the general convergence, and derive some convergence rate estimates. Finally, we give some numerical examples to test the efficiency of the algorithm.
Bounded Parametric Model Checking for Elementary Net Systems
NASA Astrophysics Data System (ADS)
Knapik, Michał; Szreter, Maciej; Penczek, Wojciech
Bounded Model Checking (BMC) is an efficient verification method for reactive systems. BMC has been applied so far to verification of properties expressed in (timed) modal logics, but never to their parametric extensions. In this paper we show, for the first time, that BMC can be extended to PRTECTL, a parametric extension of the existential version of CTL. To this end we define a bounded semantics and a translation from PRTECTL to SAT. The implementation of the algorithm for Elementary Net Systems is presented, together with some experimental results.
Coexistence of bounded and unbounded motions in a bouncing ball model
NASA Astrophysics Data System (ADS)
Marò, Stefano
2013-05-01
We consider the model describing the vertical motion of a ball falling with constant acceleration on a wall and elastically reflected. The wall is supposed to move in the vertical direction according to a given periodic function f. We apply the Aubry-Mather theory to the generating function in order to prove the existence of bounded motions with prescribed mean time between the bounces. As the existence of unbounded motions is known, it is possible to find a class of functions f that allow both bounded and unbounded motions.
Re-derived overclosure bound for the inert doublet model
NASA Astrophysics Data System (ADS)
Biondini, S.; Laine, M.
2017-08-01
We apply a formalism accounting for thermal effects (such as modified Sommerfeld effect; Salpeter correction; decohering scatterings; dissociation of bound states), to one of the simplest WIMP-like dark matter models, associated with an "inert" Higgs doublet. A broad temperature range T ∼ M/20 ... M/10⁴ is considered, stressing the importance and less-understood nature of late annihilation stages. Even though only weak interactions play a role, we find that resummed real and virtual corrections increase the tree-level overclosure bound by 1 ... 18%, depending on quartic couplings and mass splittings.
Upper and lower bounds for semi-Markov reliability models of reconfigurable systems
NASA Technical Reports Server (NTRS)
White, A. L.
1984-01-01
This paper determines the information required about system recovery to compute the reliability of a class of reconfigurable systems. Upper and lower bounds are derived for these systems. The class consists of those systems that satisfy five assumptions: the components fail independently at a low constant rate, fault occurrence and system reconfiguration are independent processes, the reliability model is semi-Markov, the recovery functions which describe system configuration have small means and variances, and the system is well designed. The bounds are easy to compute, and examples are included.
Flight control application of new stability robustness bounds for linear uncertain systems
NASA Technical Reports Server (NTRS)
Yedavalli, Rama K.
1993-01-01
This paper addresses the issue of obtaining bounds on the real parameter perturbations of a linear state-space model for robust stability. Based on Kronecker algebra, new, easily computable sufficient bounds are derived that are much less conservative than the existing bounds since the technique is meant for only real parameter perturbations (in contrast to specializing complex variation case to real parameter case). The proposed theory is illustrated with application to several flight control examples.
Curvature bound from gravitational catalysis
NASA Astrophysics Data System (ADS)
Gies, Holger; Martini, Riccardo
2018-04-01
We determine bounds on the curvature of local patches of spacetime from the requirement of intact long-range chiral symmetry. The bounds arise from a scale-dependent analysis of gravitational catalysis and its influence on the effective potential for the chiral order parameter, as induced by fermionic fluctuations on a curved spacetime with local hyperbolic properties. The bound is expressed in terms of the local curvature scalar measured in units of a gauge-invariant coarse-graining scale. We argue that any effective field theory of quantum gravity obeying this curvature bound is safe from chiral symmetry breaking through gravitational catalysis and thus compatible with the simultaneous existence of chiral fermions in the low-energy spectrum. With increasing number of dimensions, the curvature bound in terms of the hyperbolic scale parameter becomes stronger. Applying the curvature bound to the asymptotic safety scenario for quantum gravity in four spacetime dimensions translates into bounds on the matter content of particle physics models.
Optimization of sparse matrix-vector multiplication on emerging multicore platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel; Oliker, Leonid; Vuduc, Richard
2007-01-01
We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD dual-core and Intel quad-core designs, the heterogeneous STI Cell, as well as the first scientific study of the highly multithreaded Sun Niagara2. We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural tradeoffs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.
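For reference, the kernel being optimized is simple to state; the sketch below is a plain compressed sparse row (CSR) SpMV in Python that makes the irregular, memory-bound access pattern explicit. It is a reference formulation only, not one of the tuned implementations evaluated in the study.

```python
# Hedged sketch of y = A @ x with A in CSR form (values, col_idx, row_ptr).
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):                      # rows are independent: the
        s = 0.0                                  # natural unit of parallel work
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_idx[k]]       # indirect, irregular access into x
        y[i] = s
    return y

# 3x3 example matrix [[4,0,1],[0,2,0],[3,0,5]]
values  = np.array([4.0, 1.0, 2.0, 3.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 2.0, 3.0])
print(spmv_csr(values, col_idx, row_ptr, x))     # -> [ 7.  4. 18.]
```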
Single product lot-sizing on unrelated parallel machines with non-decreasing processing times
NASA Astrophysics Data System (ADS)
Eremeev, A.; Kovalyov, M.; Kuznetsov, P.
2018-01-01
We consider a problem in which at least a given quantity of a single product has to be partitioned into lots, and lots have to be assigned to unrelated parallel machines for processing. In one version of the problem, the maximum machine completion time should be minimized, in another version of the problem, the sum of machine completion times is to be minimized. Machine-dependent lower and upper bounds on the lot size are given. The product is either assumed to be continuously divisible or discrete. The processing time of each machine is defined by an increasing function of the lot volume, given as an oracle. Setup times and costs are assumed to be negligibly small, and therefore, they are not considered. We derive optimal polynomial time algorithms for several special cases of the problem. An NP-hard case is shown to admit a fully polynomial time approximation scheme. An application of the problem in energy efficient processors scheduling is considered.
Salinity transfer in double diffusive convection bounded by two parallel plates
NASA Astrophysics Data System (ADS)
Yang, Yantao; van der Poel, Erwin P.; Ostilla-Monico, Rodolfo; Sun, Chao; Verzicco, Roberto; Grossmann, Siegfried; Lohse, Detlef
2014-11-01
Double diffusive convection (DDC) is convection in which the fluid density is affected by two different components. In this study we numerically investigate DDC between two parallel plates with no-slip boundary conditions. The top plate has higher salinity and temperature than the lower one; thus the flow is driven by the salinity difference and stabilised by the temperature difference. Our simulations are compared with the experiments by Hage and Tilgner (Phys. Fluids 22, 076603 (2010)) for several sets of parameters. Reasonable agreement is achieved for the salinity flux and its dependence on the salinity Rayleigh number. For all parameters considered, salt fingers emerge and extend through the entire domain height. The thermal Rayleigh number shows minor influence on the salinity flux, although it does affect the Reynolds number. We apply the Grossmann-Lohse theory for Rayleigh-Bénard flow to the current problem without introducing any new coefficients. The theory successfully predicts the salinity flux with respect to the scaling for both the numerical and experimental results.
NASA Astrophysics Data System (ADS)
Yue, Chao; An, Xin; Bortnik, Jacob; Ma, Qianli; Li, Wen; Thorne, Richard M.; Reeves, Geoffrey D.; Gkioulidou, Matina; Mitchell, Donald G.; Kletzing, Craig A.
2016-08-01
Plasma kinetic theory predicts that a sufficiently anisotropic electron distribution will excite whistler mode waves, which in turn relax the electron distribution in such a way as to create an upper bound on the relaxed electron anisotropy. Here, using whistler mode chorus wave and plasma measurements by the Van Allen Probes, we confirm that the electron distributions are well constrained by this instability to a marginally stable state in the whistler mode chorus wave generation region. Lower band chorus waves are organized by the electron β∥e into two distinct groups: (i) relatively large-amplitude, quasi-parallel waves with β∥e ≳ 0.025 and (ii) relatively small-amplitude, oblique waves with β∥e ≲ 0.025. The upper band chorus waves also have enhanced amplitudes close to the instability threshold, with large-amplitude waves being quasi-parallel and small-amplitude waves being oblique. These results provide important insight for studying the excitation of whistler mode chorus waves.
Parallel Distractor Rejection as a Binding Mechanism in Search
Dent, Kevin; Allen, Harriet A.; Braithwaite, Jason J.; Humphreys, Glyn W.
2012-01-01
The relatively common experimental visual search task of finding a red X amongst red O’s and green X’s (conjunction search) presents the visual system with a binding problem. Illusory conjunctions (ICs) of features across objects must be avoided and only features present in the same object bound together. Correct binding into unique objects by the visual system may be promoted, and ICs minimized, by inhibiting the locations of distractors possessing non-target features (e.g., Treisman and Sato, 1990). Such parallel rejection of interfering distractors leaves the target as the only item competing for selection; thus solving the binding problem. In the present article we explore the theoretical and empirical basis of this process of active distractor inhibition in search. Specific experiments that provide strong evidence for a process of active distractor inhibition in search are highlighted. In the final part of the article we consider how distractor inhibition, as defined here, may be realized at a neurophysiological level (Treisman and Sato, 1990). PMID:22908002
Investigation of wall-bounded turbulence over sparsely distributed roughness
NASA Astrophysics Data System (ADS)
Placidi, Marco; Ganapathisubramani, Bharath
2011-11-01
The effects of sparsely distributed roughness elements on the structure of a turbulent boundary layer are examined by performing a series of Particle Image Velocimetry (PIV) experiments in a wind tunnel. From the literature, the best way to characterise a rough wall, especially one where the density of roughness elements is sparse, is unclear. In this study, rough surfaces consisting of sparsely and uniformly distributed LEGO® blocks are used. Five different patterns are adopted in order to examine the effects of frontal solidity (λf, frontal area of the roughness elements per unit wall-parallel area), plan solidity (λp, plan area of roughness elements per unit wall-parallel area) and the geometry of the roughness element (square and cylindrical elements) on the turbulence structure. The Karman number, Reτ, has been matched at a value of approximately 2300 in order to compare across the different cases. In the talk, we will present detailed analysis of mean and rms velocity profiles, Reynolds stresses and quadrant decomposition.
Partitioning and packing mathematical simulation models for calculation on parallel computers
NASA Technical Reports Server (NTRS)
Arpasi, D. J.; Milner, E. J.
1986-01-01
The development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system is described. Degrees of parallelism (i.e., coupling between the equations) and their impact on parallel processing are discussed. The problem of identifying computational parallelism within sets of closely coupled equations that require the exchange of current values of variables is described. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. An algorithm which packs the equations into a minimum number of processors is also described. The results of the packing algorithm when applied to a turbojet engine model are presented in terms of processor utilization.
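The packing step lends itself to a bin-packing style formulation; the sketch below uses a first-fit decreasing heuristic to assign per-equation evaluation costs to processors under a per-step time budget. This heuristic and the cost numbers are illustrative assumptions, not the packing algorithm described in the report.

```python
# Hedged illustration of packing equation groups onto as few processors as
# possible without exceeding a per-processor time budget (first-fit decreasing).
def pack_equations(costs, budget):
    processors = []                                   # each entry: list of costs
    for c in sorted(costs, reverse=True):             # largest tasks first
        for p in processors:
            if sum(p) + c <= budget:
                p.append(c)
                break
        else:
            processors.append([c])                    # open a new processor
    return processors

# Hypothetical per-equation evaluation costs (microseconds) and frame budget.
costs = [40, 35, 30, 22, 18, 15, 10, 8, 5]
for i, p in enumerate(pack_equations(costs, budget=60)):
    print(f"processor {i}: {p} (load {sum(p)})")
```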
Three holes bound to a double acceptor - Be(+) in germanium
NASA Technical Reports Server (NTRS)
Haller, E. E.; Mcmurray, R. E., Jr.; Falicov, L. M.; Haegel, N. M.; Hansen, W. L.
1983-01-01
A double acceptor binding three holes has been observed for the first time with photoconductive far-infrared spectroscopy in beryllium-doped germanium single crystals. This new center, Be(+), has a hole binding energy of about 5 meV and is only present when free holes are generated by ionization of either neutral shallow acceptors or neutral Be double acceptors. The Be(+) center thermally ionizes above 4 K. It disappears at a uniaxial stress higher than about a billion dyn/sq cm parallel to (111) as a result of the lifting of the valence-band degeneracy.
OpenMP parallelization of a gridded SWAT (SWATG)
NASA Astrophysics Data System (ADS)
Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin
2017-12-01
Large-scale, long-term, high-spatial-resolution simulation is a common issue in environmental modeling. A Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates a grid modeling scheme with different spatial representations also presents such problems. This time-consuming problem limits applications of very-high-resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling a roughly 2000 km² watershed with a 1 CPU, 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.
Measures and limits of models of fixation selection.
Wilming, Niklas; Betz, Torsten; Kietzmann, Tim C; König, Peter
2011-01-01
Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research: First, we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However, the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure of probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound of these measures, based on image-independent properties of fixation data and on between-subject consistency, respectively. Based on these bounds it is possible to give a reference frame to judge the predictive power of a model of fixation selection. We provide open-source python code to compute the reference frame. Third, we show that the upper, between-subject consistency bound holds only for models that predict averages of subject populations. Departing from this, we show that incorporating subject-specific viewing behavior can generate predictions which surpass that upper bound. Taken together, these findings lay out the information required for a well-founded judgment of the quality of any model of fixation selection and should therefore be reported when a new model is introduced.
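The two measures singled out above can be computed directly from fixation data and a model's saliency or density map. The hedged Python sketch below assumes a 2D saliency array and integer fixation coordinates; the small-sample correction for the KL-divergence discussed in the paper is deliberately omitted, and the binning choices are arbitrary.

import numpy as np
from scipy.stats import entropy            # entropy(p, q) computes KL(p || q)
from sklearn.metrics import roc_auc_score

def auc_measure(saliency, fix_rows, fix_cols, n_nonfix=10_000, seed=0):
    # AUC: how well the map separates fixated pixels from random non-fixated ones.
    rng = np.random.default_rng(seed)
    pos = saliency[fix_rows, fix_cols]
    neg = saliency[rng.integers(0, saliency.shape[0], n_nonfix),
                   rng.integers(0, saliency.shape[1], n_nonfix)]
    labels = np.r_[np.ones(pos.size), np.zeros(neg.size)]
    return roc_auc_score(labels, np.r_[pos, neg])

def kl_measure(model_density, fix_rows, fix_cols, bins=32):
    # KL-divergence between the empirical fixation histogram and the model
    # density, coarse-grained to the same bins (height/width must divide by bins).
    h, w = model_density.shape
    emp, _, _ = np.histogram2d(fix_rows, fix_cols, bins=bins, range=[[0, h], [0, w]])
    mod = model_density.reshape(bins, h // bins, bins, w // bins).sum(axis=(1, 3))
    emp = emp.ravel() + 1e-12
    mod = mod.ravel() + 1e-12
    return entropy(emp / emp.sum(), mod / mod.sum())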
NASA Astrophysics Data System (ADS)
Alder, S.; Smith, S. A. F.; Scott, J. M.
2016-10-01
The >200 km long Moonlight Fault Zone (MFZ) in southern New Zealand was an Oligocene basin-bounding normal fault zone that reactivated in the Miocene as a high-angle reverse fault (present dip angle 65°-75°). Regional exhumation in the last c. 5 Ma has resulted in deep exposures of the MFZ that present an opportunity to study the structure and deformation processes that were active in a basin-scale reverse fault at basement depths. Syn-rift sediments are preserved only as thin fault-bound slivers. The hanging wall and footwall of the MFZ are mainly greenschist facies quartzofeldspathic schists that have a steeply-dipping (55°-75°) foliation subparallel to the main fault trace. In more fissile lithologies (e.g. greyschists), hanging-wall deformation occurred by the development of foliation-parallel breccia layers up to a few centimetres thick. Greyschists in the footwall deformed mainly by folding and formation of tabular, foliation-parallel breccias up to 1 m wide. Where the hanging-wall contains more competent lithologies (e.g. greenschist facies metabasite) it is laced with networks of pseudotachylyte that formed parallel to the host rock foliation in a damage zone extending up to 500 m from the main fault trace. The fault core contains an up to 20 m thick sequence of breccias, cataclasites and foliated cataclasites preserving evidence for the progressive development of interconnected networks of (partly authigenic) chlorite and muscovite. Deformation in the fault core occurred by cataclasis of quartz and albite, frictional sliding of chlorite and muscovite grains, and dissolution-precipitation. Combined with published friction and permeability data, our observations suggest that: 1) host rock lithology and anisotropy were the primary controls on the structure of the MFZ at basement depths and 2) high-angle reverse slip was facilitated by the low frictional strength of fault core materials. Restriction of pseudotachylyte networks to the hanging-wall of the MFZ further suggests that the wide, phyllosilicate-rich fault core acted as an efficient hydrological barrier, resulting in a relatively hydrous footwall and fault core but a relatively dry hanging-wall.
Integrated Task and Data Parallel Programming
NASA Technical Reports Server (NTRS)
Grimshaw, A. S.
1998-01-01
This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object-oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments In February I presented a paper at Frontiers 1995 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program. Additional 1995 Activities During the fall I collaborated with Andrew Grimshaw and Adam Ferrari to write a book chapter which will be included in Parallel Processing in C++ edited by Gregory Wilson. I also finished two courses, Compilers and Advanced Compilers, in 1995. These courses complete my class requirements at the University of Virginia. I have only my dissertation research and defense to complete.
Integrated Task And Data Parallel Programming: Language Design
NASA Technical Reports Server (NTRS)
Grimshaw, Andrew S.; West, Emily A.
1998-01-01
This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object-oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments In February I presented a paper at Frontiers '95 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program. Additional 1995 Activities During the fall I collaborated with Andrew Grimshaw and Adam Ferrari to write a book chapter which will be included in Parallel Processing in C++ edited by Gregory Wilson. I also finished two courses, Compilers and Advanced Compilers, in 1995. These courses complete my class requirements at the University of Virginia. I have only my dissertation research and defense to complete.
The Research of the Parallel Computing Development from the Angle of Cloud Computing
NASA Astrophysics Data System (ADS)
Peng, Zhensheng; Gong, Qingge; Duan, Yanyu; Wang, Yun
2017-10-01
Cloud computing is the development of parallel computing, distributed computing and grid computing. The development of cloud computing makes parallel computing come into people's lives. Firstly, this paper expounds the concept of cloud computing and introduces several traditional parallel programming models. Secondly, it analyzes and studies the principles, advantages and disadvantages of OpenMP, MPI and MapReduce, respectively. Finally, it compares the MPI and OpenMP models with MapReduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing.
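To make the contrast concrete, the hedged sketch below expresses the same reduction (a sum of squares) in two of the styles discussed: a shared-memory parallel loop (the OpenMP analogue, here approximated with a Python process pool) and an explicit map phase followed by a reduce phase (the MapReduce style). An MPI version would instead exchange partial sums between ranks with explicit messages; that is not shown.

from multiprocessing import Pool
from functools import reduce

def square(x):                      # the "map" task
    return x * x

if __name__ == "__main__":
    data = list(range(100_000))

    # Shared-memory style: analogue of an OpenMP parallel-for with a reduction.
    with Pool() as pool:
        total_loop = sum(pool.map(square, data, chunksize=5_000))

    # MapReduce style: a map phase and a reduce phase, which frameworks such
    # as Hadoop distribute across many nodes.
    total_mr = reduce(lambda a, b: a + b, map(square, data))

    assert total_loop == total_mr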
A simple method for assessing occupational exposure via the one-way random effects model.
Krishnamoorthy, K; Mathew, Thomas; Peng, Jie
2016-11-01
A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
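For reference, the generic MOVER recipe combines separate confidence limits for two parameters into limits for their sum; the paper applies this recipe to functions of the between- and within-worker variance components, whose exact forms are not reproduced here. A minimal LaTeX sketch of the generic combination, with point estimates \hat\theta_i and individual limits (l_i, u_i):

% MOVER limits for \theta = \theta_1 + \theta_2
\begin{align}
  L &= \hat\theta_1 + \hat\theta_2
       - \sqrt{(\hat\theta_1 - l_1)^2 + (\hat\theta_2 - l_2)^2},\\
  U &= \hat\theta_1 + \hat\theta_2
       + \sqrt{(u_1 - \hat\theta_1)^2 + (u_2 - \hat\theta_2)^2}.
\end{align}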
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crichigno, Marcos P.; Shuryak, Edward; Flambaum, Victor V.
2010-10-01
We discuss a new family of multiquanta bound states in the standard model which exist due to the mutual Higgs-based attraction of the heaviest members of the standard model, namely, the gauge quanta W, Z, and (anti)top quarks, t, t̄. We use a self-consistent mean-field approximation, up to a rather large particle number N. In this paper we do not focus on weakly bound, nonrelativistic bound states, but rather on 'bags' in which the Higgs vacuum expectation value is significantly modified or depleted. The minimal number N above which such states appear strongly depends on the ratio of the Higgs mass to the masses of W, Z, t, t̄: for a light Higgs mass, m_H ≈ 50 GeV, bound states start from N ≈ O(10), but for a "realistic" Higgs mass, m_H ≈ 100 GeV, one finds metastable/bound W, Z bags only for N ≈ O(1000). We also found that in the latter case pure top bags disappear for all N, although top quarks can still be well bound to the W bags. Anticipating the cosmological applications (discussed in the following Article [Phys. Rev. D 82, 073019]) of these bags as 'doorway states' for baryosynthesis, we also consider here the existence of such metastable bags at finite temperatures, when standard-model parameters such as the Higgs, gauge, and top masses are significantly modified.
Describing, using 'recognition cones'. [parallel-series model with English-like computer program]
NASA Technical Reports Server (NTRS)
Uhr, L.
1973-01-01
A parallel-serial 'recognition cone' model is examined, taking into account the model's ability to describe scenes of objects. An actual program is presented in an English-like language. The concept of a 'description' is discussed together with possible types of descriptive information. Questions regarding the level and the variety of detail are considered along with approaches for improving the serial representations of parallel systems.
Parallel distributed, reciprocal Monte Carlo radiation in coupled, large eddy combustion simulations
NASA Astrophysics Data System (ADS)
Hunsaker, Isaac L.
Radiation is the dominant mode of heat transfer in high temperature combustion environments. Radiative heat transfer affects the gas and particle phases, including all the associated combustion chemistry. The radiative properties are in turn affected by the turbulent flow field. This bi-directional coupling of radiation-turbulence interactions poses a major challenge in creating parallel-capable, high-fidelity combustion simulations. In this work, a new model was developed in which reciprocal Monte Carlo radiation was coupled with a turbulent, large-eddy simulation combustion model. A technique wherein domain patches are stitched together was implemented to allow for scalable parallelism. The combustion model runs in parallel on a decomposed domain. The radiation model runs in parallel on a recomposed domain. The recomposed domain is stored on each processor after information sharing of the decomposed domain is handled via the message passing interface. Verification and validation testing of the new radiation model were favorable. Strong scaling analyses were performed on the Ember cluster and the Titan cluster for the CPU-radiation model and GPU-radiation model, respectively. The model demonstrated strong scaling to over 1,700 and 16,000 processing cores on Ember and Titan, respectively.
NASA Astrophysics Data System (ADS)
Vivoni, Enrique R.; Mascaro, Giuseppe; Mniszewski, Susan; Fasel, Patricia; Springer, Everett P.; Ivanov, Valeriy Y.; Bras, Rafael L.
2011-10-01
A major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models are now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.
Modelling and simulation of parallel triangular triple quantum dots (TTQD) by using SIMON 2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fathany, Maulana Yusuf, E-mail: myfathany@gmail.com; Fuada, Syifaul, E-mail: fsyifaul@gmail.com; Lawu, Braham Lawas, E-mail: bram-labs@rocketmail.com
2016-04-19
This research presents an analysis of the modeling of parallel Triple Quantum Dots (TQD) using SIMON (SIMulation Of Nano-structures). The Single Electron Transistor (SET) is used as the basic concept of the modeling. We design the structure of the parallel TQD from metallic material with a triangular geometry, referred to as Triangular Triple Quantum Dots (TTQD). We simulate it under several scenarios using different parameters, such as different values of capacitance, various gate voltages, and different thermal conditions.
Shaw, Sudipta; Lukoyanov, Dmitriy; Danyal, Karamatullah; Dean, Dennis R; Hoffman, Brian M; Seefeldt, Lance C
2014-09-10
Investigations of reduction of nitrite (NO2(-)) to ammonia (NH3) by nitrogenase indicate a limiting stoichiometry, NO2(-) + 6e(-) + 12ATP + 7H(+) → NH3 + 2H2O + 12ADP + 12Pi. Two intermediates freeze-trapped during NO2(-) turnover by nitrogenase variants and investigated by Q-band ENDOR/ESEEM are identical to states, denoted H and I, formed on the pathway of N2 reduction. The proposed NO2(-) reduction intermediate hydroxylamine (NH2OH) is a nitrogenase substrate for which the H and I reduction intermediates also can be trapped. Viewing N2 and NO2(-) reductions in light of their common reduction intermediates and of NO2(-) reduction by multiheme cytochrome c nitrite reductase (ccNIR) leads us to propose that NO2(-) reduction by nitrogenase begins with the generation of NO2H bound to a state in which the active-site FeMo-co (M) has accumulated two [e(-)/H(+)] (E2), stored as a (bridging) hydride and proton. Proton transfer to NO2H and H2O loss leaves M-[NO(+)]; transfer of the E2 hydride to the [NO(+)] directly to form HNO bound to FeMo-co is one of two alternative means for avoiding formation of a terminal M-[NO] thermodynamic "sink". The N2 and NO2(-) reduction pathways converge upon reduction of NH2NH2 and NH2OH bound states to form state H with [-NH2] bound to M. Final reduction converts H to I, with NH3 bound to M. The results presented here, combined with the parallels with ccNIR, support a N2 fixation mechanism in which liberation of the first NH3 occurs upon delivery of five [e(-)/H(+)] to N2, but a total of seven [e(-)/H(+)] to FeMo-co when obligate H2 evolution is considered, and not earlier in the reduction process.
Vacuum stability in the U(1)χ extended model with vanishing scalar potential at the Planck scale
NASA Astrophysics Data System (ADS)
Haba, Naoyuki; Yamaguchi, Yuya
2015-09-01
We investigate the vacuum stability in a scale-invariant local U(1)_χ model with vanishing scalar potential at the Planck scale. We find that it is impossible to realize the Higgs mass of 125 GeV while keeping the Higgs quartic coupling λ_H positive at all energy scales, that is, the same as in the standard model. Once one allows λ_H < 0, lower bounds on the Z' boson mass are obtained through the positive definiteness of the scalar mass squared eigenvalues, although these bounds are smaller than the LHC bounds. On the other hand, the upper bounds strongly depend on the number of relevant Majorana Yukawa couplings of the right-handed neutrinos, N_ν. Considering decoupling effects of the Z' boson and the right-handed neutrinos, the condition of a positive singlet scalar quartic coupling, λ_φ > 0, gives the upper bound in the N_ν = 1 case, while it does not constrain the N_ν = 2 and 3 cases. In particular, we find that the Z' boson mass is tightly restricted in the N_ν = 1 case, M_{Z'} ≲ 3.7 TeV.
Local Rademacher Complexity: sharper risk bounds with and without unlabeled samples.
Oneto, Luca; Ghio, Alessandro; Ridella, Sandro; Anguita, Davide
2015-05-01
We derive in this paper a new Local Rademacher Complexity risk bound on the generalization ability of a model, which is able to take advantage of the availability of unlabeled samples. Moreover, this new bound improves state-of-the-art results even when no unlabeled samples are available.
Alvarez, Laura V.; Schmeeckle, Mark W.; Grams, Paul E.
2017-01-01
Lateral flow separation occurs in rivers where banks exhibit strong curvature. In canyon-bound rivers, lateral recirculation zones are the principal storage of fine-sediment deposits. A parallelized, three-dimensional, turbulence-resolving model was developed to study the flow structures along lateral separation zones located in two pools along the Colorado River in Marble Canyon. The model employs the detached eddy simulation (DES) technique, which resolves turbulence structures larger than the grid spacing in the interior of the flow. The DES-3D model is validated using Acoustic Doppler Current Profiler flow measurements taken during the 2008 controlled flood release from Glen Canyon Dam. A point-to-point validation using a number of skill metrics, often employed in hydrological research, is proposed here for fluvial modeling. The validation results show predictive capabilities of the DES model. The model reproduces the pattern and magnitude of the velocity in the lateral recirculation zone, including the size and position of the primary and secondary eddy cells, and the return current. The lateral recirculation zone is open, having continuous import of fluid upstream of the point of reattachment and export by the recirculation return current downstream of the point of separation. Differences in magnitude and direction of near-bed and near-surface velocity vectors are found, resulting in an inward vertical spiral. Interaction between the recirculation return current and the main flow is dynamic, with large temporal changes in flow direction and magnitude. Turbulence structures with a predominately vertical axis of vorticity are observed in the shear layer, becoming three-dimensional without preferred orientation downstream.
NASA Astrophysics Data System (ADS)
Jang, W.; Engda, T. A.; Neff, J. C.; Herrick, J.
2017-12-01
Many crop models are increasingly used to evaluate crop yields at regional and global scales. However, implementation of these models across large areas using fine-scale grids is limited by computational time requirements. In order to facilitate global gridded crop modeling under various scenarios (i.e., different crops, management schedules, fertilizer, and irrigation) using the Environmental Policy Integrated Climate (EPIC) model, we developed a distributed parallel computing framework in Python. Our local desktop with 14 cores (28 threads) was used to test the distributed parallel computing framework in Iringa, Tanzania, which has 406,839 grid cells. High-resolution soil data, SoilGrids (250 x 250 m), and climate data, AgMERRA (0.25 x 0.25 deg), were also used as input data for the gridded EPIC model. The framework includes a master file for parallel computing, an input database, input data formatters, EPIC model execution, and output analyzers. Through the master file for parallel computing, the EPIC simulation is divided into jobs across a user-defined number of CPU threads. The raw database is formatted by the EPIC input data formatters, and the formatted data are passed to the EPIC simulation jobs. Then, 28 EPIC jobs run simultaneously and only the result files of interest are parsed and passed to the output analyzers. We applied scenarios with seven different slopes and twenty-four fertilizer ranges. Parallelized input generators create the different scenarios as a list for distributed parallel computing. After all simulations are completed, parallelized output analyzers are used to analyze all outputs according to the different scenarios. This saves significant computing time and resources, making it possible to conduct gridded modeling at regional to global scales with high-resolution data. For example, serial processing for the Iringa test case would require 113 hours, while the framework developed in this study requires only approximately 6 hours, a nearly 95% reduction in computing time.
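As a hedged illustration of the dispatch step described above (the actual framework, its file layout, and the EPIC command line are not given in the abstract, so the names below are placeholders), a Python process pool can farm independent grid-cell runs out to the 28 hardware threads:

import multiprocessing as mp
import subprocess
from pathlib import Path

def run_epic_cell(cell_id):
    # Hypothetical per-cell job: prepare a working directory, invoke the EPIC
    # executable (name assumed), and hand the outputs to a parser.
    workdir = Path(f"runs/cell_{cell_id}")
    workdir.mkdir(parents=True, exist_ok=True)
    subprocess.run(["epic"], cwd=workdir, check=False)   # assumed executable name
    return cell_id

if __name__ == "__main__":
    cells = range(406_839)                  # Iringa test-case size from the abstract
    with mp.Pool(processes=28) as pool:     # matches the 28-thread desktop
        for _ in pool.imap_unordered(run_epic_cell, cells, chunksize=500):
            pass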
Parallelization of elliptic solver for solving 1D Boussinesq model
NASA Astrophysics Data System (ADS)
Tarwidi, D.; Adytia, D.
2018-03-01
In this paper, a parallel implementation of an elliptic solver for the 1D Boussinesq model is presented. The numerical solution of the Boussinesq model is obtained by implementing a staggered grid scheme for the continuity, momentum, and elliptic equations of the model. The tridiagonal system emerging from the numerical scheme of the elliptic equation is solved by the cyclic reduction algorithm. The parallel implementation of cyclic reduction is executed on multicore processors with a shared memory architecture using OpenMP. To measure the performance of the parallel program, the number of grid points is varied from 2^8 to 2^14. Two numerical test cases, i.e. propagation of a solitary wave and of a standing wave, are proposed to evaluate the parallel program. The numerical results are verified against analytical solutions of the solitary and standing waves. The best speedups for the solitary and standing wave test cases are about 2.07 with 2^14 grid points and 1.86 with 2^13 grid points, respectively, both obtained using 8 threads. Moreover, the best efficiency of the parallel program is 76.2% and 73.5% for the solitary and standing wave test cases, respectively.
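Cyclic reduction halves the tridiagonal system at each level by eliminating every other unknown; the eliminations within a level are independent of each other, which is exactly what an OpenMP implementation can exploit. A hedged serial sketch in Python/NumPy (vectorized rather than threaded, and restricted to sizes n = 2^k - 1) is:

import numpy as np

def cyclic_reduction(a, b, c, d):
    # Solve a*x[i-1] + b*x[i] + c*x[i+1] = d with a[0] = c[-1] = 0 and
    # n = 2**k - 1.  Each elimination below is row-independent (parallelizable).
    a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
    n = len(b)
    if n == 1:
        return d / b
    i = np.arange(1, n, 2)                       # unknowns kept in the reduced system
    alpha, beta = -a[i] / b[i - 1], -c[i] / b[i + 1]
    a2 = alpha * a[i - 1]
    b2 = b[i] + alpha * c[i - 1] + beta * a[i + 1]
    c2 = beta * c[i + 1]
    d2 = d[i] + alpha * d[i - 1] + beta * d[i + 1]
    x = np.zeros(n)
    x[i] = cyclic_reduction(a2, b2, c2, d2)      # recurse on the half-size system
    j = np.arange(0, n, 2)                       # back-substitute eliminated unknowns
    x[j] = (d[j] - a[j] * x[np.maximum(j - 1, 0)]
                 - c[j] * x[np.minimum(j + 1, n - 1)]) / b[j]
    return x

# quick check against a dense solve on a diagonally dominant system
n = 2**5 - 1
rng = np.random.default_rng(1)
b = 4 + rng.random(n); a = rng.random(n); c = rng.random(n); a[0] = c[-1] = 0.0
d = rng.random(n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
assert np.allclose(cyclic_reduction(a, b, c, d), np.linalg.solve(A, d))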
A hybrid parallel framework for the cellular Potts model simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Yi; He, Kejing; Dong, Shoubin
2009-01-01
The Cellular Potts Model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximated, which cannot be used for large-scale complex 3D simulations. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed to clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on a shared-memory SMP system using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving and SMP systems are more and more common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel Cellular Potts Model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for the large-scale simulation (~10^8 sites) of the complex collective behavior of numerous cells (~10^6).
Resolvent-based modeling of passive scalar dynamics in wall-bounded turbulence
NASA Astrophysics Data System (ADS)
Dawson, Scott; Saxton-Fox, Theresa; McKeon, Beverley
2017-11-01
The resolvent formulation of the Navier-Stokes equations expresses the system state as the output of a linear (resolvent) operator acting upon a nonlinear forcing. Previous studies have demonstrated that a low-rank approximation of this linear operator predicts many known features of incompressible wall-bounded turbulence. In this work, this resolvent model for wall-bounded turbulence is extended to include a passive scalar field. This formulation allows for a number of additional simplifications that reduce model complexity. Firstly, it is shown that the effect of changing scalar diffusivity can be approximated through a transformation of spatial wavenumbers and temporal frequencies. Secondly, passive scalar dynamics may be studied through the low-rank approximation of a passive scalar resolvent operator, which is decoupled from velocity response modes. Thirdly, this passive scalar resolvent operator is amenable to approximation by semi-analytic methods. We investigate the extent to which this resulting hierarchy of models can describe and predict passive scalar dynamics and statistics in wall-bounded turbulence. The support of AFOSR under Grant Numbers FA9550-16-1-0232 and FA9550-16-1-0361 is gratefully acknowledged.
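For orientation, a hedged sketch of the generic resolvent relation this work builds on (the McKeon-Sharma form; the passive-scalar extension appends an advection-diffusion equation to the linear operator, details not reproduced here): at each wavenumber-frequency triplet k = (k_x, k_z, ω),

% resolvent relation and its singular-value (low-rank) expansion
\begin{equation}
  \hat{u}_k = \left(-i\omega I - \mathcal{L}_k\right)^{-1} \hat{f}_k \equiv H_k \hat{f}_k,
  \qquad
  H_k = \sum_{j} \sigma_j\, \psi_j \phi_j^{*},
\end{equation}

and a rank-1 model retains only the leading singular triplet (σ_1, ψ_1, φ_1).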
A Parallel Saturation Algorithm on Shared Memory Architectures
NASA Technical Reports Server (NTRS)
Ezekiel, Jonathan; Siminiceanu
2007-01-01
Symbolic state-space generators are notoriously hard to parallelize. However, the Saturation algorithm implemented in the SMART verification tool differs from other sequential symbolic state-space generators in that it exploits the locality of firing events in asynchronous system models. This paper explores whether event locality can be utilized to efficiently parallelize Saturation on shared-memory architectures. Conceptually, we propose to parallelize the firing of events within a decision diagram node, which is technically realized via a thread pool. We discuss the challenges involved in our parallel design and conduct experimental studies on its prototypical implementation. On a dual-processor dual-core PC, our studies show speed-ups for several example models, e.g., of up to 50% for a Kanban model, when compared to running our algorithm only on a single core.
Lower Bounds to the Reliabilities of Factor Score Estimators.
Hessen, David J
2016-10-06
Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone's factor score estimators, Bartlett's factor score estimators, and McDonald's factor score estimators are derived and conditions are given under which these lower bounds are equal. The relative performance of the derived lower bounds is studied using classic example data sets. The results show that estimates of the lower bounds to the reliabilities of Thurstone's factor score estimators are greater than or equal to the estimates of the lower bounds to the reliabilities of Bartlett's and McDonald's factor score estimators.
NASA Technical Reports Server (NTRS)
Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinvas
2009-01-01
This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method to metrics-driven adaptive control. The bounded linear stability analysis method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics in order to achieve robustness. By applying the bounded linear stability analysis method, the adaptive gain is adjusted during adaptation in order to meet certain phase margin requirements. The metrics-driven adaptive control is evaluated for a linear model of a damaged twin-engine generic transport aircraft. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics or time delay.
Mirror energy difference and the structure of loosely bound proton-rich nuclei around A =20
NASA Astrophysics Data System (ADS)
Yuan, Cenxi; Qi, Chong; Xu, Furong; Suzuki, Toshio; Otsuka, Takaharu
2014-04-01
The properties of loosely bound proton-rich nuclei around A =20 are investigated within the framework of the nuclear shell model. In these nuclei, the strength of the effective interactions involving the loosely bound proton s1/2 orbit is significantly reduced in comparison with that of those in their mirror nuclei. We evaluate the reduction of the effective interaction by calculating the monopole-based-universal interaction (VMU) in the Woods-Saxon basis. The shell-model Hamiltonian in the sd shell, such as USD, can thus be modified to reproduce the binding energies and energy levels of the weakly bound proton-rich nuclei around A =20. The effect of the reduction of the effective interaction on the structure and decay properties of these nuclei is also discussed.
Skew information in the XY model with staggered Dzyaloshinskii-Moriya interaction
NASA Astrophysics Data System (ADS)
Qiu, Liang; Quan, Dongxiao; Pan, Fei; Liu, Zhi
2017-06-01
We study the performance of the lower bound of skew information in the vicinity of transition point for the anisotropic spin-1/2 XY chain with staggered Dzyaloshinskii-Moriya interaction by use of quantum renormalization-group method. For a fixed value of the Dzyaloshinskii-Moriya interaction, there are two saturated values for the lower bound of skew information corresponding to the spin-fluid and Néel phases, respectively. The scaling exponent of the lower bound of skew information closely relates to the correlation length of the model and the Dzyaloshinskii-Moriya interaction shifts the factorization point. Our results show that the lower bound of skew information can be a good candidate to detect the critical point of XY spin chain with staggered Dzyaloshinskii-Moriya interaction.
Continuous Opinion Dynamics Under Bounded Confidence: A Survey
NASA Astrophysics Data System (ADS)
Lorenz, Jan
Models of continuous opinion dynamics under bounded confidence have been presented independently by Krause and Hegselmann and by Deffuant et al. in 2000. They have raised a fair amount of attention in the communities of social simulation, sociophysics and complexity science. The researchers working on it come from disciplines such as physics, mathematics, computer science, social psychology and philosophy. In these models agents hold continuous opinions which they can gradually adjust if they hear the opinions of others. The idea of bounded confidence is that agents only interact if they are close in opinion to each other. Usually, the models are analyzed with agent-based simulations in a Monte Carlo style, but they can also be reformulated on the agent's density in the opinion space in a master equation style. The contribution of this survey is fourfold. First, it will present the agent-based and density-based modeling frameworks including the cases of multidimensional opinions and heterogeneous bounds of confidence. Second, it will give the bifurcation diagrams of cluster configuration in the homogeneous model with uniformly distributed initial opinions. Third, it will review the several extensions and the evolving phenomena which have been studied so far, and fourth it will state some open questions.
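As a concrete, hedged example of the agent-based formulation described above, the following Python sketch implements the Hegselmann-Krause update (each agent moves to the mean of all opinions within its confidence bound ε) for homogeneous bounds and uniformly distributed initial opinions; the parameter values are arbitrary.

import numpy as np

def hegselmann_krause(n=500, eps=0.2, steps=50, seed=0):
    # Synchronous bounded-confidence update: agent i averages all opinions
    # within distance eps of its own opinion.
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n)                 # uniform initial opinions
    for _ in range(steps):
        close = np.abs(x[:, None] - x[None, :]) <= eps
        x = (close * x[None, :]).sum(axis=1) / close.sum(axis=1)
    return x

clusters = np.unique(np.round(hegselmann_krause(), 3))
print(len(clusters), "opinion clusters")         # fewer clusters as eps grows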
Platelet binding sites for factor VIII in relation to fibrin and phosphatidylserine
Novakovic, Valerie A.; Shi, Jialan; Rasmussen, Jan; Pipe, Steven W.
2015-01-01
Thrombin-stimulated platelets expose very little phosphatidylserine (PS) but express binding sites for factor VIII (fVIII), casting doubt on the role of exposed PS as the determinant of binding sites. We previously reported that fVIII binding sites are increased three- to sixfold when soluble fibrin (SF) binds the αIIbβ3 integrin. This study focuses on the hypothesis that platelet-bound SF is the major source of fVIII binding sites. Less than 10% of fVIII was displaced from thrombin-stimulated platelets by lactadherin, a PS-binding protein, and an fVIII mutant defective in PS-dependent binding retained platelet affinity. Therefore, PS is not the determinant of most binding sites. FVIII bound immobilized SF and paralleled platelet binding in affinity, dependence on separation from von Willebrand factor, and mediation by the C2 domain. SF also enhanced activity of fVIII in the factor Xase complex by two- to fourfold. Monoclonal antibody (mAb) ESH8, against the fVIII C2 domain, inhibited binding of fVIII to SF and platelets but not to PS-containing vesicles. Similarly, mAb ESH4 against the C2 domain, inhibited >90% of platelet-dependent fVIII activity vs 35% of vesicle-supported activity. These results imply that platelet-bound SF is a component of functional fVIII binding sites. PMID:26162408
Falconer, I R
1976-03-01
To examine the relationship between the functioning of the adrenal and thyroid glands in sheep, plasma cortisol concentration, concentration of protein-bound 125I in thyroid vein plasma, heart rate and blood pressure were measured in ewes bearing exteriorized thyroid glands. During these measurements stresses were imposed on the animals: fear induced by pistol shots or by a barking dog, cold by cooling and wetting, and physical restraint by a loose harness. Increases in plasma cortisol concentration of 2-6 μg/100 ml were observed with each type of stressor, the response rapidly decreasing with habituation of the animal. Increases in the concentration of protein-bound 125I in thyroid vein plasma were also observed repeatedly during cooling and wetting, occasionally after the introduction of a barking dog, and during continued restraint. Cooling and wetting was the only stress causing consistent parallel activation of the adrenal cortex and thyroid gland; the other stressors resulted in independent fluctuations of secretions, as indicated by plasma cortisol concentration and the concentration of protein-bound 125I in thyroid vein plasma. No reciprocal relationship between thyroid gland and adrenal cortex activity was detected. It was concluded that these ewes, which had been accustomed to normal experimental procedures for a period of 2 years, demonstrated functional independence of thyroid and adrenal cortical secretions when subjected to stress.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeiss, C.R.; Levitz, D.; Suszko, I.M.
1978-08-01
IgE antibody specific for multiple allergens extracted from grass and ragweed pollens was measured by radioimmunoassay. The assay depends on the interaction between IgE antibody bound to a polystyrene solid phase, 125I-labeled grass allergens (GA), and ragweed allergens (RW). The binding of 125I-RW by serum IgE antibody from 37 allergic patients ranged from 0.2 ng to 75 ng RW protein (P) bound per ml. This binding of 125I-RW by patients' IgE was paralleled by their IgE binding of 125I-antigen E (AgE), a purified allergen from ragweed pollen (r = 0.90, p < 0.001). Inhibition of patients' IgE binding of 125I-RW by highly purified AgE ranged from 25 to 85%, indicating individual differences in patients' IgE response to inhaled ragweed pollen. The binding of 125I-GA by serum IgE antibody from 7 grass-sensitive patients ranged from 0.6 ng to 15 ng GA P bound per ml. This assay should be useful in the study of IgE responses to environmental agents containing multiple allergens and has the advantage that other antibody classes cannot interfere with the interaction between IgE antibody and labeled allergens.
Robust inference in the negative binomial regression model with an application to falls data.
Aeberhard, William H; Cantoni, Eva; Heritier, Stephane
2014-12-01
A popular way to model overdispersed count data, such as the number of falls reported during intervention studies, is by means of the negative binomial (NB) distribution. Classical estimating methods are well known to be sensitive to model misspecifications, which take the form of patients falling much more than expected in such intervention studies where the NB regression model is used. We extend in this article two approaches for building robust M-estimators of the regression parameters in the class of generalized linear models to the NB distribution. The first approach achieves robustness in the response by applying a bounded function on the Pearson residuals arising in the maximum likelihood estimating equations, while the second approach achieves robustness by bounding the unscaled deviance components. For both approaches, we explore different choices for the bounding functions. Through a unified notation, we show how close these approaches may actually be as long as the bounding functions are chosen and tuned appropriately, and provide the asymptotic distributions of the resulting estimators. Moreover, we introduce a robust weighted maximum likelihood estimator for the overdispersion parameter, specific to the NB distribution. Simulations under various settings show that redescending bounding functions yield estimates with smaller biases under contamination while keeping high efficiency at the assumed model, and this for both approaches. We present an application to a recent randomized controlled trial measuring the effectiveness of an exercise program at reducing the number of falls among people suffering from Parkinson's disease to illustrate the diagnostic use of such robust procedures and their need for reliable inference.
Coefficient of performance and its bounds with the figure of merit for a general refrigerator
NASA Astrophysics Data System (ADS)
Long, Rui; Liu, Wei
2015-02-01
A general refrigerator model with non-isothermal processes is studied. The coefficient of performance (COP) and its bounds at maximum χ figure of merit are obtained and analyzed. The model accounts for different heat capacities during the heat transfer processes, so different kinds of refrigerator cycles can be considered. Under the constant heat capacity condition, the upper bound of the COP is the Curzon-Ahlborn (CA) coefficient of performance and is independent of the time durations of the heat exchanging processes. Under the maximum χ criterion, refrigerator cycles in which the heat capacity in the heat absorbing process is not less than that in the heat releasing process, such as the reversed Brayton, reversed Otto and reversed Atkinson cycles, have COPs bounded by the CA coefficient of performance; otherwise, as for the reversed Diesel refrigerator cycle, the COP can exceed the CA coefficient of performance. Furthermore, general refined upper and lower bounds are proposed.
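For reference, a hedged note on the quantities named above (standard expressions from the maximum-χ literature, not formulas taken from this paper): with ε the COP, Q_c the heat absorbed from the cold reservoir per cycle, t_cycle the cycle time, and ε_C = T_c/(T_h - T_c) the Carnot COP,

% \chi figure of merit and the Curzon--Ahlborn-like COP at maximum \chi
\begin{equation}
  \chi = \frac{\varepsilon\, Q_c}{t_{\mathrm{cycle}}},
  \qquad
  \varepsilon_{\mathrm{CA}} = \sqrt{1 + \varepsilon_C} - 1 .
\end{equation}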
The metamorphosis of 'culture-bound' syndromes.
Jilek, W G; Jilek-Aall, L
1985-01-01
Starting from a critical review of the concept of 'culture-bound' disorders and its development in comparative psychiatry, the authors present the changing aspects of two so-called culture-bound syndromes as paradigms of transcultural metamorphosis (koro) and intra-cultural metamorphosis (Salish Indian spirit sickness), respectively. The authors present recent data on epidemics of koro, which is supposedly bound to Chinese culture, in Thailand and India among non-Chinese populations. Neither the model of Oedipal castration anxiety nor the model of culture-specific pathogenicity, commonly adduced in psychiatric and ethnological literature, explain these phenomena. The authors' data on Salish Indian spirit sickness describes the contemporary condition as anomic depression, which is significantly different from its traditional namesake. The traditional concept was redefined by Salish ritual specialists in response to current needs imposed by social changes. The stresses involved in creating the contemporary phenomena of koro and spirit sickness are neither culture-specific nor culture-inherent, as postulated for 'culture-bound' syndromes, rather they are generated by a feeling of powerlessness caused by perceived threats to ethnic survival.
STATIC QUARK ANTI-QUARK FREE AND INTERNAL ENERGY IN 2-FLAVOR QCD AND BOUND STATES IN THE QGP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
ZANTOW, F.; KACZMAREK, O.
2005-07-25
We present results on heavy quark free energies in 2-flavour QCD. The temperature dependence of the interaction between static quark anti-quark pairs will be analyzed in terms of temperature dependent screening radii, which give a first estimate of the medium modification of (heavy quark) bound states in the quark gluon plasma. Comparing those radii to the (zero temperature) mean squared charge radii of charmonium states indicates that the J/ψ may survive the phase transition as a bound state, while the χ_c and ψ′ are expected to show significant thermal modifications at temperatures close to the transition. Furthermore we will analyze the relation between heavy quark free energies, entropy contributions and internal energy and discuss their relation to potential models used to analyze the melting of heavy quark bound states above the deconfinement temperature. Results of different groups and various potential models for bound states in the deconfined phase of QCD are compared.
Target intersection probabilities for parallel-line and continuous-grid types of search
McCammon, R.B.
1977-01-01
The expressions for calculating the probability of intersection of hidden targets of different sizes and shapes for parallel-line and continuous-grid types of search can be formulated by using the concept of conditional probability. When the prior probability of the orientation of a hidden target is represented by a uniform distribution, the calculated posterior probabilities are identical with the results obtained by the classic methods of probability. For hidden targets of different sizes and shapes, the following generalizations about the probability of intersection can be made: (1) to a first approximation, the probability of intersection of a hidden target is proportional to the ratio of the greatest dimension of the target (viewed in plane projection) to the minimum line spacing of the search pattern; (2) the shape of the hidden target does not greatly affect the probability of intersection when the largest dimension of the target is small relative to the minimum spacing of the search pattern; (3) the probability of intersecting a target twice for a particular type of search can be used as a lower bound if there is an element of uncertainty of detection for a particular type of tool; (4) the geometry of the search pattern becomes more critical when the largest dimension of the target equals or exceeds the minimum spacing of the search pattern; (5) for elongate targets, the probability of intersection is greater for parallel-line search than for an equivalent continuous square-grid search when the largest dimension of the target is less than the minimum spacing of the search pattern, whereas the opposite is true when the largest dimension exceeds the minimum spacing; (6) the probability of intersection for nonorthogonal continuous-grid search patterns is not greatly different from the probability of intersection for the equivalent orthogonal continuous-grid pattern when the orientation of the target is unknown. The probability of intersection for an elliptically shaped target can be approximated by treating the ellipse as intermediate between a circle and a line. A search conducted along a continuous rectangular grid can be represented as intermediate between a search along parallel lines and a search along a continuous square grid. On this basis, upper and lower bounds for the probability of intersection of an elliptically shaped target for a continuous rectangular grid can be calculated. Charts have been constructed that permit the values of these probabilities to be obtained graphically. The use of conditional probability allows the explorationist greater flexibility in considering alternate search strategies for locating hidden targets.
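Generalization (1) above can be checked numerically. The hedged Monte Carlo sketch below estimates the chance that equally spaced parallel search lines intersect a hidden elliptical target with uniformly random position and orientation; it illustrates the geometry only and is not the paper's chart construction.

import numpy as np

def p_intersect_ellipse(a, b, spacing, n=200_000, seed=0):
    # a >= b are the semi-axes; the search lines are horizontal with the given spacing.
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, np.pi, n)        # orientation of the major axis
    u = rng.uniform(0.0, spacing, n)          # centre height above the nearest line below
    # half-width of the ellipse projected perpendicular to the lines
    h = np.sqrt((a * np.sin(theta))**2 + (b * np.cos(theta))**2)
    return ((u <= h) | (spacing - u <= h)).mean()

# probability grows roughly with (largest target dimension) / (line spacing)
print(p_intersect_ellipse(a=1.0, b=0.2, spacing=4.0))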
Research on Parallel Three Phase PWM Converters base on RTDS
NASA Astrophysics Data System (ADS)
Xia, Yan; Zou, Jianxiao; Li, Kai; Liu, Jingbo; Tian, Jun
2018-01-01
Parallel operation of converters can increase the capacity of the system, but it may lead to a zero-sequence circulating current, so the control of the circulating current is an important goal in the design of parallel inverters. In this paper, the Real Time Digital Simulator (RTDS) is used to model the parallel converter system in real time and to study suppression of the circulating current. The equivalent model of two parallel converters and the zero-sequence circulating current (ZSCC) was established and analyzed, and a strategy using variable zero-vector control was proposed to suppress the circulating current. For two parallel modular converters, a hardware-in-the-loop (HIL) study based on RTDS and a practical experiment were implemented; the results prove that the proposed control strategy is feasible and effective.
Lineation-parallel c-axis Fabric of Quartz Formed Under Water-rich Conditions
NASA Astrophysics Data System (ADS)
Wang, Y.; Zhang, J.; Li, P.
2014-12-01
The crystallographic preferred orientation (CPO) of quartz is of great significance because it records much valuable information pertinent to the deformation of quartz-rich rocks in the continental crust. The lineation-parallel c-axis CPO (i.e., c-axis forming a maximum parallel to the lineation) in naturally deformed quartz is generally considered to form under high temperature (> ~550 ºC) conditions. However, most laboratory deformation experiments on quartzite have failed to produce such a CPO at high temperatures up to 1200 ºC. Here we report a new occurrence of the lineation-parallel c-axis CPO of quartz from kyanite-quartz veins in eclogite. Optical microstructural observations, Fourier transform infrared (FTIR) and electron backscattered diffraction (EBSD) techniques were integrated to illuminate the nature of the quartz CPOs. Quartz exhibits mostly straight to slightly curved grain boundaries, modest intracrystalline plasticity, and significant shape preferred orientation (SPO) and CPOs, indicating that dislocation creep dominated the deformation of quartz. Kyanite grains in the veins are mostly strain-free, suggestive of their higher strength than quartz. The pronounced SPO and CPOs in kyanite were interpreted to originate from anisotropic crystal growth and/or mechanical rotation during vein-parallel shearing. FTIR results show quartz contains a trivial amount of structurally bound water (several tens of H/106 Si), while kyanite has a water content of 384-729 H/106 Si; however, petrographic observations suggest quartz in the veins was deformed under water-rich conditions. We argue that the observed lineation-parallel c-axis fabric in quartz was inherited from preexisting CPOs as a result of anisotropic grain growth under stress facilitated by water, rather than being due to a dominant c-slip. The preservation of the quartz CPOs probably benefited from the preexisting quartz CPOs, which render most quartz grains unsuitably oriented for an easy a-slip at lower temperatures, and from the weak deformation during subsequent exhumation. This hypothesis provides a reasonable explanation for the observations that most lineation-parallel c-axis fabrics of quartz are found in veins and that deformation experiments on quartz-rich rocks at high temperature have failed to produce such CPOs.
Capabilities of Fully Parallelized MHD Stability Code MARS
NASA Astrophysics Data System (ADS)
Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang
2016-10-01
Results of full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. Parallel version of MARS, named PMARS, has been recently developed at FAR-TECH. Parallelized MARS is an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both fluid and kinetic plasma models, implemented in MARS. Parallelization of the code included parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse vector iterations algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the MARS algorithm using parallel libraries and procedures. Parallelized MARS is capable of calculating eigenmodes with significantly increased spatial resolution: up to 5,000 adapted radial grid points with up to 500 poloidal harmonics. Such resolution is sufficient for simulation of kink, tearing and peeling-ballooning instabilities with physically relevant parameters. Work is supported by the U.S. DOE SBIR program.
Fully Parallel MHD Stability Analysis Tool
NASA Astrophysics Data System (ADS)
Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang
2015-11-01
Progress on full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and it is widely used by fusion community. Parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both fluid and kinetic plasma models, already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse iterations algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the present MARS algorithm using parallel libraries and procedures. Results of MARS parallelization and of the development of a new fix boundary equilibrium code adapted for MARS input will be reported. Work is supported by the U.S. DOE SBIR program.
NASA Astrophysics Data System (ADS)
Boyko, Oleksiy; Zheleznyak, Mark
2015-04-01
The original numerical code TOPKAPI-IMMS of the distributed rainfall-runoff model TOPKAPI (Todini et al., 1996-2014) has been developed and implemented in Ukraine. A parallel version of the code has recently been developed for use on multiprocessor systems - multicore/multiprocessor PCs and clusters. The algorithm is based on a binary-tree decomposition of the watershed to balance the amount of computation across processors/cores. The Message Passing Interface (MPI) protocol is used as the parallel computing framework. The numerical efficiency of the parallelization algorithm is demonstrated for case studies of flood prediction in mountain watersheds of the Ukrainian Carpathian region. The modeling results are compared with predictions based on lumped-parameter models.
Dynamic modeling of parallel robots for computed-torque control implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Codourey, A.
1998-12-01
In recent years, increased interest in parallel robots has been observed. Their control with modern theory, such as the computed-torque method, has, however, been restrained, essentially due to the difficulty in establishing a simple dynamic model that can be calculated in real time. In this paper, a simple method based on the virtual work principle is proposed for modeling parallel robots. The mass matrix of the robot, needed for decoupling control strategies, does not explicitly appear in the formulation; however, it can be computed separately, based on kinetic energy considerations. The method is applied to the DELTA parallel robot, leading to a very efficient model that has been implemented in a real-time computed-torque control algorithm.
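The abstract does not reproduce the DELTA model itself, but the computed-torque law it feeds is standard; the hedged Python sketch below shows that control structure, with placeholder callables M(q) and c(q, dq) standing in for whatever the virtual-work model provides.

import numpy as np

def computed_torque(q, dq, q_des, dq_des, ddq_des, M, c, Kp, Kd):
    # Feedback linearization: command tau = M(q) * (desired acceleration +
    # PD correction) + the remaining dynamic terms c(q, dq).
    e, de = q_des - q, dq_des - dq
    v = ddq_des + Kd @ de + Kp @ e
    return M(q) @ v + c(q, dq)

# toy 3-DOF example with a constant mass matrix and no velocity/gravity terms
M = lambda q: np.eye(3)
c = lambda q, dq: np.zeros(3)
tau = computed_torque(np.zeros(3), np.zeros(3), np.ones(3), np.zeros(3), np.zeros(3),
                      M, c, Kp=25.0 * np.eye(3), Kd=10.0 * np.eye(3))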
Classical Physics and the Bounds of Quantum Correlations.
Frustaglia, Diego; Baltanás, José P; Velázquez-Ahumada, María C; Fernández-Prieto, Armando; Lujambio, Aintzane; Losada, Vicente; Freire, Manuel J; Cabello, Adán
2016-06-24
A unifying principle explaining the numerical bounds of quantum correlations remains elusive, despite the efforts devoted to identifying it. Here, we show that these bounds are indeed not exclusive to quantum theory: for any abstract correlation scenario with compatible measurements, models based on classical waves produce probability distributions indistinguishable from those of quantum theory and, therefore, share the same bounds. We demonstrate this finding by implementing classical microwaves that propagate along meter-size transmission-line circuits and reproduce the probabilities of three emblematic quantum experiments. Our results show that the "quantum" bounds would also occur in a classical universe without quanta. The implications of this observation are discussed.
Tomasiak, Thomas M.; Archuleta, Tara L.; Andréll, Juni; Luna-Chávez, César; Davis, Tyler A.; Sarwar, Maruf; Ham, Amy J.; McDonald, W. Hayes; Yankovskaya, Victoria; Stern, Harry A.; Johnston, Jeffrey N.; Maklashina, Elena; Cecchini, Gary; Iverson, Tina M.
2011-01-01
Complex II superfamily members catalyze the kinetically difficult interconversion of succinate and fumarate. Due to the relative simplicity of complex II substrates and their similarity to other biologically abundant small molecules, substrate specificity presents a challenge in this system. In order to identify determinants for on-pathway catalysis, off-pathway catalysis, and enzyme inhibition, crystal structures of Escherichia coli menaquinol:fumarate reductase (QFR), a complex II superfamily member, were determined bound to the substrate, fumarate, and the inhibitors oxaloacetate, glutarate, and 3-nitropropionate. Optical difference spectroscopy and computational modeling support a model where QFR twists the dicarboxylate, activating it for catalysis. Orientation of the C2–C3 double bond of activated fumarate parallel to the C(4a)–N5 bond of FAD allows orbital overlap between the substrate and the cofactor, priming the substrate for nucleophilic attack. Off-pathway catalysis, such as the conversion of malate to oxaloacetate or the activation of the toxin 3-nitropropionate may occur when inhibitors bind with a similarly activated bond in the same position. Conversely, inhibitors that do not orient an activatable bond in this manner, such as glutarate and citrate, are excluded from catalysis and act as inhibitors of substrate binding. These results support a model where electronic interactions via geometric constraint and orbital steering underlie catalysis by QFR. PMID:21098488
Membrane Perturbation Induced by Interfacially Adsorbed Peptides
Zemel, Assaf; Ben-Shaul, Avinoam; May, Sylvio
2004-01-01
The structural and energetic characteristics of the interaction between interfacially adsorbed (partially inserted) α-helical, amphipathic peptides and the lipid bilayer substrate are studied using a molecular level theory of lipid chain packing in membranes. The peptides are modeled as “amphipathic cylinders” characterized by a well-defined polar angle. Assuming two-dimensional nematic order of the adsorbed peptides, the membrane perturbation free energy is evaluated using a cell-like model; the peptide axes are parallel to the membrane plane. The elastic and interfacial contributions to the perturbation free energy of the “peptide-dressed” membrane are evaluated as a function of: the peptide penetration depth into the bilayer's hydrophobic core, the membrane thickness, the polar angle, and the lipid/peptide ratio. The structural properties calculated include the shape and extent of the distorted (stretched and bent) lipid chains surrounding the adsorbed peptide, and their orientational (C-H) bond order parameter profiles. The changes in bond order parameters attendant upon peptide adsorption are in good agreement with magnetic resonance measurements. Also consistent with experiment, our model predicts that peptide adsorption results in membrane thinning. Our calculations reveal pronounced, membrane-mediated, attractive interactions between the adsorbed peptides, suggesting a possible mechanism for lateral aggregation of membrane-bound peptides. As a special case of interest, we have also investigated completely hydrophobic peptides, for which we find a strong energetic preference for the transmembrane (inserted) orientation over the horizontal (adsorbed) orientation. PMID:15189858
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomasiak, Thomas M.; Archuleta, Tara L.; Andréll, Juni
2012-01-05
Complex II superfamily members catalyze the kinetically difficult interconversion of succinate and fumarate. Due to the relative simplicity of complex II substrates and their similarity to other biologically abundant small molecules, substrate specificity presents a challenge in this system. In order to identify determinants for on-pathway catalysis, off-pathway catalysis, and enzyme inhibition, crystal structures of Escherichia coli menaquinol:fumarate reductase (QFR), a complex II superfamily member, were determined bound to the substrate, fumarate, and the inhibitors oxaloacetate, glutarate, and 3-nitropropionate. Optical difference spectroscopy and computational modeling support a model where QFR twists the dicarboxylate, activating it for catalysis. Orientation of the C2-C3 double bond of activated fumarate parallel to the C(4a)-N5 bond of FAD allows orbital overlap between the substrate and the cofactor, priming the substrate for nucleophilic attack. Off-pathway catalysis, such as the conversion of malate to oxaloacetate or the activation of the toxin 3-nitropropionate may occur when inhibitors bind with a similarly activated bond in the same position. Conversely, inhibitors that do not orient an activatable bond in this manner, such as glutarate and citrate, are excluded from catalysis and act as inhibitors of substrate binding. These results support a model where electronic interactions via geometric constraint and orbital steering underlie catalysis by QFR.
A communication library for the parallelization of air quality models on structured grids
NASA Astrophysics Data System (ADS)
Miehe, Philipp; Sandu, Adrian; Carmichael, Gregory R.; Tang, Youhua; Dăescu, Dacian
PAQMSG is an MPI-based, Fortran 90 communication library for the parallelization of air quality models (AQMs) on structured grids. It consists of distribution, gathering and repartitioning routines for different domain decompositions implementing a master-worker strategy. The library is architecture and application independent and includes optimization strategies for different architectures. This paper presents the library from a user perspective. Results are shown from the parallelization of STEM-III on Beowulf clusters. The PAQMSG library is available on the web. The communication routines are easy to use, and should allow for an immediate parallelization of existing AQMs. PAQMSG can also be used for constructing new models.
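PAQMSG itself is a Fortran 90 library; purely to illustrate the master-worker distribute/gather pattern described above, here is a small mpi4py sketch. The grid dimensions, file name, and local update are hypothetical and do not reflect the PAQMSG API.

```python
# Illustration of distributing contiguous slabs of a structured grid from a
# master rank, updating them locally, and gathering them back.
# Run with, e.g.:  mpirun -n 4 python grid_scatter.py   (hypothetical name)
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nx, ny = 8, 6                       # global grid; nx assumed divisible by size
grid = np.arange(nx * ny, dtype="d").reshape(nx, ny) if rank == 0 else None

# Master distributes slabs of rows; workers operate on their local piece.
local = np.empty((nx // size, ny), dtype="d")
comm.Scatter(grid, local, root=0)
local += 1.0                        # stand-in for the local chemistry/transport step

# Gather the updated slabs back onto the master rank.
result = np.empty((nx, ny), dtype="d") if rank == 0 else None
comm.Gather(local, result, root=0)
if rank == 0:
    print(result.sum())
```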
The Problem of Limited Inter-rater Agreement in Modelling Music Similarity
Flexer, Arthur; Grill, Thomas
2016-01-01
One of the central goals of Music Information Retrieval (MIR) is the quantification of similarity between or within pieces of music. These quantitative relations should mirror the human perception of music similarity, which is however highly subjective with low inter-rater agreement. Unfortunately this principal problem has been given little attention in MIR so far. Since it is not meaningful to have computational models that go beyond the level of human agreement, these levels of inter-rater agreement present a natural upper bound for any algorithmic approach. We will illustrate this fundamental problem in the evaluation of MIR systems using results from two typical application scenarios: (i) modelling of music similarity between pieces of music; (ii) music structure analysis within pieces of music. For both applications, we derive upper bounds of performance which are due to the limited inter-rater agreement. We compare these upper bounds to the performance of state-of-the-art MIR systems and show how the upper bounds prevent further progress in developing better MIR systems. PMID:28190932
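As a rough illustration of how limited inter-rater agreement caps achievable performance, the sketch below compares the mean pairwise agreement among human raters (the ceiling) with a hypothetical system's mean agreement with those raters. The ratings are invented; they are not data from the paper.

```python
# Mean pairwise human agreement as a performance ceiling for an algorithm.
# All ratings below are invented for illustration.
import itertools
import numpy as np

ratings = {                     # rater -> binary similarity judgments, 6 item pairs
    "r1": [1, 0, 1, 1, 0, 1],
    "r2": [1, 0, 0, 1, 0, 1],
    "r3": [1, 1, 1, 1, 0, 0],
}
system = [1, 0, 1, 1, 1, 1]     # hypothetical MIR system output

def agreement(a, b):
    return np.mean(np.array(a) == np.array(b))

human_ceiling = np.mean([agreement(ratings[a], ratings[b])
                         for a, b in itertools.combinations(ratings, 2)])
system_score = np.mean([agreement(system, r) for r in ratings.values()])
print(f"human ceiling: {human_ceiling:.2f}, system: {system_score:.2f}")
```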
Fragment-based modelling of single stranded RNA bound to RNA recognition motif containing proteins
de Beauchene, Isaure Chauvot; de Vries, Sjoerd J.; Zacharias, Martin
2016-01-01
Protein-RNA complexes are important for many biological processes. However, structural modeling of such complexes is hampered by the high flexibility of RNA. Particularly challenging is the docking of single-stranded RNA (ssRNA). We have developed a fragment-based approach to model the structure of ssRNA bound to a protein, based on only the protein structure, the RNA sequence and conserved contacts. The conformational diversity of each RNA fragment is sampled by an exhaustive library of trinucleotides extracted from all known experimental protein–RNA complexes. The method was applied to ssRNA with up to 12 nucleotides which bind to dimers of the RNA recognition motifs (RRMs), a highly abundant eukaryotic RNA-binding domain. The fragment-based docking allows a precise de novo atomic modeling of protein-bound ssRNA chains. On a benchmark of seven experimental ssRNA–RRM complexes, near-native models (with a mean heavy-atom deviation of <3 Å from experiment) were generated for six out of seven bound RNA chains, and even more precise models (deviation < 2 Å) were obtained for five out of seven cases, a significant improvement compared to the state of the art. The method is not restricted to RRMs but was also successfully applied to Pumilio RNA binding proteins. PMID:27131381
Numerical and analytical bounds on threshold error rates for hypergraph-product codes
NASA Astrophysics Data System (ADS)
Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.
2018-06-01
We study analytically and numerically decoding properties of finite-rate hypergraph-product quantum low-density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.
Rotational relaxation of molecular hydrogen at moderate temperatures
NASA Technical Reports Server (NTRS)
Sharma, S. P.
1994-01-01
Using a coupled rotation-vibration-dissociation model, the rotational relaxation times for molecular hydrogen as a function of final temperature (500-5000 K), in a hypothetical scenario of sudden compression, are computed. The theoretical model is based on a master equation solver. The bound-bound and bound-free transition rates have been computed using a quasiclassical trajectory method. A review of the available experimental data on the rotational relaxation of hydrogen is presented, with a critical overview of the methods of measurement and data reduction, including the sources of error. These experimental data are then compared with the computed results.
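For readers unfamiliar with the master-equation form referenced above, dn_i/dt = sum_j (k_ij n_j - k_ji n_i), the toy sketch below integrates a three-level system. The rate constants are invented; the paper uses quasiclassical-trajectory rates for H2.

```python
# Toy master-equation integration for level populations n_i.
# k[i, j] is the (made-up) rate of the transition j -> i.
import numpy as np
from scipy.integrate import solve_ivp

k = np.array([[0.0, 0.5, 0.1],
              [0.2, 0.0, 0.4],
              [0.05, 0.3, 0.0]])

def master(t, n):
    gain = k @ n                   # population flowing into each state
    loss = k.sum(axis=0) * n       # population flowing out of each state
    return gain - loss

n0 = np.array([1.0, 0.0, 0.0])     # start with everything in state 0
sol = solve_ivp(master, (0.0, 20.0), n0)
print(sol.y[:, -1])                # approaches the steady-state populations
```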
Thomson scattering in the average-atom approximation.
Johnson, W R; Nilsen, J; Cheng, K T
2012-09-01
The average-atom model is applied to study Thomson scattering of x-rays from warm dense matter with emphasis on scattering by bound electrons. Parameters needed to evaluate the dynamic structure function (chemical potential, average ionic charge, free electron density, bound and continuum wave functions, and occupation numbers) are obtained from the average-atom model. The resulting analysis provides a relatively simple diagnostic for use in connection with x-ray scattering measurements. Applications are given to dense hydrogen, beryllium, aluminum, and titanium plasmas. In the case of titanium, bound states are predicted to modify the spectrum significantly.
Loss of the Endothelial Glycocalyx Links Albuminuria and Vascular Dysfunction
Ferguson, Joanne K.; Burford, James L.; Gevorgyan, Haykanush; Nakano, Daisuke; Harper, Steven J.; Bates, David O.; Peti-Peterdi, Janos
2012-01-01
Patients with albuminuria and CKD frequently have vascular dysfunction but the underlying mechanisms remain unclear. Because the endothelial surface layer, a meshwork of surface-bound and loosely adherent glycosaminoglycans and proteoglycans, modulates vascular function, its loss could contribute to both renal and systemic vascular dysfunction in proteinuric CKD. Using Munich-Wistar-Fromter (MWF) rats as a model of spontaneous albuminuric CKD, multiphoton fluorescence imaging and single-vessel physiology measurements revealed that old MWF rats exhibited widespread loss of the endothelial surface layer in parallel with defects in microvascular permeability to both water and albumin, in both continuous mesenteric microvessels and fenestrated glomerular microvessels. In contrast to young MWF rats, enzymatic disruption of the endothelial surface layer in old MWF rats resulted in neither additional loss of the layer nor additional changes in permeability. Intravenous injection of wheat germ agglutinin lectin and its adsorption onto the endothelial surface layer significantly improved glomerular albumin permeability. Taken together, these results suggest that widespread loss of the endothelial surface layer links albuminuric kidney disease with systemic vascular dysfunction, providing a potential therapeutic target for proteinuric kidney disease. PMID:22797190
DOE Office of Scientific and Technical Information (OSTI.GOV)
Druinsky, Alex; Ghysels, Pieter; Li, Xiaoye S.
In this paper, we study the performance of a two-level algebraic-multigrid algorithm, with a focus on the impact of the coarse-grid solver on performance. We consider two algorithms for solving the coarse-space systems: the preconditioned conjugate gradient method and a new robust HSS-embedded low-rank sparse-factorization algorithm. Our test data comes from the SPE Comparative Solution Project for oil-reservoir simulations. We contrast the performance of our code on one 12-core socket of a Cray XC30 machine with performance on a 60-core Intel Xeon Phi coprocessor. To obtain top performance, we optimized the code to take full advantage of fine-grained parallelism and made it thread-friendly for high thread count. We also developed a bounds-and-bottlenecks performance model of the solver which we used to guide us through the optimization effort, and also carried out performance tuning in the solver's large parameter space. As a result, significant speedups were obtained on both machines.
Mills, Deryck J; Vitt, Stella; Strauss, Mike; Shima, Seigo; Vonck, Janet
2013-01-01
Methanogenic archaea use a [NiFe]-hydrogenase, Frh, for oxidation/reduction of F420, an important hydride carrier in the methanogenesis pathway from H2 and CO2. Frh accounts for about 1% of the cytoplasmic protein and forms a huge complex consisting of FrhABG heterotrimers, each with a [NiFe] center, four Fe-S clusters and an FAD. Here, we report the structure determined by near-atomic resolution cryo-EM of Frh with and without bound substrate F420. The polypeptide chain of FrhB, for which there was no homolog, was traced de novo from the EM map. The 1.2-MDa complex contains 12 copies of the heterotrimer, which unexpectedly form a spherical protein shell with a hollow core. The cryo-EM map reveals strong electron density of the chains of metal clusters running parallel to the protein shell, and the F420-binding site is located at the end of the chain near the outside of the spherical structure. DOI: http://dx.doi.org/10.7554/eLife.00218.001 PMID:23483797
Tomographic PIV of flow through ordered thin porous media
NASA Astrophysics Data System (ADS)
Larsson, I. A. Sofia; Lundström, T. Staffan; Lycksam, Henrik
2018-06-01
Pressure-driven flow in a model of a thin porous medium is investigated using tomographic particle image velocimetry. The solid parts of the porous medium have the shape of vertical cylinders placed at equal interspatial distance from each other. The array of cylinders is confined between two parallel plates, meaning that the permeability is a function of the diameter and height of the cylinders, as well as their interspatial distance. Refractive index matching is applied to enable measurements without optical distortion, and a dummy cell is used for the calibration of the measurements. The results reveal that the averaged flow field changes substantially as the Reynolds number increases, and that the wakes formed downstream of the cylinders contain complex, three-dimensional vortex structures that are hard to visualize with only planar measurements. An interesting observation is that the time-averaged velocity maximum changes position as the Reynolds number increases. For low-Reynolds-number flow, the maximum is in the middle of the channel, while, for the higher Reynolds numbers investigated, two maxima appear closer to the bounding lower and upper walls.
Tension-induced binding of semiflexible biopolymers
NASA Astrophysics Data System (ADS)
Benetatos, Panayotis; von der Heydt, Alice; Zippelius, Annette
2015-03-01
We investigate theoretically the effect of polymer tension on the collective behaviour of reversible cross-links. We use a model of two parallel-aligned, weakly-bending wormlike chains with a regularly spaced sequence of binding sites subjected to a tensile force. Reversible cross-links attach and detach at the binding sites with an affinity controlled by a chemical potential. In a mean-field approach, we calculate the free energy of the system and we show the emergence of a free energy barrier which controls the reversible (un)binding. The tension affects the conformational entropy of the chains which competes with the binding energy of the cross-links. This competition gives rise to a sudden increase in the fraction of bound sites as the polymer tension increases. The force-induced first-order transition in the number of cross-links implies a sudden force-induced stiffening of the effective stretching modulus of the polymers. This mechanism may be relevant to the formation and stress-induced strengthening of stress fibers in the cytoskeleton. We acknowledge support by the Deutsche Forschungsgemeinschaft (DFG) via grant SFB-937/A1.
Elemans, Coen P H; Muller, Mees; Larsen, Ole Naesbye; van Leeuwen, Johan L
2009-04-01
Birdsong has developed into one of the important models for motor control of learned behaviour and shows many parallels with speech acquisition in humans. However, there are several experimental limitations to studying the vocal organ - the syrinx - in vivo. The multidisciplinary approach of combining experimental data and mathematical modelling has greatly improved the understanding of neural control and peripheral motor dynamics of sound generation in birds. Here, we present a simple mechanical model of the syrinx that facilitates detailed study of vibrations and sound production. Our model resembles the 'starling resistor', a collapsible tube model, and consists of a tube with a single membrane in its casing, suspended in an external pressure chamber and driven by various pressure patterns. With this design, we can separately control 'bronchial' pressure and tension in the oscillating membrane and generate a wide variety of 'syllables' with simple sweeps of the control parameters. We show that the membrane exhibits high frequency, self-sustained oscillations in the audio range (>600 Hz fundamental frequency) using laser Doppler vibrometry, and systematically explore the conditions for sound production of the model in its control space. The fundamental frequency of the sound increases with tension in three membranes with different stiffness and mass. The lower-bound fundamental frequency increases with membrane mass. The membrane vibrations are strongly coupled to the resonance properties of the distal tube, most likely because of its reflective properties to sound waves. Our model is a gross simplification of the complex morphology found in birds, and more closely resembles mathematical models of the syrinx. Our results confirm several assumptions underlying existing mathematical models in a complex geometry.
Toward a Model Framework of Generalized Parallel Componential Processing of Multi-Symbol Numbers
ERIC Educational Resources Information Center
Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph
2015-01-01
In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining…
Probability bounds analysis for nonlinear population ecology models.
Enszer, Joshua A; Andrei Măceș, D; Stadtherr, Mark A
2015-09-01
Mathematical models in population ecology often involve parameters that are empirically determined and inherently uncertain, with probability distributions for the uncertainties not known precisely. Propagating such imprecise uncertainties rigorously through a model to determine their effect on model outputs can be a challenging problem. We illustrate here a method for the direct propagation of uncertainties represented by probability bounds through nonlinear, continuous-time, dynamic models in population ecology. This makes it possible to determine rigorous bounds on the probability that some specified outcome for a population is achieved, which can be a core problem in ecosystem modeling for risk assessment and management. Results can be obtained at a computational cost that is considerably less than that required by statistical sampling methods such as Monte Carlo analysis. The method is demonstrated using three example systems, with focus on a model of an experimental aquatic food web subject to the effects of contamination by ionic liquids, a new class of potentially important industrial chemicals. Copyright © 2015. Published by Elsevier Inc.
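To convey the flavor of bounds propagation in a dynamic population model, the crude sketch below brackets the outcome of a logistic growth model by evaluating it at the endpoints of an interval-valued growth rate (valid here because the outcome is monotone in that rate). The paper's method is far more general and rigorous (probability boxes, verified enclosures); this is only an illustration with made-up numbers.

```python
# Crude interval bracketing of a logistic-model outcome under an imprecisely
# known growth rate r. Illustrative only; not the rigorous p-box method.
import numpy as np
from scipy.integrate import solve_ivp

def logistic(t, x, r, K):
    return r * x * (1.0 - x / K)

def population_at(T, r, K=100.0, x0=5.0):
    sol = solve_ivp(logistic, (0.0, T), [x0], args=(r, K))
    return sol.y[0, -1]

r_lo, r_hi = 0.2, 0.4        # made-up interval for the growth rate
lo = population_at(10.0, r_lo)
hi = population_at(10.0, r_hi)
print(f"population at t=10 lies in [{lo:.1f}, {hi:.1f}]")
```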
Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data
Yang, Yan; Simpson, Douglas
2010-01-01
Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950
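As a toy example of the kind of inflated mixture model discussed above, the sketch below fits a zero-inflated Poisson by quasi-Newton maximization of the log-likelihood on simulated data. The model and data are invented for illustration; the paper's unified framework covers a much wider class and also handles correlated data.

```python
# Toy zero-inflated Poisson fit by quasi-Newton (BFGS) maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(1)
n, pi_true, lam_true = 500, 0.3, 2.0
zeros = rng.random(n) < pi_true
y = np.where(zeros, 0, rng.poisson(lam_true, n))     # simulated ZIP data

def neg_loglik(theta):
    pi, lam = expit(theta[0]), np.exp(theta[1])      # keep parameters in range
    logp_pois = -lam + y * np.log(lam) - gammaln(y + 1)
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))   # structural or Poisson zero
    ll_pos = np.log(1 - pi) + logp_pois
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print(expit(fit.x[0]), np.exp(fit.x[1]))             # estimates of pi and lambda
```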
Toraya, Shuichi; Javkhlantugs, Namsrai; Mishima, Daisuke; Nishimura, Katsuyuki; Ueda, Kazuyoshi; Naito, Akira
2010-01-01
Bombolitin II (BLT2) is one of the hemolytic heptadecapeptides originally isolated from the venom of a bumblebee. Structure and orientation of BLT2 bound to 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) membranes were determined by solid-state 31P and 13C NMR spectroscopy. 31P NMR spectra showed that BLT2-DPPC membranes were disrupted into small particles below the gel-to-liquid crystalline phase transition temperature (Tc) and fused to form a magnetically oriented vesicle system where the membrane surface is parallel to the magnetic fields above the Tc. 13C NMR spectra of site-specifically 13C-labeled BLT2 at the carbonyl carbons were observed and the chemical shift anisotropies were analyzed to determine the dynamic structure of BLT2 bound to the magnetically oriented vesicle system. It was revealed that the membrane-bound BLT2 adopted an α-helical structure, rotating around the membrane normal with the tilt angle of the helical axis at 33°. Interatomic distances obtained from rotational-echo double-resonance experiments further showed that BLT2 adopted a straight α-helical structure. Molecular dynamics simulation performed in the BLT2-DPPC membrane system showed that the BLT2 formed a straight α-helix and that the C-terminus was inserted into the membrane. The α-helical axis is tilted 30° to the membrane normal, which is almost the same as the value obtained from solid-state NMR. These results suggest that the membrane disruption induced by BLT2 is attributed to insertion of BLT2 into the lipid bilayers. PMID:21081076
Xyce Parallel Electronic Simulator : users' guide, version 2.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoekstra, Robert John; Waters, Lon J.; Rankin, Eric Lamont
2004-06-01
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator capable of simulating electrical circuits at a variety of abstraction levels. Primarily, Xyce has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; (2) improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; (3) device models which are specifically tailored to meet Sandia's needs, including many radiation-aware devices; (4) a client-server or multi-tiered operating model wherein the numerical kernel can operate independently of the graphical user interface (GUI); and (5) object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message-passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. One feature required by designers is the ability to add device models, many specific to the needs of Sandia, to the code. To this end, the device package in the Xyce Parallel Electronic Simulator is designed to support a variety of device model inputs. These input formats include standard analytical models, behavioral models, look-up tables, and mesh-level PDE device models. Combined with this flexible interface is an architectural design that greatly simplifies the addition of circuit models. One of the most important features of Xyce is in providing a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia now has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods) research and development can be performed. Ultimately, these capabilities are migrated to end users.
Search asymmetries: parallel processing of uncertain sensory information.
Vincent, Benjamin T
2011-08-01
What is the mechanism underlying search phenomena such as search asymmetry? Two-stage models such as Feature Integration Theory and Guided Search propose parallel pre-attentive processing followed by serial post-attentive processing. They claim search asymmetry effects are indicative of finding pairs of features, one processed in parallel, the other in serial. An alternative proposal is that a 1-stage parallel process is responsible, and search asymmetries occur when one stimulus has greater internal uncertainty associated with it than another. While the latter account is simpler, only a few studies have set out to empirically test its quantitative predictions, and many researchers still subscribe to the 2-stage account. This paper examines three separate parallel models (Bayesian optimal observer, max rule, and a heuristic decision rule). All three parallel models can account for search asymmetry effects and I conclude that either people can optimally utilise the uncertain sensory data available to them, or are able to select heuristic decision rules which approximate optimal performance. Copyright © 2011 Elsevier Ltd. All rights reserved.
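To make the "max rule" parallel model concrete, the simulation sketch below responds "target present" when the largest noisy item response exceeds a criterion; giving one stimulus type larger internal noise than the other reproduces an asymmetry in hit and false-alarm rates. All parameter values are illustrative, not taken from the paper.

```python
# Minimal max-rule observer for visual search with unequal internal noise.
import numpy as np

rng = np.random.default_rng(0)

def max_rule_rates(n_items, target_mean, noise_target, noise_distractor,
                   criterion, n_trials=20000):
    # Target-present trials: one target plus (n_items - 1) distractors.
    tgt = rng.normal(target_mean, noise_target, n_trials)
    dis = rng.normal(0.0, noise_distractor, (n_trials, n_items - 1))
    hits = np.maximum(tgt, dis.max(axis=1)) > criterion
    # Target-absent trials: distractors only.
    dis_abs = rng.normal(0.0, noise_distractor, (n_trials, n_items))
    false_alarms = dis_abs.max(axis=1) > criterion
    return hits.mean(), false_alarms.mean()

# Low-uncertainty target among high-uncertainty distractors, and the reverse.
print(max_rule_rates(8, 1.0, 0.5, 1.0, criterion=2.0))
print(max_rule_rates(8, 1.0, 1.0, 0.5, criterion=2.0))
```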
Eivazian Kary, Naser; Alizadeh, Zhila
2017-05-01
Beauveria bassiana is a fungus which is widely used as a biological insecticide to control a number of economically important insect pests. Knowledge of the genetic diversity of the isolates, understanding the underlying nature of these evolutionary phenomena and finding appropriate and simple screening tools play an important role in developing effective biocontrol agents. Here, we monitored changes in the electrophoretic karyotype of small extrachromosomal DNA molecules, presumably mitochondrial DNA or plasmids, in several individual isolates of B. bassiana during forced in vitro evolution by continual subculture on artificial media, and we then evaluated the effect of these changes on the virulence of the isolates. Through agarose gel electrophoresis of the small extrachromosomal DNA molecules, we found that mutations accumulate quickly and obvious changes take place in the extrachromosomal DNAs of some isolates, although this did not always occur. This plasticity in response to culturing pressure suggests that the buffering capacity of the fungal genome against mutations is isolate dependent. Following the forced evolution by sub-culturing, five discriminable electrophoretic karyotypes of extrachromosomal DNAs were observed among isolates. The results showed that some isolates are prone to deep mutations, whereas others have efficiently conserved genomes during enforced sub-culturing. These differences are influential in screening appropriate isolates for mass production as a keystone of a biocontrol program. To determine the effects of these changes on isolate traits, virulence, germination rate and spore-bound Pr1 activity were assessed in parallel with sub-culturing. The results clearly revealed that, in parallel with sub-culturing and in correlation with karyotypic changes, isolates suffered significant deficits in virulence, germination rate and spore-bound Pr1 activity. Copyright © 2017 Elsevier Inc. All rights reserved.
No spreading across the southern Juan de Fuca ridge axial cleft during 1994-1996
Chadwell, C.D.; Hildebrand, J.A.; Spiess, Fred N.; Morton, J.L.; Normark, W.R.; Reiss, C.A.
1999-01-01
Direct-path acoustic measurements between seafloor transponders observed no significant extension (-10 ± 14 mm/yr) from August 1994 to September 1996 at the southern Juan de Fuca Ridge (44°40' N and 130°20' W). The acoustic path for the measurement is a 691-m baseline straddling the axial cleft, which bounds the Pacific and Juan de Fuca plates. Given an expected full-spreading rate of 56 mm/yr, these data suggest that extension across this plate boundary occurs episodically within the narrow (~1 km) region of the axial valley floor, and that active deformation is occurring between the axial cleft and the plate interior. A cleft-parallel 714-m baseline located 300 m to the west of the cleft on the Pacific plate monitored system performance and, as expected, observed no motion (+5 ± 7 mm/yr) between the 1994 and 1996 surveys.
ERIC Educational Resources Information Center
Han, Hyojung; Rojewski, Jay W.
2015-01-01
A Korean national database, the High School Graduates Occupational Mobility Survey, was used to examine the influence of perceived social supports (family and school) and career adaptability on the subsequent job satisfaction of work-bound adolescents 4 months after their transition from high school to work. Structural equation modeling analysis…
Integrability and chemical potential in the (3 + 1)-dimensional Skyrme model
NASA Astrophysics Data System (ADS)
Alvarez, P. D.; Canfora, F.; Dimakis, N.; Paliathanasis, A.
2017-10-01
Using a remarkable mapping from the original (3 + 1)-dimensional Skyrme model to the Sine-Gordon model, we construct the first analytic examples of Skyrmions as well as of Skyrmion-anti-Skyrmion bound states within a finite box in (3 + 1)-dimensional flat space-time. An analytic upper bound on the number of these Skyrmion-anti-Skyrmion bound states is derived. We compute the critical isospin chemical potential beyond which these Skyrmions cease to exist. With these tools, we also construct topologically protected time-crystals: time-periodic configurations whose time-dependence is protected by their non-trivial winding number. These are striking realizations of the ideas of Shapere and Wilczek. The critical isospin chemical potential for these time-crystals is determined.
F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming
NASA Technical Reports Server (NTRS)
DiNucci, David C.; Saini, Subhash (Technical Monitor)
1998-01-01
Parallel programming is still being based upon antiquated sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called Soviets which utilizes a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level graphical architecture-independent parallel language, called Software Cabling, is described, with many of the features normally expected from today's computer languages (e.g. data abstraction, data parallelism, and object-based programming constructs).
NASA Astrophysics Data System (ADS)
Janecke, S. U.; Markowski, D.
2015-12-01
The overdue earthquake on the Coachella section, San Andreas fault (SAF), the model ShakeOut earthquake, and the conflict between cross-fault models involving the Extra fault array and mapped shortening in the Durmid Hill area motivate new analyses at the southern SAF tip. Geologic mapping, LiDAR, seismic reflection, magnetic and gravity datasets, and aerial photography confirm the existence of the East Shoreline strand (ESS) of the SAF southwest of the main trace of the SAF. We mapped the 15-km-long ESS in a band along the northeast side of the Salton Sea. Other data suggest that the ESS continues north to the latitude of the Mecca Hills and is >35 km long. The ESS cuts and folds upper Holocene beds and appears to creep, based on the discovery of large NW-striking cracks in modern beach deposits. The two traces of the SAF are parallel and ~0.5 to ~2.5 km apart. Groups of east-, SE-, and ENE-striking strike-slip cross-faults connect the master dextral faults of the SAF. There are few sinistral-normal faults that could be part of the Extra fault array. The 1-km-wide ESS contains short, discontinuous traces of NW-striking dextral-oblique faults. These en-echelon faults bound steeply dipping Pleistocene beds, cut out section, parallel tight NW-trending folds, and produced growth folds. Beds commonly dip toward the ESS on both sides, in accord with persistent NE-SW shortening across the ESS. The dispersed fault-fold structural style of the ESS is due to decollements in faulted mud-rich Pliocene to Holocene sediment and ramps and flats along the strike-slip faults. A sheared ladder-like geometric model of the two master dextral strands of the SAF and their intervening cross-faults best explains the field relationships and geophysical datasets. Contraction across >40 km2 of the southernmost SAF zone in the Durmid Hills suggests that interaction of active structures in the SAF zone may inhibit the nucleation of large earthquakes in this region. The ESS may cross the northern Coachella Valley to join the blind Palm Spring dextral fault, a source of microearthquakes and differential subsidence. The ESS may also continue north parallel to the margin of the Salton Trough, or it may have both a NW and a NE branch. The risk of a future large earthquake directly beneath the greater Palm Springs metropolitan area may be larger if the first or last options are correct.
NASA Astrophysics Data System (ADS)
Usher, P. D.
1997-12-01
William Shakespeare's Hamlet has much evidence to suggest that the Bard was aware of the cosmological models of his time, specifically the geocentric bounded Ptolemaic and Tychonic models, and the infinite Diggesian. Moreover, Shakespeare describes how the Ptolemaic model is to be transformed to the Diggesian. Hamlet's "transformation" is the reason that Claudius, who personifies the Ptolemaic model, summons Rosencrantz and Guildenstern, who personify the Tychonic. Pantometria, written by Leonard Digges and his son Thomas in 1571, contains the first technical use of the word "transformation." At age thirty, Thomas Digges went on to propose his Perfit Description, as alluded to in Act Five where Hamlet's age is given as thirty. In Act Five as well, the words "bore" and "arms" refer to Thomas' vocation as muster-master and his scientific interest in ballistics. England's leading astronomer was also the father of the poet whose encomium introduced the First Folio of 1623. His oldest child Dudley became a member of the Virginia Company and facilitated the writing of The Tempest. Taken as a whole, such manifold connections to Thomas Digges support Hotson's contention that Shakespeare knew the Digges family. Rosencrantz and Guildenstern in Hamlet bear Danish names because they personify the Danish model, while the king's name is latinized like that of Claudius Ptolemaeus. The reason Shakespeare anglicized "Amleth" to "Hamlet" was because he saw a parallel between Book Three of Saxo Grammaticus and the eventual triumph of the Diggesian model. But Shakespeare eschewed Book Four, creating this particular ending from an infinity of other possibilities because it "suited his purpose," viz. to celebrate the concept of a boundless universe of stars like the Sun.