NASA Astrophysics Data System (ADS)
Nurhasanah, F.; Kusumah, Y. S.; Sabandar, J.; Suryadi, D.
2018-05-01
As a non-conventional mathematics concept, Parallel Coordinates has the potential to give pre-service mathematics teachers experience in constructing richer schemes and carrying out abstraction processes; unfortunately, studies of this issue remain limited. This study addresses the research question: to what extent does the abstraction process of pre-service mathematics teachers in learning the concept of Parallel Coordinates indicate their performance in learning Analytic Geometry? It is a case study, part of a larger investigation of the mathematical abstraction of pre-service mathematics teachers learning a non-conventional mathematics concept. Descriptive statistics were used to analyze the scores from three different tests: Cartesian Coordinates, Parallel Coordinates, and Analytic Geometry. The participants were 45 pre-service mathematics teachers. The results show a linear association between the Cartesian Coordinates and Parallel Coordinates scores, and that higher levels of the abstraction process in learning Parallel Coordinates are linearly associated with higher achievement in Analytic Geometry. These results indicate that the concept of Parallel Coordinates plays a significant role for pre-service mathematics teachers in learning Analytic Geometry.
Young, Brian; King, Jonathan L; Budowle, Bruce; Armogida, Luigi
2017-01-01
Amplicon (targeted) sequencing by massively parallel sequencing (PCR-MPS) is a potential method for use in forensic DNA analyses. In this application, PCR-MPS may supplement or replace other instrumental analysis methods such as capillary electrophoresis and Sanger sequencing for STR and mitochondrial DNA typing, respectively. PCR-MPS also may enable the expansion of forensic DNA analysis methods to include new marker systems such as single nucleotide polymorphisms (SNPs) and insertion/deletions (indels) that currently are assayable using various instrumental analysis methods including microarray and quantitative PCR. Acceptance of PCR-MPS as a forensic method will depend in part upon developing protocols and criteria that define the limitations of a method, including a defensible analytical threshold or method detection limit. This paper describes an approach to establish objective analytical thresholds suitable for multiplexed PCR-MPS methods. A definition is proposed for PCR-MPS method background noise, and an analytical threshold based on background noise is described. PMID:28542338
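As a minimal sketch of the kind of noise-based threshold the abstract describes, the snippet below applies a mean-plus-k-standard-deviations rule to per-locus background read counts. The rule, the coverage factor k, and the counts are illustrative assumptions, not the paper's exact definition.

```python
# Hypothetical illustration: analytical threshold computed as
# mean + k*SD of background-noise read counts from control samples.
import statistics

def analytical_threshold(noise_counts, k=3.0):
    """Reads at or below this count are treated as background noise."""
    mu = statistics.mean(noise_counts)
    sd = statistics.stdev(noise_counts)
    return mu + k * sd

noise_reads = [4, 7, 3, 6, 5, 8, 2]   # invented noise read counts
print(round(analytical_threshold(noise_reads), 1))
```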
Parallel solution of sparse one-dimensional dynamic programming problems
NASA Technical Reports Server (NTRS)
Nicol, David M.
1989-01-01
Parallel computation offers the potential for quickly solving large computational problems. However, it is often a non-trivial task to use parallel computers effectively. Solution methods must sometimes be reformulated to exploit parallelism, and the reformulations are often more complex than their slower serial counterparts. We illustrate these points by studying the parallelization of sparse one-dimensional dynamic programming problems, which do not obviously admit substantial parallelization. We propose a new method for parallelizing such problems, develop analytic models which help us to identify problems which parallelize well, and compare the performance of our algorithm with existing algorithms on a multiprocessor.
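To make the setting concrete, here is a generic one-dimensional dynamic programming recurrence with the per-stage minimization over all predecessors evaluated in one data-parallel sweep; this sketches the kind of problem being parallelized, not Nicol's algorithm, and the cost matrix is a random placeholder.

```python
# Generic 1-D DP: f[j] = min_{i<j} (f[i] + c[i, j]); the inner minimization
# is evaluated data-parallel over all candidate predecessors at once.
import numpy as np

rng = np.random.default_rng(0)
n = 200
c = rng.random((n, n))          # placeholder transition costs

f = np.full(n, np.inf)
f[0] = 0.0
for j in range(1, n):
    f[j] = np.min(f[:j] + c[:j, j])
print(f[-1])
```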
Jones, Barry R; Schultz, Gary A; Eckstein, James A; Ackermann, Bradley L
2012-10-01
Quantitation of biomarkers by LC-MS/MS is complicated by the presence of endogenous analytes. This challenge is most commonly overcome by calibration using an authentic standard spiked into a surrogate matrix devoid of the target analyte. A second approach involves use of a stable-isotope-labeled standard as a surrogate analyte to allow calibration in the actual biological matrix. For both methods, parallelism between calibration standards and the target analyte in biological matrix must be demonstrated in order to ensure accurate quantitation. In this communication, the surrogate matrix and surrogate analyte approaches are compared for the analysis of five amino acids in human plasma: alanine, valine, methionine, leucine and isoleucine. In addition, methodology based on standard addition is introduced, which enables a robust examination of parallelism in both surrogate analyte and surrogate matrix methods prior to formal validation. Results from additional assays are presented to introduce the standard-addition methodology and to highlight the strengths and weaknesses of each approach. For the analysis of amino acids in human plasma, comparable precision and accuracy were obtained by the surrogate matrix and surrogate analyte methods. Both assays were well within tolerances prescribed by regulatory guidance for validation of xenobiotic assays. When stable-isotope-labeled standards are readily available, the surrogate analyte approach allows for facile method development. By comparison, the surrogate matrix method requires greater up-front method development; however, this deficit is offset by the long-term advantage of simplified sample analysis.
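A worked sketch of the standard-addition idea referenced above, with invented numbers: known amounts of analyte are spiked into the sample, a line is fit to response versus added concentration, and the endogenous concentration is read from the magnitude of the x-intercept.

```python
# Method of standard additions on illustrative data.
import numpy as np

added = np.array([0.0, 5.0, 10.0, 20.0])     # spiked concentration (uM)
response = np.array([2.1, 3.6, 5.2, 8.3])    # instrument response (a.u.)

slope, intercept = np.polyfit(added, response, 1)
endogenous = intercept / slope               # magnitude of the x-intercept
print(f"estimated endogenous concentration: {endogenous:.2f} uM")

# Parallelism check: the slope obtained by standard addition in plasma
# should agree, within a preset tolerance, with the slope of the
# surrogate-matrix (or surrogate-analyte) calibration line.
```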
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pruitt, Spencer R.; Nakata, Hiroya; Nagata, Takeshi
2016-04-12
The analytic first derivative with respect to nuclear coordinates is formulated and implemented in the framework of the three-body fragment molecular orbital (FMO) method. The gradient has been derived and implemented for restricted Hartree-Fock, second-order Møller-Plesset perturbation, and density functional theories. The importance of the three-body fully analytic gradient is illustrated through the failure of the two-body FMO method during molecular dynamics simulations of a small water cluster. The parallel implementation of the fragment molecular orbital method, its parallel efficiency, and its scalability on the Blue Gene/Q architecture up to 262,144 CPU cores are also discussed.
NASA Astrophysics Data System (ADS)
Borazjani, Iman; Asgharzadeh, Hafez
2015-11-01
Flow simulations involving complex geometries and moving boundaries suffer from time-step size restrictions and low convergence rates with explicit and semi-implicit schemes. Implicit schemes can be used to overcome these restrictions; however, implementing an implicit solver for nonlinear equations such as the Navier-Stokes equations is not straightforward. Newton-Krylov subspace methods (NKMs) are among the most advanced iterative methods for solving non-linear equations such as the implicit discretization of the Navier-Stokes equations. The efficiency of NKMs depends heavily on the Jacobian formation method: automatic differentiation is very expensive, and matrix-free methods slow down as the mesh is refined. An analytical Jacobian is inexpensive, but deriving one for the Navier-Stokes equations on a staggered grid is challenging. An NKM with a novel analytical Jacobian was developed and validated against the Taylor-Green vortex and pulsatile flow in a 90 degree bend. The developed method successfully handled complex geometries, such as an intracranial aneurysm with multiple overset grids and immersed boundaries. It is shown that the NKM with an analytical Jacobian is 3 to 25 times faster than the fixed-point implicit Runge-Kutta method, and more than 100 times faster than automatic differentiation, depending on the grid (size) and the flow problem. The developed methods are fully parallelized, with a parallel efficiency of 80-90% on the problems tested.
Spatial data analytics on heterogeneous multi- and many-core parallel architectures using python
Laura, Jason R.; Rey, Sergio J.
2017-01-01
Parallel vector spatial analysis concerns the application of parallel computational methods to facilitate vector-based spatial analysis. The history of parallel computation in spatial analysis is reviewed, and this work is placed into the broader context of high-performance computing (HPC) and parallelization research. The rise of cyber infrastructure and its manifestation in spatial analysis as CyberGIScience is seen as a main driver of renewed interest in parallel computation in the spatial sciences. Key problems in spatial analysis that have been the focus of parallel computing are covered. Chief among these are spatial optimization problems, computational geometric problems including polygonization and spatial contiguity detection, the use of Monte Carlo Markov chain simulation in spatial statistics, and parallel implementations of spatial econometric methods. Future directions for research on parallelization in computational spatial analysis are outlined.
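As a concrete, minimal instance of parallel vector spatial analysis in Python, the sketch below fans a point-in-polygon test out over worker processes; the polygon, random points, and chunk size are illustrative, not taken from the paper.

```python
# Data-parallel point-in-polygon classification with worker processes.
from concurrent.futures import ProcessPoolExecutor
import random

POLY = [(0, 0), (4, 0), (4, 4), (0, 4)]  # a square, as (x, y) vertices

def contains(pt, poly=POLY):
    """Even-odd ray-casting point-in-polygon test."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

if __name__ == "__main__":
    pts = [(random.uniform(-1, 5), random.uniform(-1, 5)) for _ in range(10000)]
    with ProcessPoolExecutor() as ex:
        flags = list(ex.map(contains, pts, chunksize=1000))
    print(sum(flags), "points inside")
```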
Parallel computation using boundary elements in solid mechanics
NASA Technical Reports Server (NTRS)
Chien, L. S.; Sun, C. T.
1990-01-01
The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming a linear variation of displacements and tractions within a line element. Moreover, the MACSYMA symbolic program is employed to obtain analytical results for the influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. The global coefficient matrix is first formed concurrently. Then, a parallel Gaussian elimination solution scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Linear speedups and high efficiencies are shown for a demonstration problem solved on the Sequent Symmetry S81 parallel computing system.
Efficient parallelization of analytic bond-order potentials for large-scale atomistic simulations
NASA Astrophysics Data System (ADS)
Teijeiro, C.; Hammerschmidt, T.; Drautz, R.; Sutmann, G.
2016-07-01
Analytic bond-order potentials (BOPs) provide a way to compute atomistic properties with controllable accuracy. For large-scale computations of heterogeneous compounds at the atomistic level, both the computational efficiency and memory demand of BOP implementations have to be optimized. Since the evaluation of BOPs is a local operation within a finite environment, the parallelization concepts known from short-range interacting particle simulations can be applied to improve the performance of these simulations. In this work, several efficient parallelization methods for BOPs that use three-dimensional domain decomposition schemes are described. The schemes are implemented into the bond-order potential code BOPfox, and their performance is measured in a series of benchmarks. Systems of up to several millions of atoms are simulated on a high performance computing system, and parallel scaling is demonstrated for up to thousands of processors.
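A minimal sketch of the cell-based spatial decomposition that short-range schemes like this rely on: atoms are binned into cells at least as large as the interaction cutoff, so each cell plus its neighbours forms an independent work unit. The box size, cutoff, and random coordinates are illustrative, and this is not BOPfox code.

```python
# Bin atoms into cells no smaller than the cutoff; neighbour searches
# then only touch a cell and its 26 adjacent cells.
import numpy as np

rng = np.random.default_rng(1)
box, rcut = 20.0, 2.5
pos = rng.random((5000, 3)) * box

ncell = int(box // rcut)                 # cells per dimension
cell_of = (pos / (box / ncell)).astype(int).clip(0, ncell - 1)

cells = {}
for i, c in enumerate(map(tuple, cell_of)):
    cells.setdefault(c, []).append(i)

# Each cell (plus its neighbours) is an independent work unit that can be
# assigned to an MPI rank or thread in a 3-D domain decomposition.
print(len(cells), "occupied cells")
```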
Hirano, Toshiyuki; Sato, Fumitoshi
2014-07-28
We used grid-free modified Cholesky decomposition (CD) to develop a density-functional-theory (DFT)-based method for calculating the canonical molecular orbitals (CMOs) of large molecules. Our method can be used to calculate standard CMOs, analytically compute exchange-correlation terms, and maximise the capacity of next-generation supercomputers. Cholesky vectors were first analytically downscaled using low-rank pivoted CD and CD with an adaptive metric (CDAM). The obtained Cholesky vectors were distributed and stored on each node of a parallel computer, and the Coulomb, Fock exchange, and pure exchange-correlation terms were calculated by multiplying the Cholesky vectors, without evaluating molecular integrals, in the self-consistent field iterations. Our method enables the CMOs of large molecules to be calculated very efficiently with DFT on massively parallel distributed-memory computers.
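For orientation, here is the generic textbook low-rank pivoted Cholesky algorithm (not the authors' grid-free implementation), showing how a symmetric positive-definite matrix is factorized to a tolerance into a small set of Cholesky vectors.

```python
# Low-rank pivoted Cholesky: A ~= L @ L.T built column by column,
# always pivoting on the largest remaining diagonal residual.
import numpy as np

def pivoted_cholesky(A, tol=1e-8):
    n = A.shape[0]
    d = np.diag(A).astype(float).copy()   # residual diagonal
    L, piv = [], []
    while d.max() > tol and len(L) < n:
        p = int(np.argmax(d))
        piv.append(p)
        col = (A[:, p] - sum(l * l[p] for l in L)) / np.sqrt(d[p])
        L.append(col)
        d -= col ** 2
        d[d < 0] = 0.0                    # guard against round-off
    return np.array(L).T, piv

rng = np.random.default_rng(2)
B = rng.random((6, 6))
A = B @ B.T                               # SPD test matrix
L, piv = pivoted_cholesky(A)
print(np.allclose(A, L @ L.T, atol=1e-6), "rank:", L.shape[1])
```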
Vectorization and parallelization of the finite strip method for dynamic Mindlin plate problems
NASA Technical Reports Server (NTRS)
Chen, Hsin-Chu; He, Ai-Fang
1993-01-01
The finite strip method is a semi-analytical finite element process which allows for a discrete analysis of certain types of physical problems by discretizing the domain of the problem into finite strips. This method decomposes a single large problem into m smaller independent subproblems when m harmonic functions are employed, thus yielding natural parallelism at a very high level. In this paper we address vectorization and parallelization strategies for the dynamic analysis of simply-supported Mindlin plate bending problems and show how to prevent potential conflicts in memory access during the assemblage process. The vector and parallel implementations of this method and the performance results of a test problem under scalar, vector, and vector-concurrent execution modes on the Alliant FX/80 are also presented.
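A sketch of the top-level parallelism described above: with m harmonics, the discretized problem becomes m independent linear systems, solved here in separate processes. The per-harmonic stiffness matrix and load vector are random placeholders, not a plate formulation.

```python
# m independent subproblems (one per harmonic) solved in parallel;
# the per-harmonic solutions are later superposed.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def solve_harmonic(m):
    rng = np.random.default_rng(m)               # placeholder data
    K = rng.random((50, 50)) + 50 * np.eye(50)   # "stiffness" matrix
    f = rng.random(50)                           # "load" vector
    return m, np.linalg.solve(K, f)

if __name__ == "__main__":
    with ProcessPoolExecutor() as ex:
        results = dict(ex.map(solve_harmonic, range(1, 9)))
    print(sorted(results))   # harmonics solved, ready for superposition
```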
Parallel Aircraft Trajectory Optimization with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Falck, Robert D.; Gray, Justin S.; Naylor, Bret
2016-01-01
Trajectory optimization is an integral component for the design of aerospace vehicles, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables, which motivates the use of gradient-based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto based collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that in order to take full advantage of the sparsity in the problem it is vital to parallelize both the non-linear analysis evaluations and the derivative computations themselves. The constraint aggregation results showed a significant numerical challenge due to difficulty in achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation-based method to the design of aircraft where operational scheduling of technologies is key to achieving good performance.
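For orientation, a generic form of the defect constraints such implicit collocation schemes enforce (notation assumed here for illustration, not taken from the paper): with D the differentiation matrix of the interpolating polynomial at the LGL nodes of a segment spanning [t_0, t_f],

```latex
\Delta_k \;=\; \sum_{j} D_{kj}\,x_j \;-\; \frac{t_f - t_0}{2}\, f\!\left(x_k,\, u_k,\, t_k\right) \;=\; 0 .
```

The optimizer drives all defects to zero; because each segment's defects depend only on its own states and controls, the constraint Jacobian is sparse, which is the property exploited for parallelization.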
Settgast, Randolph R.; Fu, Pengcheng; Walsh, Stuart D. C.; ...
2016-09-18
This study describes a fully coupled finite element/finite volume approach for simulating field-scale hydraulically driven fractures in three dimensions, using massively parallel computing platforms. The proposed method is capable of capturing realistic representations of local heterogeneities, layering and natural fracture networks in a reservoir. A detailed description of the numerical implementation is provided, along with numerical studies comparing the model with both analytical solutions and experimental results. The results demonstrate the effectiveness of the proposed method for modeling large-scale problems involving hydraulically driven fractures in three dimensions.
Olivieri, Alejandro C
2005-08-01
Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.
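The Monte Carlo idea can be illustrated with a much simpler linear model than PARAFAC: add noise of known level to synthetic signals, refit the concentrations many times, and compare the output scatter to the input noise. The profiles, concentrations, and noise level below are invented.

```python
# Monte Carlo estimate of noise amplification for a least-squares
# predictor, in the spirit of the variance-inflation calculations above.
import numpy as np

rng = np.random.default_rng(3)
S = rng.random((40, 3))               # hypothetical pure-component profiles
c_true = np.array([1.0, 0.5, 2.0])
r0 = S @ c_true                       # noiseless mixture signal
sigma = 0.01

est = []
for _ in range(2000):
    r = r0 + rng.normal(0.0, sigma, size=r0.shape)
    est.append(np.linalg.lstsq(S, r, rcond=None)[0])
est = np.array(est)

# a sensitivity-like figure per analyte: input noise / output scatter;
# smaller values mean stronger noise inflation for that analyte
print(sigma / est.std(axis=0))
```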
Static and dynamic characteristics of parallel-grooved seals
NASA Technical Reports Server (NTRS)
Iwatsubo, Takuzo; Yang, Bo-Suk; Ibaraki, Ryuji
1987-01-01
Presented is an analytical method to determine the static and dynamic characteristics of annular parallel-grooved seals. The governing equations were derived using turbulent lubrication theory based on the law of fluid friction. Linear zeroth- and first-order perturbation equations of the governing equations were developed, and these equations were analytically investigated to obtain the reaction force of the seals. An analysis is presented that calculates the leakage flow rate, the torque loss, and the rotordynamic coefficients for parallel-grooved seals. To demonstrate this analysis, we show the effect of changing the number of stages, the land and groove widths, and the inlet swirl on the stability of boiler feed water pump seals. Generally, as the number of stages increased or the grooves became wider, the leakage flow rate and rotordynamic coefficients decreased and the torque loss increased.
An Artificial Neural Networks Method for Solving Partial Differential Equations
NASA Astrophysics Data System (ADS)
Alharbi, Abir
2010-09-01
While many analytical and numerical techniques already exist for solving PDEs, this paper introduces an approach using artificial neural networks. The approach consists of a technique developed by combining a standard numerical method, finite differences, with the Hopfield neural network. The method is denoted Hopfield-finite-difference (HFD). The architecture of the nets, the energy function, the updating equations, and the algorithms are developed for the method. The HFD method has been used successfully to approximate the solutions of classical PDEs, such as the wave, heat, Poisson, and diffusion equations, and of a system of PDEs. The software Matlab is used to obtain the results in both tabular and graphical form. The results are similar in accuracy to those obtained by standard numerical methods. In terms of speed, the parallel nature of the Hopfield net method makes it easier to implement on fast parallel computers, while some numerical methods need extra effort for parallelization.
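For contrast, the classical finite-difference core that the HFD method builds on is shown below: a Jacobi iteration for the 2-D Poisson equation with zero Dirichlet boundaries. The grid size, right-hand side, and iteration count are illustrative.

```python
# Jacobi iteration for u_xx + u_yy = f on the unit square, u = 0 on the
# boundary; each sweep updates all interior points from the old values.
import numpy as np

n = 50                      # interior points per dimension
h = 1.0 / (n + 1)           # grid spacing
f = np.ones((n, n))         # right-hand side
u = np.zeros((n + 2, n + 2))

for _ in range(5000):
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:] - h * h * f)
print(u[n // 2, n // 2])    # center value of the approximate solution
```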
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Zhenwei; Yang, Weihong, E-mail: whyang@ustc.edu.cn
Using an analytical method, exact solutions of the incompressible dissipative Hall magnetohydrodynamics (MHD) equations are derived. It is found that a phase difference may occur between the velocity and magnetic field fluctuations when the kinetic and magnetic Reynolds numbers are both very large. Since the velocity and magnetic field fluctuations are both circularly polarized, the phase difference means they are no longer parallel or anti-parallel, as they are in ideal incompressible Hall MHD.
NASA Astrophysics Data System (ADS)
Xu, Jing; Liu, Xiaofei; Wang, Yutian
2016-08-01
Parallel factor analysis is a widely used method for extracting qualitative and quantitative information about the analyte of interest from a fluorescence excitation-emission matrix containing unknown components. Large-amplitude scattering influences the results of parallel factor analysis, and many methods for eliminating scattering have been proposed, each with its advantages and disadvantages. The combination of symmetrical subtraction and interpolated values is discussed here, where combination refers both to combining results and to combining methods. Nine methods were used for comparison. The results show that the combination of results yields better concentration predictions for all the components.
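A minimal sketch of the interpolation idea applied to Rayleigh scattering: for each excitation wavelength, values of the excitation-emission matrix (EEM) inside a band around em == ex are replaced by interpolating across the band. The wavelength grids, band half-width, and random EEM are invented.

```python
# Replace the Rayleigh scattering band of an EEM with interpolated values.
import numpy as np

ex = np.arange(250, 451, 5.0)           # excitation wavelengths (nm)
em = np.arange(260, 601, 2.0)           # emission wavelengths (nm)
rng = np.random.default_rng(4)
eem = rng.random((ex.size, em.size))    # placeholder EEM with scattering

half_width = 10.0                       # nm, hypothetical scatter half-width
for i, x in enumerate(ex):
    mask = np.abs(em - x) <= half_width     # band around em == ex
    if mask.any():
        good = ~mask
        eem[i, mask] = np.interp(em[mask], em[good], eem[i, good])
```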
Yang, Tzuhsiung; Berry, John F
2018-06-04
The computation of nuclear second derivatives of energy, or the nuclear Hessian, is an essential routine in quantum chemical investigations of ground and transition states, thermodynamic calculations, and molecular vibrations. Analytic nuclear Hessian computations require the resolution of costly coupled-perturbed self-consistent field (CP-SCF) equations, while numerical differentiation of analytic first derivatives has an unfavorable 6N (N = number of atoms) prefactor. Herein, we present a new method in which grid computing is used to accelerate and/or enable the evaluation of the nuclear Hessian via numerical differentiation: NUMFREQ@Grid. Nuclear Hessians were successfully evaluated by NUMFREQ@Grid at the DFT level as well as using RIJCOSX-ZORA-MP2 or RIJCOSX-ZORA-B2PLYP for a set of linear polyacenes with systematically increasing size. For the larger members of this group, NUMFREQ@Grid was found to outperform the wall clock time of analytic Hessian evaluation; at the MP2 or B2PLYP levels, these Hessians cannot even be evaluated analytically. We also evaluated a 156-atom catalytically relevant open-shell transition metal complex and found that NUMFREQ@Grid is faster (7.7 times shorter wall clock time) and less demanding (4.4 times less memory requirement) than an analytic Hessian. Capitalizing on the capabilities of parallel grid computing, NUMFREQ@Grid can outperform analytic methods in terms of wall time, memory requirements, and treatable system size. The NUMFREQ@Grid method presented herein demonstrates how grid computing can be used to facilitate embarrassingly parallel computational procedures and paves the way for future implementations.
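A sketch of the embarrassingly parallel structure being exploited: central differences of an analytic gradient, with one displaced-geometry gradient pair per worker, followed by symmetrization. The gradient function is a stand-in for a quantum chemistry call, and the displacement size is illustrative.

```python
# Numerical Hessian from 2N gradient evaluations, distributed over workers.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def grad(x):
    # stand-in analytic gradient of f(x) = x0**2 + 3*x1**2
    return np.array([2.0 * x[0], 6.0 * x[1]])

def displaced_grad(args):
    x0, i, h = args
    e = np.zeros_like(x0)
    e[i] = h
    return i, grad(x0 + e), grad(x0 - e)

if __name__ == "__main__":
    x0, h = np.array([1.0, -2.0]), 1e-4
    n = x0.size
    H = np.zeros((n, n))
    with ProcessPoolExecutor() as ex:
        jobs = [(x0, i, h) for i in range(n)]
        for i, gp, gm in ex.map(displaced_grad, jobs):
            H[i] = (gp - gm) / (2 * h)   # row i of the Hessian
    H = 0.5 * (H + H.T)                  # enforce symmetry
    print(H)
```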
Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying
2013-12-01
Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphics-processing-unit parallel framework named the "compute unified device architecture." A series of simulation experiments is carried out to test the accuracy and the acceleration effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced, by a factor of 38.9, with a GTX 580 graphics card.
Kinematics and dynamics of robotic systems with multiple closed loops
NASA Astrophysics Data System (ADS)
Zhang, Chang-De
The kinematics and dynamics of robotic systems with multiple closed loops, such as Stewart platforms, walking machines, and hybrid manipulators, are studied. In the study of kinematics, the focus is on closed-form solutions of the forward position analysis of different parallel systems. A closed-form solution means that the solution is expressed as a polynomial in one variable; if the order of the polynomial is less than or equal to four, the solution has an analytical closed form. First, the conditions for obtaining analytical closed-form solutions are studied. For a Stewart platform, the condition is found to be that one rotational degree of freedom of the output link is decoupled from the other five. Based on this condition, a class of Stewart platforms which have analytical closed-form solutions is formulated. Conditions for analytical closed-form solutions of other parallel systems are also studied. Closed-form solutions of forward kinematics for walking machines and multi-fingered grippers are then studied. For a parallel system with three three-degree-of-freedom subchains, there are 84 possible ways to select six independent joints among nine joints. These 84 ways can be classified into three categories: Category 3:3:0, Category 3:2:1, and Category 2:2:2. It is shown that the first category has no solutions, the solutions of the second category have analytical closed form, and the solutions of the last category are higher-order polynomials. The study is then extended to a nearly general Stewart platform; the solution is a 20th order polynomial, and the Stewart platform has a maximum of 40 possible configurations. The study is also extended to a new class of hybrid manipulators consisting of two serially connected parallel mechanisms. In the study of dynamics, a computationally efficient method for the inverse dynamics of manipulators based on the virtual work principle is developed. Although this method is comparable with the recursive Newton-Euler method for serial manipulators, its advantage is more noteworthy when applied to parallel systems. An approach to the inverse dynamics of a walking machine is also developed, which includes inverse dynamic modeling, foot force distribution, and joint force/torque allocation.
Tak For Yu, Zeta; Guan, Huijiao; Ki Cheung, Mei; McHugh, Walker M.; Cornell, Timothy T.; Shanley, Thomas P.; Kurabayashi, Katsuo; Fu, Jianping
2015-01-01
Immunoassays represent one of the most popular analytical methods for detection and quantification of biomolecules. However, conventional immunoassays such as ELISA and flow cytometry, even though providing high sensitivity and specificity and multiplexing capability, can be labor-intensive and prone to human error, making them unsuitable for standardized clinical diagnoses. Using a commercialized no-wash, homogeneous immunoassay technology (‘AlphaLISA’) in conjunction with integrated microfluidics, herein we developed a microfluidic immunoassay chip capable of rapid, automated, parallel immunoassays of microliter quantities of samples. Operation of the microfluidic immunoassay chip entailed rapid mixing and conjugation of AlphaLISA components with target analytes before quantitative imaging for analyte detections in up to eight samples simultaneously. Aspects such as fluid handling and operation, surface passivation, imaging uniformity, and detection sensitivity of the microfluidic immunoassay chip using AlphaLISA were investigated. The microfluidic immunoassay chip could detect one target analyte simultaneously for up to eight samples in 45 min with a limit of detection down to 10 pg mL⁻¹. The microfluidic immunoassay chip was further utilized for functional immunophenotyping to examine cytokine secretion from human immune cells stimulated ex vivo. Together, the microfluidic immunoassay chip provides a promising high-throughput, high-content platform for rapid, automated, parallel quantitative immunosensing applications. PMID:26074253
NASA Astrophysics Data System (ADS)
Haddout, Y.; Essaghir, E.; Oubarra, A.; Lahjomri, J.
2017-12-01
Thermally developing laminar slip flow through a micropipe and a parallel plate microchannel, with axial heat conduction and uniform wall heat flux, is studied analytically by using a powerful method of self-adjoint formalism. This method results from a decomposition of the elliptic energy equation into a system of two first-order partial differential equations. The advantage of this method over other methods resides in the fact that the decomposition procedure leads to a self-adjoint problem, although the initial problem is apparently not a self-adjoint one. The solution extends prior studies and considers first-order slip-model boundary conditions at the fluid-wall interface. Analytical expressions for the developing temperature and local Nusselt number in the thermal entrance region are obtained in the general case; the solution could therefore be extended easily to any hydrodynamically developed flow and arbitrary heat flux distribution. The analytical results are compared, for selected simplified cases, with available numerical calculations, and the two agree. The results show that the heat transfer characteristics of flow in the thermal entrance region are strongly influenced by the axial heat conduction and rarefaction effects, which are characterized by the Péclet and Knudsen numbers, respectively.
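For reference, a commonly used statement of the first-order slip boundary condition mentioned above (a generic textbook form, assumed here for illustration; σ_v is the tangential momentum accommodation coefficient and λ the molecular mean free path):

```latex
u_s - u_w \;=\; \frac{2 - \sigma_v}{\sigma_v}\,\lambda\,
\left.\frac{\partial u}{\partial n}\right|_{\mathrm{wall}},
\qquad \mathrm{Kn} \;=\; \frac{\lambda}{D_h} .
```

Rarefaction then enters the temperature field and the Nusselt number through the Knudsen number, consistent with the trends reported above.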
Parallel Discrete Molecular Dynamics Simulation With Speculation and In-Order Commitment
Khan, Md. Ashfaquzzaman; Herbordt, Martin C.
2011-01-01
Discrete molecular dynamics simulation (DMD) uses simplified and discretized models enabling simulations to advance by event rather than by timestep. DMD is an instance of discrete event simulation and so is difficult to scale: even in this multi-core era, all reported DMD codes are serial. In this paper we discuss the inherent difficulties of scaling DMD and present our method of parallelizing DMD through event-based decomposition. Our method is microarchitecture inspired: speculative processing of events exposes parallelism, while in-order commitment ensures correctness. We analyze the potential of this parallelization method for shared-memory multiprocessors. Achieving scalability required extensive experimentation with scheduling and synchronization methods to mitigate serialization. The speed-up achieved for a variety of system sizes and complexities is nearly 6× on an 8-core and over 9× on a 12-core processor. We present and verify analytical models that account for the achieved performance as a function of available concurrency and architectural limitations. PMID:21822327
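The serial core that makes DMD hard to scale looks like the following event loop; the events are invented, and the speculative scheme described above exists precisely to process several queue entries at once while still committing them in this strict time order.

```python
# Serial discrete event simulation skeleton: a priority queue of
# (time, event) pairs processed strictly in time order.
import heapq

events = [(0.7, "collision A-B"),
          (0.2, "cell crossing A"),
          (1.1, "collision B-C")]
heapq.heapify(events)

while events:
    t, what = heapq.heappop(events)   # earliest event commits next
    # Processing an event may invalidate and reschedule later events;
    # this is exactly the dependency a speculative, in-order-commit
    # parallelization must detect and respect.
    print(f"t={t:.2f}: {what}")
```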
Comparison between four dissimilar solar panel configurations
NASA Astrophysics Data System (ADS)
Suleiman, K.; Ali, U. A.; Yusuf, Ibrahim; Koko, A. D.; Bala, S. I.
2017-12-01
Several studies of photovoltaic systems have focused on how they operate and the energy required to operate them; little attention has been paid to their configurations, to modeling mean time to system failure, availability, and cost benefit, or to comparisons of parallel and series-parallel designs. In this research work, four system configurations were studied. Configuration I consists of two sub-components arranged in parallel with 24 V each; configuration II consists of four sub-components arranged logically in parallel with 12 V each; configuration III consists of four sub-components arranged in series-parallel with 8 V each; and configuration IV has six sub-components with 6 V each arranged in series-parallel. Comparative analysis was carried out using the Chapman-Kolmogorov method. Explicit expressions for the mean time to system failure, the steady-state availability, and the cost benefit were derived, and the configurations were compared on this basis. A ranking method was used to determine the optimal configuration. The analytical and numerical solutions for system availability and mean time to system failure were determined, and configuration I was found to be the optimal configuration.
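A worked sketch of how such configurations compare, under assumptions made here for illustration only: independent sub-components with identical failure and repair rates, and configurations III and IV taken as two parallel branches of two and three series units, respectively. The rates are invented, not the paper's data.

```python
# Steady-state availability of the four assumed configurations.
lam, mu = 0.01, 0.5            # assumed failure and repair rates (per hour)
a = mu / (lam + mu)            # availability of one sub-component

A = {
    "I":   1 - (1 - a) ** 2,        # two units in parallel
    "II":  1 - (1 - a) ** 4,        # four units in parallel
    "III": 1 - (1 - a ** 2) ** 2,   # two series pairs in parallel
    "IV":  1 - (1 - a ** 3) ** 2,   # two series triples in parallel
}
for name, avail in A.items():
    print(f"config {name}: availability = {avail:.6f}")
```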
NASA Astrophysics Data System (ADS)
Avitabile, Daniele; Bridges, Thomas J.
2010-06-01
Numerical integration of complex linear systems of ODEs depending analytically on an eigenvalue parameter are considered. Complex orthogonalization, which is required to stabilize the numerical integration, results in non-analytic systems. It is shown that properties of eigenvalues are still efficiently recoverable by extracting information from a non-analytic characteristic function. The orthonormal systems are constructed using the geometry of Stiefel bundles. Different forms of continuous orthogonalization in the literature are shown to correspond to different choices of connection one-form on the Stiefel bundle. For the numerical integration, Gauss-Legendre Runge-Kutta algorithms are the principal choice for preserving orthogonality, and performance results are shown for a range of GLRK methods. The theory and methods are tested by application to example boundary value problems including the Orr-Sommerfeld equation in hydrodynamic stability.
ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION
Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey
2013-01-01
MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, the inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which utilizes the smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not suited to model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053
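In generic compressed-sensing notation (an assumed form for illustration, not the paper's exact formulation), such a reconstruction solves

```latex
\hat{x} \;=\; \arg\min_{x}\; \|E\,x - y\|_2^2 \;+\; \lambda\,\|\Phi_p\, x\|_1 ,
```

where E is the undersampled, coil-weighted encoding operator, y the acquired multi-coil data, and Φ_p a sparsifying transform acting along the parametric dimension, which encodes the smoothness of the signal evolution.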
Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A
2008-10-01
Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques.
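A minimal sketch of the pseudo multiple replica idea for a linear reconstruction: add synthetic noise replicas to the acquired k-space, push each replica through the reconstruction, and take the pixelwise standard deviation as the noise map. A plain single-coil inverse FFT stands in for the SENSE/GRAPPA reconstruction here, and all sizes and noise levels are illustrative.

```python
# Pseudo multiple replica noise estimation with an IFFT "reconstruction".
import numpy as np

rng = np.random.default_rng(5)
ny, nx, nrep, sigma = 32, 32, 256, 0.5
kspace = np.zeros((ny, nx), complex)
kspace[0, 0] = ny * nx                 # k-space of a uniform unit image

stack = []
for _ in range(nrep):
    noise = sigma * (rng.standard_normal((ny, nx))
                     + 1j * rng.standard_normal((ny, nx)))
    stack.append(np.fft.ifft2(kspace + noise))
noise_map = np.real(np.array(stack)).std(axis=0)   # pixelwise noise

img = np.real(np.fft.ifft2(kspace))
snr_map = img / np.maximum(noise_map, 1e-12)
print(f"mean SNR: {snr_map.mean():.1f}")
# Dividing the SNR of the fully sampled recon by that of an accelerated
# recon (times sqrt(R)) would give the g-factor map.
```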
A high-performance spatial database based approach for pathology imaging algorithm evaluation
Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A.D.; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J.; Saltz, Joel H.
2013-01-01
Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data were formatted based on the PAIS data model and loaded into a spatial database. To support efficient data loading, we have implemented a parallel data loading tool that takes advantage of multi-core CPUs to accelerate data injection. The spatial database manages both geometric shapes and image features or classifications, and enables spatial sampling, result comparison, and result aggregation through expressive structured query language (SQL) queries with spatial extensions. To provide scalable and efficient query support, we have employed a shared-nothing parallel database architecture, which distributes data homogeneously across multiple database partitions to take advantage of parallel computation power and implements spatial indexing to achieve high I/O throughput. Results: Our work proposes a high-performance, parallel spatial database platform for algorithm validation and comparison. This platform was evaluated by storing, managing, and comparing analysis results from a set of brain tumor whole slide images. The tools we develop are open source and available to download. Conclusions: Pathology image algorithm validation and comparison are essential to iterative algorithm development and refinement. One critical component is the support for queries involving spatial predicates and comparisons. In our work, we develop an efficient data model and parallel database approach to model, normalize, manage and query large volumes of analytical image result data.
Our experiments demonstrate that the data partitioning strategy and the grid-based indexing result in good data distribution across database nodes and reduce I/O overhead in spatial join queries through parallel retrieval of relevant data and quick subsetting of datasets. The set of tools in the framework provide a full pipeline to normalize, load, manage and query analytical results for algorithm evaluation. PMID:23599905
Aquilante, Francesco; Autschbach, Jochen; Carlson, Rebecca K; Chibotaru, Liviu F; Delcey, Mickaël G; De Vico, Luca; Fdez Galván, Ignacio; Ferré, Nicolas; Frutos, Luis Manuel; Gagliardi, Laura; Garavelli, Marco; Giussani, Angelo; Hoyer, Chad E; Li Manni, Giovanni; Lischka, Hans; Ma, Dongxia; Malmqvist, Per Åke; Müller, Thomas; Nenov, Artur; Olivucci, Massimo; Pedersen, Thomas Bondo; Peng, Daoling; Plasser, Felix; Pritchard, Ben; Reiher, Markus; Rivalta, Ivan; Schapiro, Igor; Segarra-Martí, Javier; Stenrup, Michael; Truhlar, Donald G; Ungur, Liviu; Valentini, Alessio; Vancoillie, Steven; Veryazov, Valera; Vysotskiy, Victor P; Weingart, Oliver; Zapata, Felipe; Lindh, Roland
2016-02-15
In this report, we summarize and describe the recent unique updates and additions to the Molcas quantum chemistry program suite as contained in release version 8. These updates include natural and spin orbitals for studies of magnetic properties, local and linear scaling methods for the Douglas-Kroll-Hess transformation, the generalized active space concept in MCSCF methods, a combination of multiconfigurational wave functions with density functional theory in the MC-PDFT method, additional methods for computation of magnetic properties, methods for diabatization, analytical gradients of state average complete active space SCF in association with density fitting, methods for constrained fragment optimization, large-scale parallel multireference configuration interaction including analytic gradients via the interface to the Columbus package, and approximations of the CASPT2 method to be used for computations of large systems. In addition, the report includes the description of a computational machinery for nonlinear optical spectroscopy through an interface to the QM/MM package Cobramm. Further, a module to run molecular dynamics simulations is added, two surface hopping algorithms are included to enable nonadiabatic calculations, and the DQ method for diabatization is added. Finally, we report on improvements with respect to alternative file options and parallelization.
Probabilistic structural mechanics research for parallel processing computers
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.
1991-01-01
Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods was hampered by their computationally intense nature. Solution of PSM problems requires repeated analyses of structures that are often large, and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make solution of large scale PSM problems practical.
A Novel Crosstalk Suppression Method of the 2-D Networked Resistive Sensor Array
Wu, Jianfeng; Wang, Lei; Li, Jianqing; Song, Aiguo
2014-01-01
The 2-D resistive sensor array in the row–column fashion suffers from crosstalk caused by parasitic parallel paths. First, we propose an Improved Isolated Drive Feedback Circuit with Compensation (IIDFCC), based on the voltage feedback method, to suppress the crosstalk. In this method, a compensating resistor is used specifically to reduce the crosstalk caused by the column multiplexer resistances and the adjacent row elements. A mathematical expression for the equivalent resistance of the element being tested (EBT) in this circuit is then derived analytically and verified by circuit simulations. The simulation results show that the measurement method can greatly reduce the influence on the EBT of the parasitic parallel paths formed by the multiplexers' channel resistances and the adjacent elements. PMID:25046011
Asgharzadeh, Hafez; Borazjani, Iman
2017-02-15
The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restrictions and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve the unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against the Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation, depending on the grid (size) and the flow problem. In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than that of the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized, with a parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future.
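For readers unfamiliar with the solver family, the snippet below shows minimal Newton-Krylov usage on a small nonlinear boundary value problem via SciPy; it is a generic stand-in for the method class discussed above, not the curvilinear/immersed-boundary implementation itself.

```python
# Newton-Krylov solve of the discrete BVP u'' = u**3 on [0, 1],
# with u(0) = 1 and u(1) = 0.
import numpy as np
from scipy.optimize import newton_krylov

N = 65
h = 1.0 / (N - 1)

def residual(u):
    r = np.empty_like(u)
    r[0] = u[0] - 1.0                 # left Dirichlet condition
    r[-1] = u[-1]                     # right Dirichlet condition
    r[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2] - h * h * u[1:-1] ** 3
    return r

u0 = np.linspace(1.0, 0.0, N)         # initial guess: straight line
sol = newton_krylov(residual, u0, f_tol=1e-9)
print(abs(residual(sol)).max())       # residual norm at the solution
```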
Three numerical algorithms were compared to provide a solution of the radiative transfer equation (RTE) for plane albedo (hemispherical reflectance) in a semi-infinite one-dimensional plane-parallel layer. Algorithms were based on the invariant imbedding method and two different var...
Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R.; Stewart, Walter F.; Malin, Bradley; Sun, Jimeng
2014-01-01
Objective Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: 1) cohort construction, 2) feature construction, 3) cross-validation, 4) feature selection, and 5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. Methods To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which 1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, 2) schedules the tasks in a topological ordering of the graph, and 3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. Results We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and from an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000-patient data set in 3 hours in parallel, compared to 9 days if running sequentially. Conclusion This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines that are specialized for health data researchers. PMID:24370496
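The scheduling core described above (build a dependency graph of pipeline tasks, order it topologically, and run independent tasks concurrently) is compact enough to sketch. The sketch below uses Python's standard library rather than PARAMO's Map-Reduce machinery, and the task names are hypothetical stand-ins for the five pipeline stages:

```python
# Minimal sketch of the dependency-graph scheduling idea (not the PARAMO
# implementation): topologically order the task graph and run every task
# whose prerequisites are finished in parallel. Task names are placeholders.
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

# task -> set of prerequisite tasks
graph = {
    "cohort": set(),
    "features": {"cohort"},
    "cv_split": {"features"},
    "feature_select": {"cv_split"},
    "classify": {"feature_select"},
}

def run(task):
    print(f"running {task}")   # stand-in for real work (e.g., a Map-Reduce job)
    return task

sorter = TopologicalSorter(graph)
sorter.prepare()
with ThreadPoolExecutor(max_workers=4) as pool:
    while sorter.is_active():
        ready = sorter.get_ready()           # tasks whose dependencies are done
        for finished in pool.map(run, ready):
            sorter.done(finished)            # unlock downstream tasks
```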
Dynamic analysis and control of lightweight manipulators with flexible parallel link mechanisms
NASA Technical Reports Server (NTRS)
Lee, Jeh Won
1991-01-01
The flexible parallel link mechanism is designed for increased rigidity to resist buckling when the manipulator carries a heavy payload. Compared with a one-link flexible manipulator, a two-link flexible manipulator, especially one with a flexible parallel mechanism, has more complicated dynamics and control characteristics. The objective of this research is the theoretical analysis and experimental verification of the dynamics and control of a two-link flexible manipulator with a flexible parallel link mechanism. Nonlinear equations of motion of the lightweight manipulator are derived by the Lagrangian method in symbolic form to better understand the structure of the dynamic model. A manipulator with a flexible parallel link mechanism is a constrained dynamic system whose equations are sensitive to numerical integration error. This constrained system is solved using singular value decomposition of the constraint Jacobian matrix. The discrepancies between the analytical model and the experiment are explained using a simplified and a detailed finite element model. The step responses of the analytical model and the TREETOPS model match each other well. The nonlinear dynamics is studied using a sinusoidal excitation. The effect of actuator dynamics on the flexible robot was also investigated; the effects are explained theoretically and experimentally by root loci and Bode plots. As a performance baseline for advanced control schemes, a simple decoupled feedback scheme is applied.
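The SVD treatment of the constraint Jacobian mentioned above can be illustrated in a few lines. The sketch below is a toy example under assumptions of my own choosing (a constant constraint Jacobian and arbitrary mass matrix and forces), not the thesis's manipulator model:

```python
# Sketch of the constrained-dynamics idea: use the SVD of the constraint
# Jacobian G (from G(q) qdot = 0) to build a null-space basis, then project
# the unconstrained dynamics onto it. Matrices here are illustrative only.
import numpy as np

def nullspace_basis(G, tol=1e-10):
    """Orthonormal basis N with G @ N = 0, obtained via SVD."""
    U, s, Vt = np.linalg.svd(G)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T        # rows of Vt beyond the rank span the null space

# toy system: 3 generalized coordinates, 1 holonomic constraint
G = np.array([[1.0, -1.0, 0.0]])        # constraint Jacobian
M = np.diag([2.0, 1.0, 0.5])            # mass matrix
f = np.array([0.0, -9.8, 1.0])          # generalized forces

N = nullspace_basis(G)
# reduced, constraint-consistent accelerations: (N^T M N) z'' = N^T f
z_dd = np.linalg.solve(N.T @ M @ N, N.T @ f)
q_dd = N @ z_dd                         # accelerations satisfying G q'' = 0
print(q_dd)
```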
A Comparison of Lifting-Line and CFD Methods with Flight Test Data from a Research Puma Helicopter
NASA Technical Reports Server (NTRS)
Bousman, William G.; Young, Colin; Toulmay, Francois; Gilbert, Neil E.; Strawn, Roger C.; Miller, Judith V.; Maier, Thomas H.; Costes, Michel; Beaumier, Philippe
1996-01-01
Four lifting-line methods were compared with flight test data from a research Puma helicopter, and their accuracy was assessed over a wide range of flight speeds. Hybrid Computational Fluid Dynamics (CFD) methods were also examined for two high-speed conditions. A parallel analytical effort was performed with the lifting-line methods to assess the effects of modeling assumptions, and this provided insight into the adequacy of these methods for load predictions.
Dependability analysis of parallel systems using a simulation-based approach. M.S. Thesis
NASA Technical Reports Server (NTRS)
Sawyer, Darren Charles
1994-01-01
The analysis of dependability in large, complex, parallel systems executing real applications or workloads is examined in this thesis. To demonstrate the wide range of dependability problems that can be analyzed through simulation, three case studies are presented. For each case, the organization of the simulation model used is outlined, and the results from simulated fault injection experiments are explained, showing the usefulness of this method in dependability modeling of large parallel systems. The simulation models are constructed using DEPEND and C++. Where possible, methods to increase dependability are derived from the experimental results. Another interesting facet of all three cases is the presence of some kind of workload or application executing in the simulation while faults are injected. This provides a completely new dimension to this type of study, not possible to model accurately with analytical approaches.
Data analytics and parallel-coordinate materials property charts
NASA Astrophysics Data System (ADS)
Rickman, Jeffrey M.
2018-01-01
It is often advantageous to display materials property relationships in the form of charts that highlight important correlations and thereby enhance our understanding of materials behavior and facilitate materials selection. Unfortunately, in many cases these correlations are highly multidimensional in nature, and one typically employs low-dimensional cross-sections of the property space to convey some aspects of these relationships. To overcome some of these difficulties, in this work we employ methods of data analytics in conjunction with a visualization strategy, known as parallel coordinates, to better represent multidimensional materials data and to extract useful relationships among properties. We illustrate the utility of this approach by the construction and systematic analysis of multidimensional materials property charts for metallic and ceramic systems. These charts simplify the description of high-dimensional geometry, enable dimensional reduction and the identification of significant property correlations, and underline distinctions among different materials classes.
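For readers unfamiliar with the visualization itself, a minimal parallel-coordinate property chart can be drawn with standard tooling. The property values below are invented for illustration, and a real chart would first normalize each axis:

```python
# A toy parallel-coordinate materials chart: each vertical axis is one
# property, each polyline is one material (hypothetical values).
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

df = pd.DataFrame({
    "class":     ["metal", "metal", "ceramic", "ceramic"],
    "density":   [7.8, 2.7, 3.9, 2.5],     # g/cm^3
    "modulus":   [200, 70, 380, 70],       # GPa
    "toughness": [50, 25, 4, 0.8],         # MPa*m^0.5
})

parallel_coordinates(df, "class", colormap="viridis")
plt.ylabel("property value (mixed units; normalize in practice)")
plt.show()
```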
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gittens, Alex; Devarakonda, Aditya; Racah, Evan
We explore the trade-offs of performing linear algebra using Apache Spark, compared to traditional C and MPI implementations on HPC platforms. Spark is designed for data analytics on cluster computing platforms with access to local disks and is optimized for data-parallel tasks. We examine three widely-used and important matrix factorizations: NMF (for physical plausibility), PCA (for its ubiquity) and CX (for data interpretability). We apply these methods to 1.6TB particle physics, 2.2TB and 16TB climate modeling and 1.1TB bioimaging data. The data matrices are tall-and-skinny, which enables the algorithms to map conveniently onto Spark's data-parallel model. We perform scaling experiments on up to 1600 Cray XC40 nodes, describe the sources of slowdowns, and provide tuning guidance to obtain high performance.
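As a rough illustration of how a tall-and-skinny factorization maps onto Spark's data-parallel model, the hedged sketch below computes a PCA projection with Spark MLlib's RowMatrix; the file path and dimensions are placeholders, and this is not the paper's benchmarking code:

```python
# Sketch of a tall-and-skinny PCA in Spark's data-parallel model.
# The HDFS path and k=10 are placeholder assumptions.
from pyspark import SparkContext
from pyspark.mllib.linalg.distributed import RowMatrix

sc = SparkContext(appName="tall-skinny-pca")
rows = sc.textFile("hdfs:///data/matrix.csv") \
         .map(lambda line: [float(x) for x in line.split(",")])

mat = RowMatrix(rows)                       # distributed tall-and-skinny matrix
pcs = mat.computePrincipalComponents(10)    # small local n-by-10 matrix
proj = mat.multiply(pcs)                    # project rows onto the top-10 PCs
print(proj.numRows(), proj.numCols())
```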
Takeuchi, Masaki; Tsunoda, Hiromichi; Tanaka, Hideji; Shiramizu, Yoshimi
2011-01-01
This paper describes the performance of our automated acidic (CH₃COOH, HCOOH, HCl, HNO₂, SO₂, and HNO₃) gases monitor utilizing a parallel-plate wet denuder (PPWD). The PPWD quantitatively collects gaseous contaminants at a high sample flow rate (∼8 dm³ min⁻¹) compared to the conventional methods used in a clean room. Rapid response to any variability in the sample concentration enables near-real-time monitoring. In the developed monitor, the analyte collected with the PPWD is pumped into one of two preconcentration columns for 15 min, and determined by means of ion chromatography. While one preconcentration column is used for chromatographic separation, the other is used for loading the sample solution. The system allows continuous monitoring of the common acidic gases in an advanced semiconductor manufacturing clean room. 2011 © The Japan Society for Analytical Chemistry
NASA Astrophysics Data System (ADS)
Wang, Yue; Yu, Jingjun; Pei, Xu
2018-06-01
A new forward kinematics algorithm for the mechanism of 3-RPS (R: Revolute; P: Prismatic; S: Spherical) parallel manipulators is proposed in this study. This algorithm is primarily based on the special geometric conditions of the 3-RPS parallel mechanism, and it eliminates the errors produced by parasitic motions to improve and ensure accuracy. Specifically, the errors can be less than 10⁻⁶. In this method, only the group of solutions that is consistent with the actual situation of the platform is obtained rapidly. This algorithm substantially improves calculation efficiency because the selected initial values are reasonable, and all the formulas in the calculation are analytical. This novel forward kinematics algorithm is well suited for real-time and high-precision control of the 3-RPS parallel mechanism.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-20
... of animals in regulatory testing is anticipated to occur in parallel with an increased ability to... phylogenetically lower animal species (e.g., fish, worms), as well as high throughput whole genome analytical... result in test methods for toxicity testing that are more scientifically and economically efficient and...
Scalable Visual Analytics of Massive Textual Datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Manoj Kumar; Bohn, Shawn J.; Cowley, Wendy E.
2007-04-01
This paper describes the first scalable implementation of a text processing engine used in visual analytics tools. These tools aid information analysts in interacting with and understanding large textual information content through visual interfaces. By developing a parallel implementation of the text processing engine, we enabled visual analytics tools to exploit cluster architectures and handle massive datasets. The paper describes key elements of our parallelization approach and demonstrates virtually linear scaling when processing multi-gigabyte data sets such as PubMed. This approach enables interactive analysis of large datasets beyond the capabilities of existing state-of-the-art visual analytics tools.
Integrated multiplexed capillary electrophoresis system
Yeung, Edward S.; Tan, Hongdong
2002-05-14
The present invention provides an integrated multiplexed capillary electrophoresis system for the analysis of sample analytes. The system integrates and automates multiple components, such as chromatographic columns and separation capillaries, and further provides a detector for the detection of analytes eluting from the separation capillaries. The system employs multiplexed freeze/thaw valves to manage fluid flow and sample movement. The system is computer controlled and is capable of processing samples through reaction, purification, denaturation, pre-concentration, injection, separation and detection in parallel fashion. Methods employing the system of the invention are also provided.
PAREMD: A parallel program for the evaluation of momentum space properties of atoms and molecules
NASA Astrophysics Data System (ADS)
Meena, Deep Raj; Gadre, Shridhar R.; Balanarayan, P.
2018-03-01
The present work describes a code for evaluating the electron momentum density (EMD), its moments, and the associated Shannon information entropy for a multi-electron molecular system. The code works specifically for electronic wave functions obtained from traditional electronic structure packages such as GAMESS and GAUSSIAN. For the momentum space orbitals, the general expression for Gaussian basis sets in position space is analytically Fourier transformed to momentum space Gaussian basis functions. The molecular orbital coefficients of the wave function are taken as an input from the output file of the electronic structure calculation. The analytic expressions for the EMD are evaluated over a fine grid, and the accuracy of the code is verified by a normalization check and a numerical kinetic energy evaluation, which is compared with the analytic kinetic energy given by the electronic structure package. Apart from the electron momentum density, the electron density in position space has also been integrated into this package. The program is written in C++ and is executed through a shell script. It is also tuned for multicore machines with shared memory through OpenMP. The program has been tested for a variety of molecules and correlated methods such as CISD, second-order Møller-Plesset (MP2) theory, and density functional methods. For correlated methods, the PAREMD program uses natural spin orbitals as an input. The program has been benchmarked for a variety of Gaussian basis sets for different molecules, showing a linear speedup on a parallel architecture.
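The grid-based evaluation of the Shannon entropy that such a code performs can be mimicked in a few lines. The sketch below assumes a normalized Gaussian as a stand-in for a real momentum density and also shows the normalization check mentioned above:

```python
# Illustration of the kind of quantity tabulated above: the Shannon entropy
# S = -int rho(p) ln rho(p) d^3p on a uniform grid, with a normalized
# Gaussian standing in for a real electron momentum density.
import numpy as np

n, L = 64, 8.0
p = np.linspace(-L, L, n)
dV = (p[1] - p[0]) ** 3
px, py, pz = np.meshgrid(p, p, p, indexing="ij")

rho = np.exp(-(px**2 + py**2 + pz**2)) / np.pi**1.5   # normalized Gaussian
print("norm   :", rho.sum() * dV)                     # ~1 (normalization check)
print("entropy:", -(rho * np.log(rho)).sum() * dV)    # analytic: 1.5*(1+ln(pi))
```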
NASA Astrophysics Data System (ADS)
Urban, Matthias; Möller, Robert; Fritzsche, Wolfgang
2003-02-01
DNA analytics is a growing field based on the increasing knowledge about the genome with special implications for the understanding of molecular bases for diseases. Driven by the need for cost-effective and high-throughput methods for molecular detection, DNA chips are an interesting alternative to more traditional analytical methods in this field. The standard readout principle for DNA chips is fluorescence based. Fluorescence is highly sensitive and broadly established, but shows limitations regarding quantification (due to signal and/or dye instability) and the need for sophisticated (and therefore high-cost) equipment. This article introduces a readout system for an alternative detection scheme based on electrical detection of nanoparticle-labeled DNA. If labeled DNA is present in the analyte solution, it will bind on complementary capture DNA immobilized in a microelectrode gap. A subsequent metal enhancement step leads to a deposition of conductive material on the nanoparticles, and finally an electrical contact between the electrodes. This detection scheme offers the potential for a simple (low-cost as well as robust) and highly miniaturizable method, which could be well-suited for point-of-care applications in the context of lab-on-a-chip technologies. The demonstrated apparatus allows a parallel readout of an entire array of microstructured measurement sites. The readout is combined with data-processing by an embedded personal computer, resulting in an autonomous instrument that measures and presents the results. The design and realization of such a system is described, and first measurements are presented.
Gust Acoustics Computation with a Space-Time CE/SE Parallel 3D Solver
NASA Technical Reports Server (NTRS)
Wang, X. Y.; Himansu, A.; Chang, S. C.; Jorgenson, P. C. E.; Reddy, D. R. (Technical Monitor)
2002-01-01
The benchmark Problem 2 in Category 3 of the Third Computational Aero-Acoustics (CAA) Workshop is solved using the space-time conservation element and solution element (CE/SE) method. This problem concerns the unsteady response of an isolated finite-span swept flat-plate airfoil bounded by two parallel walls to an incident gust. The acoustic field generated by the interaction of the gust with the flat-plate airfoil is computed by solving the 3D (three-dimensional) Euler equations in the time domain using a parallel version of a 3D CE/SE solver. The effect of the gust orientation on the far-field directivity is studied. Numerical solutions are presented and compared with analytical solutions, showing reasonable agreement.
NASA Astrophysics Data System (ADS)
Stewart, L. K.
1997-11-01
An analytical method for determining the amounts of cleavage-normal dissolution and cleavage-parallel shear movement that occurred between adjacent microlithons during crenulation cleavage seam formation within a deformed slate is developed for the progressive bulk inhomogeneous shortening (PBIS) mechanism of crenulation cleavage formation. The method utilises structural information obtained from samples where a diverging bed and vein are offset by a crenulation cleavage seam. Several samples analysed using this method produced ratios of relative cleavage-parallel movement of microlithons to material thickness removed by dissolution, typically in the range 1.1-3.4:1. The mean amount of solution shortening attributed to the formation of the cleavage seams examined is 24%. The results indicate that a relationship may exist between the width of microlithons and the amount of cleavage-parallel inter-microlithon movement. The method presented here has the potential to help determine whether crenulation cleavage seams formed by the progressive bulk inhomogeneous shortening mechanism or by one involving cleavage-normal pressure solution alone.
Östlund, Ulrika; Kidd, Lisa; Wengström, Yvonne; Rowa-Dewar, Neneh
2011-03-01
It has been argued that mixed methods research can be useful in nursing and health science because of the complexity of the phenomena studied. However, the integration of qualitative and quantitative approaches continues to be a subject of much debate, and there is a need for a rigorous framework for designing and interpreting mixed methods research. This paper explores the analytical approaches (i.e. parallel, concurrent or sequential) used in mixed methods studies within healthcare and exemplifies the use of triangulation as a methodological metaphor for drawing inferences from qualitative and quantitative findings originating from such analyses. This review of the literature used systematic principles in searching CINAHL, Medline and PsycINFO for healthcare research studies which employed a mixed methods approach and were published in the English language between January 1999 and September 2009. In total, 168 studies were included in the results. Most studies originated in the United States of America (USA), the United Kingdom (UK) and Canada. The analytic approach most widely used was parallel data analysis. A number of studies used sequential data analysis; far fewer studies employed concurrent data analysis. Very few of these studies clearly articulated the purpose for using a mixed methods design. The use of the methodological metaphor of triangulation on convergent, complementary, and divergent results from mixed methods studies is exemplified, and an example of developing theory from such data is provided. A trend for conducting parallel data analysis on quantitative and qualitative data in mixed methods healthcare research has been identified in the studies included in this review. Using triangulation as a methodological metaphor can facilitate the integration of qualitative and quantitative findings and help researchers to clarify their theoretical propositions and the basis of their results. This can offer a better understanding of the links between theory and empirical findings, challenge theoretical assumptions and develop new theory. Copyright © 2010 Elsevier Ltd. All rights reserved.
Symplectic molecular dynamics simulations on specially designed parallel computers.
Borstnik, Urban; Janezic, Dusanka
2005-01-01
We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches requires less time per step and fewer steps overall, enabling fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.
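The splitting idea (propagate the stiff harmonic part analytically, as an exact rotation in phase space, and only the soft residual force numerically) can be shown on a single oscillator. The potential split below is an illustration of my own, not the SISM force field:

```python
# Split integrator sketch: exact analytic flow for the fast harmonic part,
# numeric half-kicks for a soft residual force, allowing longer time steps.
import numpy as np

def split_step(x, v, dt, omega=50.0, soft=lambda x: -0.1 * x**3):
    v += 0.5 * dt * soft(x)                    # half kick: slow force, numeric
    c, s = np.cos(omega * dt), np.sin(omega * dt)
    x, v = x * c + (v / omega) * s, v * c - omega * x * s  # exact harmonic flow
    v += 0.5 * dt * soft(x)                    # half kick
    return x, v

x, v = 1.0, 0.0
for _ in range(1000):
    x, v = split_step(x, v, dt=0.01)
print(x, v)
```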
A Bridge between Two Important Problems in Optics and Electrostatics
ERIC Educational Resources Information Center
Capelli, R.; Pozzi, G.
2008-01-01
It is shown how the same physically appealing method can be applied to find analytic solutions for two difficult and apparently unrelated problems in optics and electrostatics. They are: (i) the diffraction of a plane wave at a perfectly conducting thin half-plane and (ii) the electrostatic field associated with a parallel array of stripes held at…
MapReduce Based Parallel Bayesian Network for Manufacturing Quality Control
NASA Astrophysics Data System (ADS)
Zheng, Mao-Kuan; Ming, Xin-Guo; Zhang, Xian-Yu; Li, Guo-Ming
2017-09-01
The increasing complexity of industrial products and manufacturing processes has challenged conventional statistics-based quality management approaches in the circumstances of dynamic production. A Bayesian network and big data analytics integrated approach for manufacturing process quality analysis and control is proposed. Based on the Hadoop distributed architecture and the MapReduce parallel computing model, quality-related data of large volume and variety generated during the manufacturing process can be handled. Artificial intelligence algorithms, including Bayesian network learning, classification and reasoning, are embedded into the Reduce process. Relying on the ability of the Bayesian network to deal with dynamic and uncertain problems and on the parallel computing power of MapReduce, Bayesian networks of factors impacting quality are built based on prior probability distributions and modified with posterior probability distributions. A case study on hull segment manufacturing precision management for ship and offshore platform building shows that computing speed accelerates almost in direct proportion to the number of computing nodes. It is also proved that the proposed model is feasible for locating and reasoning about root causes, forecasting manufacturing outcomes, and intelligent decision-making for precision problem solving. The integration of big data analytics and the BN method offers a whole new perspective on manufacturing quality control.
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids (global and local) to provide adaptive resolution and fast solution of PDEs. Like all such methods, it offers parallelism by using possibly many disconnected patches per level, but is hindered by the need to handle these levels sequentially. The finest levels must therefore wait for processing to be essentially completed on all the coarser ones. A recently developed asynchronous version of FAC, called AFAC, completely eliminates this bottleneck to parallelism. This paper describes timing results for AFAC, coupled with a simple load balancing scheme, applied to the solution of elliptic PDEs on an Intel iPSC hypercube. These tests include performance of certain processes necessary in adaptive methods, including moving grids and changing refinement. A companion paper reports on numerical and analytical results for estimating convergence factors of AFAC applied to very large scale examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vay, Jean-Luc, E-mail: jlvay@lbl.gov; Haber, Irving; Godfrey, Brendan B.
Pseudo-spectral electromagnetic solvers (i.e. representing the fields in Fourier space) have extraordinary precision. In particular, Haber et al. presented in 1973 a pseudo-spectral solver that integrates analytically the solution over a finite time step, under the usual assumption that the source is constant over that time step. Yet, pseudo-spectral solvers have not been widely used, due in part to the difficulty of efficient parallelization owing to global communications associated with global FFTs on the entire computational domains. A method for the parallelization of electromagnetic pseudo-spectral solvers is proposed and tested on single electromagnetic pulses, and on Particle-In-Cell simulations of the wakefield formation in a laser plasma accelerator. The method takes advantage of the properties of the Discrete Fourier Transform, the linearity of Maxwell's equations and the finite speed of light for limiting the communications of data within guard regions between neighboring computational domains. Although this requires a small approximation, test results show that no significant error is made on the test cases that have been presented. The proposed method opens the way to solvers combining the favorable parallel scaling of standard finite-difference methods with the accuracy advantages of pseudo-spectral methods.
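The analytic per-mode integration that makes pseudo-spectral solvers so precise is easy to demonstrate on a toy problem. The sketch below advances the 1D advection equation exactly in Fourier space; the Maxwell update and the guard-region exchange of the paper are more involved and are omitted here:

```python
# Toy pseudo-spectral update: each Fourier mode of u_t + c u_x = 0 advances
# analytically by a phase factor, so the time step carries no dispersion error.
import numpy as np

n, Lx, c, dt = 256, 2 * np.pi, 1.0, 0.05
x = np.linspace(0, Lx, n, endpoint=False)
k = np.fft.fftfreq(n, d=Lx / n) * 2 * np.pi     # angular wavenumbers

u = np.exp(-20 * (x - np.pi) ** 2)              # initial pulse
u_hat = np.fft.fft(u)
for _ in range(100):
    u_hat *= np.exp(-1j * c * k * dt)           # exact per-mode phase advance
u = np.real(np.fft.ifft(u_hat))                 # pulse translated by c*t
```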
A 2D MTF approach to evaluate and guide dynamic imaging developments.
Chao, Tzu-Cheng; Chung, Hsiao-Wen; Hoge, W Scott; Madore, Bruno
2010-02-01
As the number and complexity of partially sampled dynamic imaging methods continue to increase, reliable strategies to evaluate performance may prove most useful. In the present work, an analytical framework to evaluate given reconstruction methods is presented. A perturbation algorithm allows the proposed evaluation scheme to perform robustly without requiring knowledge about the inner workings of the method being evaluated. A main output of the evaluation process consists of a two-dimensional modulation transfer function, an easy-to-interpret visual rendering of a method's ability to capture all combinations of spatial and temporal frequencies. Approaches to evaluate noise properties and artifact content at all spatial and temporal frequencies are also proposed. One fully sampled phantom and three fully sampled cardiac cine datasets were subsampled (R = 4 and 8) and reconstructed with the different methods tested here. A hybrid method, which combines the main advantageous features observed in our assessments, was proposed and tested in a cardiac cine application, with acceleration factors of 3.5 and 6.3 (skip factors of 4 and 8, respectively). This approach combines features from methods such as k-t sensitivity encoding, unaliasing by Fourier encoding the overlaps in the temporal dimension-sensitivity encoding, generalized autocalibrating partially parallel acquisition, sensitivity profiles from an array of coils for encoding and reconstruction in parallel, self, hybrid referencing with unaliasing by Fourier encoding the overlaps in the temporal dimension and generalized autocalibrating partially parallel acquisition, and generalized autocalibrating partially parallel acquisition-enhanced sensitivity maps for sensitivity encoding reconstructions.
Maximum flow-based resilience analysis: From component to system
Jin, Chong; Li, Ruiying; Kang, Rui
2017-01-01
Resilience, the ability to withstand disruptions and recover quickly, must be considered during system design because any disruption of the system may cause considerable loss, both economic and societal. This work develops analytic maximum flow-based resilience models for series and parallel systems using Zobel's resilience measure. The two analytic models can be used to quantitatively evaluate and compare the resilience of systems with the corresponding performance structures. For systems with identical components, the resilience of the parallel system increases with an increasing number of components, while the resilience of the series system remains constant. A Monte Carlo-based simulation method is also provided to verify the correctness of our analytic resilience models and to analyze the resilience of networked systems based on that of their components. A road network example is used to illustrate the analysis process, and the resilience comparison among networks with different topologies but the same components indicates that a system with redundant performance is usually more resilient than one without redundant performance. However, not all redundant capacities of components improve system resilience; the effectiveness of capacity redundancy depends on where the redundant capacity is located. PMID:28545135
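The max-flow view of series versus parallel performance structures can be reproduced with a general-purpose graph library. The sketch below uses illustrative capacities and does not implement Zobel's resilience measure itself:

```python
# Maximum flow of two-component series vs. parallel systems with networkx.
# Capacities are illustrative; intermediate nodes t1/t2 model parallel paths.
import networkx as nx

def system_flow(edges):
    G = nx.DiGraph()
    G.add_edges_from((u, v, {"capacity": c}) for u, v, c in edges)
    value, _ = nx.maximum_flow(G, "s", "t")
    return value

series   = [("s", "a", 5), ("a", "t", 3)]       # components in series
parallel = [("s", "t1", 5), ("t1", "t", 5),
            ("s", "t2", 3), ("t2", "t", 3)]     # components in parallel
print(system_flow(series))    # 3: limited by the weakest component
print(system_flow(parallel))  # 8: component capacities add
```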
Hierarchical analytical and simulation modelling of human-machine systems with interference
NASA Astrophysics Data System (ADS)
Braginsky, M. Ya; Tarakanov, D. V.; Tsapko, S. G.; Tsapko, I. V.; Baglaeva, E. A.
2017-01-01
The article considers the principles of building an analytical and simulation model of the human operator and of industrial control system hardware and software. E-networks, an extension of Petri nets, are used as the mathematical apparatus. This approach allows simulating complex parallel distributed processes in human-machine systems. A structural and hierarchical approach is used to build the mathematical model of the human operator. The upper level of the model is a logical-dynamic model of decision making based on E-networks. The lower level reflects the psychophysiological characteristics of the human operator.
NASA Astrophysics Data System (ADS)
Sorokin, V. A.; Volkov, Yu V.; Sherstneva, A. I.; Botygin, I. A.
2016-11-01
This paper presents a method of generating climate regions based on analytic signal theory. When applied to atmospheric surface layer temperature data sets, the method allows climatic structures to be formed from the corresponding temperature changes, supporting conclusions about the uniformity of climate in an area and tracing climate change over time through shifts in type groups. The algorithm is based on the fact that the frequency spectrum of the thermal oscillation process is narrow-banded and has only one mode for most weather stations. This permits the use of analytic signal theory and causality conditions, and the introduction of an oscillation phase. The annual component of the phase, being a linear function, was removed by the least squares method. The remaining phase fluctuations can then be studied consistently for coordinated behavior and timing, using the Pearson correlation coefficient to evaluate dependence. This study includes program experiments to evaluate the calculation efficiency of the phase grouping task. The paper also reviews some single-threaded and multi-threaded computing models. It is shown that the phase grouping algorithm for meteorological data can be parallelized and that a multi-threaded implementation leads to a 25-30% increase in performance.
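The phase-extraction step can be sketched with standard signal-processing tools: take the analytic-signal phase, remove the linear annual component by least squares, and correlate the residuals. The temperature series below are synthetic stand-ins for station data:

```python
# Analytic-signal phase grouping sketch: Hilbert-transform phase, linear
# detrend of the annual component, Pearson correlation of the residuals.
import numpy as np
from scipy.signal import hilbert

t = np.arange(3650)                               # ten years, daily samples
annual = 2 * np.pi * t / 365.25
s1 = np.cos(annual) + 0.1 * np.random.randn(t.size)
s2 = np.cos(annual + 0.2) + 0.1 * np.random.randn(t.size)

def phase_residual(s):
    phase = np.unwrap(np.angle(hilbert(s)))       # analytic-signal phase
    trend = np.polyval(np.polyfit(t, phase, 1), t)
    return phase - trend                          # fluctuations about the trend

r = np.corrcoef(phase_residual(s1), phase_residual(s2))[0, 1]
print(f"Pearson correlation of phase fluctuations: {r:.2f}")
```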
An equivalent viscoelastic model for rock mass with parallel joints
NASA Astrophysics Data System (ADS)
Li, Jianchun; Ma, Guowei; Zhao, Jian
2010-03-01
An equivalent viscoelastic medium model is proposed for rock mass with parallel joints. A concept of "virtual wave source (VWS)" is proposed to take into account the wave reflections between the joints. The equivalent model can be effectively applied to analyze longitudinal wave propagation through discontinuous media with parallel joints. Parameters in the equivalent viscoelastic model are derived analytically based on longitudinal wave propagation across a single rock joint. The proposed model is then verified by applying identical incident waves to the discontinuous and equivalent viscoelastic media at one end to compare the output waves at the other end. When the wavelength of the incident wave is sufficiently long compared to the joint spacing, the effect of the VWS on wave propagation in rock mass is prominent. The results from the equivalent viscoelastic medium model are very similar to those determined from the displacement discontinuity method. Frequency dependence and joint spacing effect on the equivalent viscoelastic model and the VWS method are discussed.
NASA Technical Reports Server (NTRS)
Lee, Jeh Won
1990-01-01
The objective is the theoretical analysis and experimental verification of the dynamics and control of a two-link flexible manipulator with a flexible parallel link mechanism. Nonlinear equations of motion of the lightweight manipulator are derived by the Lagrangian method in symbolic form to better understand the structure of the dynamic model. The resulting equations of motion have a structure which is useful for reducing the number of terms calculated, checking correctness, or extending the model to higher order. A manipulator with a flexible parallel link mechanism is a constrained dynamic system whose equations are sensitive to numerical integration error. This constrained system is solved using singular value decomposition of the constraint Jacobian matrix. Elastic motion is expressed by the assumed mode method. Mode shape functions of each link are chosen using load-interfaced component mode synthesis. The discrepancies between the analytical model and the experiment are explained using a simplified and a detailed finite element model.
Parallel and Scalable Clustering and Classification for Big Data in Geosciences
NASA Astrophysics Data System (ADS)
Riedel, M.
2015-12-01
Machine learning, data mining, and statistical computing are common techniques used to perform analysis in the earth sciences. This contribution will focus on two concrete and widely used data analytics methods suitable for analysing 'big data' in the context of geoscience use cases: clustering and classification. From the broad class of available clustering methods we focus on the density-based spatial clustering of applications with noise (DBSCAN) algorithm, which enables the identification of outliers or interesting anomalies. A new open-source parallel and scalable DBSCAN implementation will be discussed in the light of a scientific use case that detects water mixing events in the Koljoefjords. The second technique we cover is classification, with a focus on the support vector machine (SVM) algorithm, one of the best out-of-the-box classification algorithms. A parallel and scalable SVM implementation will be discussed in the light of a scientific use case in the field of remote sensing with 52 different classes of land cover types.
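As an orientation for the clustering half, the sketch below runs scikit-learn's (serial) DBSCAN on synthetic data and flags noise points as anomalies; the parallel, scalable implementation discussed in the contribution is a separate code, and the parameters here are illustrative:

```python
# DBSCAN as an anomaly detector: points that belong to no dense region
# receive the label -1 and can be treated as outliers.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
dense = rng.normal(0.0, 0.3, size=(200, 2))     # a well-sampled regime
sparse = rng.uniform(-4, 4, size=(10, 2))       # scattered anomalies

X = np.vstack([dense, sparse])
labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(X)
print("outliers flagged:", int(np.sum(labels == -1)))
```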
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
NASA Technical Reports Server (NTRS)
Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen
2016-01-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
NASA Astrophysics Data System (ADS)
Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.
2016-12-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C
2009-10-05
In the presence of analyte-background interactions and a significant background signal, both second-order multivariate calibration and standard addition are required for successful analyte quantitation achieving the second-order advantage. This report discusses a modified second-order standard addition method, in which the test data matrix is subtracted from the standard addition matrices, and quantitation proceeds via the classical external calibration procedure. It is shown that this novel data processing method allows one to apply not only parallel factor analysis (PARAFAC) and multivariate curve resolution-alternating least-squares (MCR-ALS), but also the recently introduced and more flexible partial least-squares (PLS) models coupled to residual bilinearization (RBL). In particular, the multidimensional variant N-PLS/RBL is shown to produce the best analytical results. The comparison is carried out with the aid of a set of simulated data, as well as two experimental data sets: one aimed at the determination of salicylate in human serum in the presence of naproxen as an additional interferent, and the second one devoted to the analysis of danofloxacin in human serum in the presence of salicylate.
Xia, Yidong; Luo, Hong; Frisbey, Megan; ...
2014-07-01
A set of implicit methods are proposed for a third-order hierarchical WENO reconstructed discontinuous Galerkin method for compressible flows on 3D hybrid grids. An attractive feature of these methods is the use of a Jacobian matrix based on the P1 element approximation, resulting in a large reduction of the memory requirement compared with DG(P2). Three approaches -- analytical derivation, divided differencing, and automatic differentiation (AD) -- are presented for constructing the Jacobian matrix, of which the AD approach shows the best robustness. A variety of compressible flow problems are computed to demonstrate the fast convergence property of the implemented flow solver. Furthermore, an SPMD (single program, multiple data) programming paradigm based on MPI is proposed to achieve parallelism. The numerical results on complex geometries indicate that this low-storage implicit method can provide a viable and attractive DG solution for complicated flows of practical importance.
NASA Astrophysics Data System (ADS)
Sun, B.; Yang, P.; Kattawar, G. W.; Zhang, X.
2017-12-01
The single-scattering properties of ice clouds can be accurately simulated using the invariant-imbedding T-matrix method (IITM) and the physical-geometric optics method (PGOM). The IITM has been parallelized using the Message Passing Interface (MPI) to remove the memory limitation, so that it can be used to obtain the single-scattering properties of ice clouds for sizes in the geometric optics regime. Furthermore, the results for random orientations can be obtained analytically once the T-matrix is given. The PGOM is also parallelized in conjunction with random orientations. The single-scattering properties of a hexagonal prism with height 400 (in units of λ/2π, where λ is the incident wavelength) and an aspect ratio of 1 (defined as the height divided by twice the bottom side length) are computed with the parallelized IITM and compared to the counterparts from the parallelized PGOM. The two results are in close agreement. Furthermore, the integrated single-scattering properties, including the asymmetry factor, the extinction cross-section, and the scattering cross-section, are given over the complete size range. The present results show a smooth transition from the exact IITM solution to the approximate PGOM result. Because the IITM calculations now reach the geometric regime, the IITM and the PGOM can be efficiently employed to accurately compute the single-scattering properties of ice clouds over a wide spectral range.
Final Report: PAGE: Policy Analytics Generation Engine
2016-08-12
develop a parallel framework for it. We also developed policies and methods by which a group of defensive resources (e.g. checkpoints) could be...Sarit Kraus. Learning to Reveal Information in Repeated Human-Computer Negotiation, Human-Agent Interaction Design and Models Workshop 2012. 04-JUN...Joseph Keshet, Sarit Kraus. Predicting Human Strategic Decisions Using Facial Expressions, International Joint Conference on Artificial
A Quick and Parallel Analytical Method Based on Quantum Dots Labeling for ToRCH-Related Antibodies
NASA Astrophysics Data System (ADS)
Yang, Hao; Guo, Qing; He, Rong; Li, Ding; Zhang, Xueqing; Bao, Chenchen; Hu, Hengyao; Cui, Daxiang
2009-12-01
Quantum dots are a special kind of nanomaterial composed of periodic groups of II-VI, III-V, or IV-VI materials. Their high quantum yield, broad absorption with narrow photoluminescence spectra, and high resistance to photobleaching make them a promising labeling substance in biological analysis. Here, we report a quick and parallel analytical method based on quantum dots for ToRCH-related antibodies against Toxoplasma gondii, Rubella virus, Cytomegalovirus, and Herpes simplex virus types 1 (HSV1) and 2 (HSV2). We first fabricated microarrays with the five ToRCH-related antigens, used CdTe quantum dots to label the secondary antibody, and then analyzed 100 specimens of randomly selected clinical sera from obstetric outpatients. The currently prevalent enzyme-linked immunosorbent assay (ELISA) kits were considered the “gold standard” for comparison. The results show that the quantum dot labeling-based ToRCH microarrays have sensitivity and specificity comparable to ELISA. Besides, the microarrays hold distinct advantages over the ELISA test format in detection time, cost, operation, and signal stability. Validated by the clinical assay, our quantum dot-based ToRCH microarrays have great potential in the detection of ToRCH-related pathogens.
NASA Technical Reports Server (NTRS)
Sun, Xian-He; Moitra, Stuti
1996-01-01
Various tridiagonal solvers have been proposed in recent years for different parallel platforms. In this paper, the performance of three tridiagonal solvers, namely, the parallel partition LU algorithm, the parallel diagonal dominant algorithm, and the reduced diagonal dominant algorithm, is studied. These algorithms are designed for distributed-memory machines and are tested on Intel Paragon and IBM SP2 machines. Measured results are reported in terms of execution time and speedup. Analytical studies are conducted for different communication topologies and for different tridiagonal systems. The measured results match the analytical results closely. In addition to addressing implementation issues, performance considerations such as problem size and speedup models are also discussed.
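For orientation, the serial kernel that such parallel solvers partition and generalize is the Thomas algorithm; a minimal version with an illustrative system is sketched below:

```python
# Thomas algorithm for a tridiagonal system: a, b, c are the sub-, main-
# and super-diagonals; the example system tridiag(-1, 2, -1) x = 1 is
# illustrative (parallel variants partition this recurrence across ranks).
import numpy as np

def thomas(a, b, c, d):
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = dp.copy()
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] -= cp[i] * x[i + 1]
    return x

n = 8
a = np.full(n, -1.0); a[0] = 0.0                # sub-diagonal (a[0] unused)
c = np.full(n, -1.0); c[-1] = 0.0               # super-diagonal (c[-1] unused)
b = np.full(n, 2.0)
print(thomas(a, b, c, np.ones(n)))              # matches np.linalg.solve
```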
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muljadi, Eduard; Hasan, Iftekhar; Husain, Tausif
In this paper, a nonlinear analytical model based on the Magnetic Equivalent Circuit (MEC) method is developed for a double-sided E-Core Transverse Flux Machine (TFM). The proposed TFM has a cylindrical rotor, sandwiched between E-core stators on both sides. Ferrite magnets are used in the rotor with a flux-concentrating design to attain high airgap flux density, better magnet utilization, and higher torque density. The MEC model was developed using a series-parallel combination of flux tubes to estimate the reluctance network for different parts of the machine, including air gaps, permanent magnets, and the stator and rotor ferromagnetic materials, in a two-dimensional (2-D) frame. An iterative Gauss-Seidel method is integrated with the MEC model to capture the effects of magnetic saturation. A single-phase, 1 kW, 400 rpm E-Core TFM is analytically modeled, and its results for flux linkage, no-load EMF, and generated torque are verified with Finite Element Analysis (FEA). The analytical model significantly reduces the computation time while estimating results with less than 10 percent error.
Current Status of Mycotoxin Analysis: A Critical Review.
Shephard, Gordon S
2016-07-01
It is over 50 years since the discovery of aflatoxins focused the attention of food safety specialists on fungal toxins in the feed and food supply. Since then, analysis of this important group of natural contaminants has advanced in parallel with general developments in analytical science, and current MS methods are capable of simultaneously analyzing hundreds of compounds, including mycotoxins, pesticides, and drugs. This profusion of data may advance our understanding of human exposure, yet constitutes an interpretive challenge to toxicologists and food safety regulators. Despite these advances in analytical science, the basic problem of the extreme heterogeneity of mycotoxin contamination, although now well understood, cannot be circumvented. The real health challenges posed by mycotoxin exposure occur in the developing world, especially among small-scale and subsistence farmers. Addressing these problems requires innovative approaches in which analytical science must also play a role in providing suitable out-of-laboratory analytical techniques.
Quasi-Newton parallel geometry optimization methods
NASA Astrophysics Data System (ADS)
Burger, Steven K.; Ayers, Paul W.
2010-07-01
Algorithms for parallel unconstrained minimization of molecular systems are examined. The overall framework of minimization is the same except for the choice of directions for updating the quasi-Newton Hessian. Ideally these directions are chosen so that the updated Hessian gives steps that are the same as those of the Newton method. Three approaches to determine the update directions are presented: the straightforward approach of simply cycling through the Cartesian unit vectors (finite difference), a concurrent set of minimizations, and the Lanczos method. We show the importance of using preconditioning and a multiple secant update in these approaches. For the Lanczos algorithm, an initial set of directions is required to start the method, and a number of possibilities are explored. To test the methods we used the standard 50-dimensional analytic Rosenbrock function. Results are also reported for the histidine dipeptide, the isoleucine tripeptide, and cyclic adenosine monophosphate. All of these systems show a significant speed-up as the number of processors increases, up to about eight processors.
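For orientation, the 50-dimensional Rosenbrock test mentioned above can be run through a standard serial quasi-Newton (BFGS) minimizer; the parallel Hessian-update schemes of the paper are not part of SciPy:

```python
# Serial BFGS on the 50-dimensional Rosenbrock function, the same analytic
# test problem used above (starting point chosen as a common hard case).
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.full(50, -1.0)
res = minimize(rosen, x0, jac=rosen_der, method="BFGS",
               options={"maxiter": 10000, "gtol": 1e-8})
print(res.nit, "iterations; f* =", res.fun)   # the minimum is 0 at x = 1
```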
A Green's function method for local and non-local parallel transport in general magnetic fields
NASA Astrophysics Data System (ADS)
Del-Castillo-Negrete, Diego; Chacón, Luis
2009-11-01
The study of transport in magnetized plasmas is a problem of fundamental interest in controlled fusion and astrophysics research. Three issues make this problem particularly challenging: (i) the extreme anisotropy between the parallel (i.e., along the magnetic field) conductivity χ∥ and the perpendicular conductivity χ⊥ (χ∥/χ⊥ may exceed 10^10 in fusion plasmas); (ii) magnetic field line chaos, which in general complicates (and may preclude) the construction of magnetic field line coordinates; and (iii) nonlocal parallel transport in the limit of small collisionality. Motivated by these issues, we present a Lagrangian Green's function method to solve the local and non-local parallel transport equation applicable to integrable and chaotic magnetic fields. The numerical implementation employs a volume-preserving field-line integrator [Finn and Chacón, Phys. Plasmas, 12 (2005)] for an accurate representation of the magnetic field lines regardless of the level of stochasticity. The general formalism and its algorithmic properties are discussed along with illustrative analytical and numerical examples. Problems of particular interest include: the departures from the Rechester-Rosenbluth diffusive scaling in the weak magnetic chaos regime, the interplay between non-locality and chaos, and the robustness of transport barriers in reverse shear configurations.
NASA Technical Reports Server (NTRS)
Nayfeh, A. H.; Kaiser, J. E.; Marshall, R. L.; Hurst, L. J.
1978-01-01
The performance of sound suppression techniques in ducts that produce refraction effects due to axial velocity gradients was evaluated. A computer code based on the method of multiple scales was used to calculate the influence of axial variations due to slow changes in the cross-sectional area, as well as transverse gradients due to the wall boundary layers. An attempt was made to verify the analytical model through direct comparison of experimental and computational results and the analytical determination of the influence of axial gradients on optimum liner properties. However, the analytical studies were unable to examine the influence of non-parallel ducts on the optimum liner conditions. For liner properties not close to optimum, the analytical predictions and the experimental measurements were compared. The circumferential variations of pressure amplitudes and phases at several axial positions were examined in straight and variable-area ducts, in hard-wall and lined sections, with and without a mean flow. Reasonable agreement between the theoretical and experimental results was obtained.
Using parallel banded linear system solvers in generalized eigenvalue problems
NASA Technical Reports Server (NTRS)
Zhang, Hong; Moss, William F.
1993-01-01
Subspace iteration is a reliable and cost-effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large-scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor, and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
BAILEY, DAVID H.; BORWEIN, JONATHAN M.
A recent paper by the present authors, together with mathematical physicists David Broadhurst and M. Larry Glasser, explored Bessel moment integrals, namely definite integrals of the general form ∫₀^∞ t^m f^n(t) dt, where the function f(t) is one of the classical Bessel functions. In that paper, numerous previously unknown analytic evaluations were obtained, using a combination of analytic methods together with some fairly high-powered numerical computations, often performed on highly parallel computers. In several instances, while we were able to numerically discover what appears to be a solid analytic identity, based on extremely high-precision numerical computations, we were unable to find a rigorous proof. Thus we present here a brief list of some of these unproven but numerically confirmed identities.
EvoGraph: On-The-Fly Efficient Mining of Evolving Graphs on GPU
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Dipanjan; Song, Shuaiwen
With the prevalence of the World Wide Web and social networks, there has been a growing interest in high-performance analytics for constantly evolving dynamic graphs. Modern GPUs provide a massive amount of parallelism for efficient graph processing, but challenges remain due to their lack of support for the near real-time streaming nature of dynamic graphs. Specifically, due to the current high volume and velocity of graph data combined with the complexity of user queries, traditional processing methods, which first store the updates and then repeatedly run static graph analytics on a sequence of versions or snapshots, are deemed undesirable and computationally infeasible on the GPU. We present EvoGraph, a highly efficient and scalable GPU-based dynamic graph analytics framework.
Nikcevic, Irena; Piruska, Aigars; Wehmeyer, Kenneth R; Seliskar, Carl J; Limbach, Patrick A; Heineman, William R
2010-08-01
Parallel separations using CE on a multilane microchip with multiplexed LIF detection is demonstrated. The detection system was developed to simultaneously record data on all channels using an expanded laser beam for excitation, a camera lens to capture emission, and a CCD camera for detection. The detection system enables monitoring of each channel continuously and distinguishing individual lanes without significant crosstalk between adjacent lanes. Multiple analytes can be determined in parallel lanes within a single microchip in a single run, leading to increased sample throughput. The pKa determination of small molecule analytes is demonstrated with the multilane microchip.
Nikcevic, Irena; Piruska, Aigars; Wehmeyer, Kenneth R.; Seliskar, Carl J.; Limbach, Patrick A.; Heineman, William R.
2010-01-01
Parallel separations using capillary electrophoresis on a multilane microchip with multiplexed laser induced fluorescence detection is demonstrated. The detection system was developed to simultaneously record data on all channels using an expanded laser beam for excitation, a camera lens to capture emission, and a CCD camera for detection. The detection system enables monitoring of each channel continuously and distinguishing individual lanes without significant crosstalk between adjacent lanes. Multiple analytes can be analyzed on parallel lanes within a single microchip in a single run, leading to increased sample throughput. The pKa determination of small molecule analytes is demonstrated with the multilane microchip. PMID:20737446
Two-dimensional numerical simulation of a Stirling engine heat exchanger
NASA Technical Reports Server (NTRS)
Ibrahim, Mounir B.; Tew, Roy C.; Dudenhoefer, James E.
1989-01-01
The first phase of an effort to develop multidimensional models of Stirling engine components is described; the ultimate goal is to model an entire engine working space. More specifically, parallel-plate and tubular heat exchanger models, with emphasis on the central part of the channel (i.e., ignoring hydrodynamic and thermal end effects), are described. The model assumes laminar, incompressible flow with constant thermophysical properties. In addition, a constant axial temperature gradient is imposed. The governing equations describing the model were solved using a Crank-Nicolson finite-difference scheme. Model predictions were compared with analytical solutions for oscillating/reversing flow and heat transfer in order to check numerical accuracy. Excellent agreement with available analytical solutions was obtained for both flow in circular tubes and flow between parallel plates. The computed heat transfer results are also in good agreement with the analytical heat transfer results for parallel plates.
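The kind of computation being validated, oscillating laminar flow between parallel plates advanced with Crank-Nicolson, can be sketched in a few lines; the parameters below are illustrative, not the engine's operating conditions:

```python
# Crank-Nicolson for oscillating channel flow u_t = nu*u_yy + P0*cos(w*t)
# between no-slip parallel plates (illustrative parameters).
import numpy as np

ny, nu, P0, w, dt = 41, 1e-3, 1.0, 2 * np.pi, 1e-3
y = np.linspace(0.0, 1.0, ny)
dy = y[1] - y[0]
r = nu * dt / (2 * dy**2)

# interior-point update: (I - r*D2) u^{n+1} = (I + r*D2) u^n + dt*P(t_mid)
D2 = (np.diag(np.full(ny - 3, 1.0), -1) - 2 * np.eye(ny - 2)
      + np.diag(np.full(ny - 3, 1.0), 1))
A = np.eye(ny - 2) - r * D2
B = np.eye(ny - 2) + r * D2

u = np.zeros(ny)
for n in range(5000):
    tmid = (n + 0.5) * dt                        # forcing at the half step
    u[1:-1] = np.linalg.solve(A, B @ u[1:-1] + dt * P0 * np.cos(w * tmid))
    # u[0] = u[-1] = 0 (no-slip walls) hold by construction
print(u.max())
```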
Sublattice parallel replica dynamics.
Martínez, Enrique; Uberuaga, Blas P; Voter, Arthur F
2014-06-01
Exascale computing presents a challenge for the scientific community as new algorithms must be developed to take full advantage of the new computing paradigm. Atomistic simulation methods that offer full fidelity to the underlying potential, i.e., molecular dynamics (MD) and parallel replica dynamics, fail to exploit the full machine speedup, leaving a region in time and sample size space that is unattainable with current algorithms. In this paper, we present an extension of the parallel replica dynamics algorithm [A. F. Voter, Phys. Rev. B 57, R13985 (1998)] by combining it with the synchronous sublattice approach of Shim and Amar [Phys. Rev. B 71, 125432 (2005)], thereby exploiting event locality to improve the algorithm scalability. This algorithm is based on a domain decomposition in which events happen independently in different regions in the sample. We develop an analytical expression for the speedup given by this sublattice parallel replica dynamics algorithm and compare it with parallel MD and traditional parallel replica dynamics. We demonstrate how this algorithm, which introduces a slight additional approximation of event locality, enables the study of physical systems unreachable with traditional methodologies and promises to better utilize the resources of current high performance and future exascale computers.
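The paper's analytical speedup expression is not reproduced in the abstract, so the sketch below encodes a generic, Amdahl-style stand-in: a replica boost limited by a decorrelation time, multiplied by the number of independent domains. The function names and parameter values are hypothetical illustrations, not the published formula.

```python
# Hedged sketch of a speedup estimate for replica/domain parallelism.
# This generic model (a linear replica boost that saturates once the
# decorrelation time dominates, multiplied by the domain count) is an
# assumption for illustration only.

def prd_speedup(n_replicas: int, t_event: float, t_corr: float) -> float:
    """Idealized parallel-replica boost: only the fraction
    t_event / (t_event + t_corr) of wall time parallelizes."""
    parallel_fraction = t_event / (t_event + t_corr)
    return 1.0 / ((1 - parallel_fraction) + parallel_fraction / n_replicas)

def sublattice_speedup(n_domains: int, n_replicas: int,
                       t_event: float, t_corr: float) -> float:
    """Assumes events in different domains are independent, so the
    replica speedup multiplies by the number of domains."""
    return n_domains * prd_speedup(n_replicas, t_event, t_corr)

print(sublattice_speedup(n_domains=8, n_replicas=64,
                         t_event=10.0, t_corr=0.5))
```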
Analytic second derivatives of the energy in the fragment molecular orbital method
NASA Astrophysics Data System (ADS)
Nakata, Hiroya; Nagata, Takeshi; Fedorov, Dmitri G.; Yokojima, Satoshi; Kitaura, Kazuo; Nakamura, Shinichiro
2013-04-01
We developed the analytic second derivatives of the energy for the fragment molecular orbital (FMO) method. First we derived the analytic expressions and then introduced some approximations related to the first and second order coupled perturbed Hartree-Fock equations. We developed a parallel program for the FMO Hessian with approximations in GAMESS and used it to calculate infrared (IR) spectra and Gibbs free energies and to locate the transition states in SN2 reactions. The accuracy of the Hessian is demonstrated in comparison to ab initio results for polypeptides and a water cluster. Using a division of two residues per fragment, we achieved an accuracy of 3 cm-1 in the reduced mean square deviation of vibrational frequencies from ab initio values for all three polyalanine isomers, while the error in the zero point energy did not exceed 0.3 kcal/mol. The role of the secondary structure on IR spectra, zero point energies, and Gibbs free energies is discussed.
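The downstream use of such a Hessian (harmonic IR frequencies) reduces to mass-weighting and diagonalization. Below is a minimal sketch of that step alone on a toy diatomic Hessian, with unit conversion to cm-1 omitted; it does not implement the FMO approximations themselves.

```python
import numpy as np

# Harmonic frequencies from a (3N x 3N) Cartesian Hessian; the Hessian
# and masses here are placeholders for illustration.

def harmonic_frequencies(hessian: np.ndarray, masses: np.ndarray) -> np.ndarray:
    """Mass-weight the Hessian, diagonalize, and return sqrt(eigenvalues).
    masses has one entry per atom; each repeats for x, y, z."""
    inv_sqrt_m = 1.0 / np.sqrt(np.repeat(masses, 3))
    f = hessian * np.outer(inv_sqrt_m, inv_sqrt_m)   # M^-1/2 H M^-1/2
    eigvals = np.linalg.eigvalsh(f)
    # Negative eigenvalues signal imaginary modes (e.g. a transition state).
    return np.sign(eigvals) * np.sqrt(np.abs(eigvals))

# Toy example: a "diatomic" with spring constant k along x.
k, m1, m2 = 1.0, 1.0, 2.0
h = np.zeros((6, 6))
h[0, 0] = h[3, 3] = k
h[0, 3] = h[3, 0] = -k
print(harmonic_frequencies(h, np.array([m1, m2])))  # one mode at sqrt(k/mu)
```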
Analytical and numerical treatment of drift-tearing modes in plasma slab
NASA Astrophysics Data System (ADS)
Mirnov, V. V.; Hegna, C. C.; Sovinec, C. R.; Howell, E. C.
2016-10-01
Two-fluid corrections to linear tearing modes include 1) diamagnetic drifts that reduce the growth rate and 2) electron and ion decoupling on short scales that can lead to fast reconnection. We have recently developed an analytical model that includes effects 1) and 2) and an important contribution from finite electron parallel thermal conduction. Both tendencies 1) and 2) are confirmed by an approximate analytic dispersion relation that is derived using a perturbative approach in the small ion-sound gyroradius ρs. This approach is only valid at the beginning of the transition from the collisional to semi-collisional regimes. Further analytical and numerical work is performed to cover the full interval of ρs connecting these two limiting cases. Growth rates are computed from analytic theory with a shooting method. They match the resistive MHD regime and the dispersion relations known at asymptotically large ion-sound gyroradius. A comparison between this analytical treatment and linear numerical simulations using the NIMROD code with cold ions and hot electrons in a plasma slab is reported. The material is based on work supported by the U.S. DOE and NSF.
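The shooting method mentioned above can be illustrated on a much simpler eigenvalue problem. The sketch below recovers the lowest eigenvalue of y'' + λy = 0 with zero boundary values (exactly π²) by integrating from one boundary and root-finding on the miss at the other; the actual drift-tearing dispersion relation is far more involved.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Generic shooting-method sketch: y'' + lam*y = 0, y(0) = y(1) = 0.
# Illustrates only the numerical technique named in the abstract.

def boundary_miss(lam: float) -> float:
    """Integrate from x=0 with y(0)=0, y'(0)=1 and return y(1);
    a root in lam means the far boundary condition is satisfied."""
    sol = solve_ivp(lambda x, y: [y[1], -lam * y[0]],
                    (0.0, 1.0), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

lam = brentq(boundary_miss, 5.0, 15.0)   # bracket around pi^2 ~ 9.87
print(lam, np.pi**2)
```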
NASA Astrophysics Data System (ADS)
Chen, Kewei; Zhan, Hongbin
2018-06-01
The reactive solute transport in a single fracture bounded by upper and lower matrixes is a classical problem that captures the dominant factors affecting transport behavior beyond the pore scale. A parallel fracture-matrix system, which considers the interaction among multiple parallel fractures, is an extension of the single fracture-matrix system. Existing analytical or semi-analytical solutions for solute transport in a parallel fracture-matrix system simplify the problem to various degrees, such as neglecting the transverse dispersion in the fracture and/or the longitudinal diffusion in the matrix. The difficulty of solving the full two-dimensional (2-D) problem lies in the calculation of the mass exchange between the fracture and the matrix. In this study, we propose an innovative Green's function approach to address the 2-D reactive solute transport in a parallel fracture-matrix system. The flux at the interface is calculated numerically. It is found that the transverse dispersion in the fracture can be safely neglected due to the small scale of the fracture aperture. However, neglecting the longitudinal matrix diffusion would overestimate the concentration profile near the solute entrance face and underestimate the concentration profile at the far side. The error caused by neglecting the longitudinal matrix diffusion decreases with increasing Peclet number. The longitudinal matrix diffusion does not have an obvious influence on the concentration profile in the long term. The developed model is applied to a dense non-aqueous-phase-liquid (DNAPL) contamination field case in the New Haven Arkose of Connecticut, USA, to estimate trichloroethylene (TCE) behavior over 40 years. The ratio of TCE mass stored in the matrix to the injected TCE mass increases to above 90% in less than 10 years.
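For intuition about the Peclet-number dependence noted above, the classical Ogata-Banks solution for 1-D advection-dispersion is easy to evaluate. The sketch below uses it only as an illustration; it does not reproduce the paper's coupled fracture-matrix Green's function solution.

```python
import numpy as np
from scipy.special import erfc

# Classical Ogata-Banks solution for 1-D advection-dispersion with a
# constant-concentration inlet; used here only to show how the Peclet
# number Pe = v*L/D controls the shape of the concentration profile.

def ogata_banks(x, t, v, D, c0=1.0):
    a = (x - v * t) / (2.0 * np.sqrt(D * t))
    b = (x + v * t) / (2.0 * np.sqrt(D * t))
    return 0.5 * c0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

x = np.linspace(0.01, 10.0, 5)
v, L = 1.0, 10.0
for Pe in (1.0, 10.0, 100.0):     # vary dispersion at fixed v and L
    D = v * L / Pe
    print(Pe, ogata_banks(x, t=5.0, v=v, D=D))
```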
NASA Astrophysics Data System (ADS)
Stupakov, Gennady; Zhou, Demin
2016-04-01
We develop a general model of coherent synchrotron radiation (CSR) impedance with shielding provided by two parallel conducting plates. This model allows us to easily reproduce all previously known analytical CSR wakes and to expand the analysis to situations not explored before. It reduces calculations of the impedance to taking integrals along the trajectory of the beam. New analytical results are derived for the radiation impedance with shielding for the following orbits: a kink, a bending magnet, a wiggler of finite length, and an infinitely long wiggler. All our formulas are benchmarked against numerical simulations with the CSRZ computer code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stupakov, Gennady; Zhou, Demin
2016-04-21
We develop a general model of coherent synchrotron radiation (CSR) impedance with shielding provided by two parallel conducting plates. This model allows us to easily reproduce all previously known analytical CSR wakes and to expand the analysis to situations not explored before. It reduces calculations of the impedance to taking integrals along the trajectory of the beam. New analytical results are derived for the radiation impedance with shielding for the following orbits: a kink, a bending magnet, a wiggler of finite length, and an infinitely long wiggler. All our formulas are benchmarked against numerical simulations with the CSRZ computer code.
Watson, Nathanial E; Prebihalo, Sarah E; Synovec, Robert E
2017-08-29
Comprehensive three-dimensional gas chromatography with time-of-flight mass spectrometry (GC³-TOFMS) creates an opportunity to explore a new paradigm in chemometric analysis. Using this newly described instrument and the well understood Parallel Factor Analysis (PARAFAC) model, we present one option for utilizing the novel GC³-TOFMS data structure. We present a method which builds upon previous work in both GC³ and targeted analysis using PARAFAC to simplify some of the implementation challenges previously discovered. Conceptualizing the GC³-TOFMS instead as a one-dimensional gas chromatograph with GC × GC-TOFMS detection, we allow the instrument to create the PARAFAC target window natively. Each first dimension modulation thus creates a full GC × GC-TOFMS chromatogram fully amenable to PARAFAC. A simple mixture of 115 compounds and a diesel sample are interrogated through this methodology. All test analyte targets are successfully identified in both mixtures. In addition, mass spectral matching of the PARAFAC loadings to library spectra yielded match values greater than 900 in 40 of 42 test analyte cases. Twenty-nine of these cases produced match values greater than 950. Copyright © 2017 Elsevier B.V. All rights reserved.
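The chemometric core of this workflow, fitting a trilinear PARAFAC model to a third-order data block, can be sketched with the tensorly library on synthetic data. The tensor shapes below merely stand in for a (modulation x second-dimension retention x m/z) window, and the cosine-similarity check is a simplified stand-in for library matching.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
rank = 2
# Build a noisy trilinear tensor from known factors (the "analytes").
A = rng.random((30, rank))   # elution profiles, mode 1
B = rng.random((20, rank))   # elution profiles, mode 2
C = rng.random((50, rank))   # mass spectra,     mode 3
X = np.einsum('ir,jr,kr->ijk', A, B, C) + 0.01 * rng.random((30, 20, 50))

weights, factors = parafac(tl.tensor(X), rank=rank, normalize_factors=True)
spectra = factors[2]         # mode-3 loadings ~ recovered mass spectra

# A library-match step would compare each recovered spectrum against
# reference spectra, e.g. by cosine similarity:
ref = C / np.linalg.norm(C, axis=0)
est = spectra / np.linalg.norm(spectra, axis=0)
print(np.abs(est.T @ ref))   # near-permutation matrix if recovery is good
```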
Microchannel gel electrophoretic separation systems and methods for preparing and using
Herr, Amy E; Singh, Anup K; Throckmorton, Daniel J
2015-02-24
A micro-analytical platform for performing electrophoresis-based immunoassays was developed by integrating photopolymerized cross-linked polyacrylamide gels within a microfluidic device. The microfluidic immunoassays are performed by gel electrophoretic separation and quantifying analyte concentration based upon conventional polyacrylamide gel electrophoresis (PAGE). To retain biological activity of proteins and maintain intact immune complexes, native PAGE conditions were employed. Both direct (non-competitive) and competitive immunoassay formats are demonstrated in microchips for detecting toxins and biomarkers (cytokines, c-reactive protein) in bodily fluids (serum, saliva, oral fluids). Further, a description of gradient gels fabrication is included, in an effort to describe methods we have developed for further optimization of on-chip PAGE immunoassays. The described chip-based PAGE immunoassay method enables immunoassays that are fast (minutes) and require very small amounts of sample (less than a few microliters). Use of microfabricated chips as a platform enables integration, parallel assays, automation and development of portable devices.
Microchannel gel electrophoretic separation systems and methods for preparing and using
Herr, Amy; Singh, Anup K; Throckmorton, Daniel J
2013-09-03
A micro-analytical platform for performing electrophoresis-based immunoassays was developed by integrating photopolymerized cross-linked polyacrylamide gels within a microfluidic device. The microfluidic immunoassays are performed by gel electrophoretic separation and quantifying analyte concentration based upon conventional polyacrylamide gel electrophoresis (PAGE). To retain biological activity of proteins and maintain intact immune complexes, native PAGE conditions were employed. Both direct (non-competitive) and competitive immunoassay formats are demonstrated in microchips for detecting toxins and biomarkers (cytokines, c-reactive protein) in bodily fluids (serum, saliva, oral fluids). Further, a description of gradient gels fabrication is included, in an effort to describe methods we have developed for further optimization of on-chip PAGE immunoassays. The described chip-based PAGE immunoassay method enables immunoassays that are fast (minutes) and require very small amounts of sample (less than a few microliters). Use of microfabricated chips as a platform enables integration, parallel assays, automation and development of portable devices.
Efficient Iterative Methods Applied to the Solution of Transonic Flows
NASA Astrophysics Data System (ADS)
Wissink, Andrew M.; Lyrintzis, Anastasios S.; Chronopoulos, Anthony T.
1996-02-01
We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian with preconditioned conjugate gradient-like iterative solvers for solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-Iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems.
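The Newton-Krylov structure described above (a Krylov linear solve inside each Newton iteration) is available off the shelf in SciPy. The sketch below applies it to a small 1-D nonlinear model problem that only stands in for the transonic small disturbance equation; here the Jacobian is applied matrix-free rather than formed analytically as in the paper.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Each Newton step solves its linear system with GMRES; the model
# problem u'' + u^2 = 1 with u = 0 at both ends is illustrative only.
n = 100
dx = 1.0 / (n + 1)
rhs = np.ones(n)

def residual(u):
    """F(u) = u'' + u^2 - rhs, second difference with zero boundaries."""
    upad = np.concatenate(([0.0], u, [0.0]))
    d2u = (upad[2:] - 2 * upad[1:-1] + upad[:-2]) / dx**2
    return d2u + u**2 - rhs

u = newton_krylov(residual, np.zeros(n), method='gmres', f_tol=1e-8)
print(np.abs(residual(u)).max())
```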
Rubio, L; Ortiz, M C; Sarabia, L A
2014-04-11
A non-separative, fast and inexpensive spectrofluorimetric method based on the second order calibration of excitation-emission fluorescence matrices (EEMs) was proposed for the determination of carbaryl, carbendazim and 1-naphthol in dried lime tree flowers. The trilinearity property of three-way data was used to handle the intrinsic fluorescence of lime flowers and the difference in the fluorescence intensity of each analyte. It also made it possible to identify each analyte unequivocally. Trilinearity of the data tensor guarantees the uniqueness of the solution obtained through parallel factor analysis (PARAFAC), so the factors of the decomposition match up with the analytes. In addition, an experimental procedure was proposed to identify, with three-way data, the quenching effect produced by the fluorophores of the lime flowers. This procedure also enabled the selection of an adequate dilution of the lime flowers extract to minimize the quenching effect so that the three analytes could be quantified. Finally, the analytes were determined using the standard addition method for a calibration whose standards were chosen with a D-optimal design. The three analytes were unequivocally identified by the correlation between the pure spectra and the PARAFAC excitation and emission spectral loadings. The trueness was established by the accuracy line "calculated concentration versus added concentration" in all cases. Better decision limit values (CCα), at x0 = 0 with the probability of a false positive fixed at 0.05, were obtained for the calibration performed in pure solvent: 2.97 μg L⁻¹ for 1-naphthol, 3.74 μg L⁻¹ for carbaryl and 23.25 μg L⁻¹ for carbendazim. The CCα values for the second calibration carried out in matrix were 1.61, 4.34 and 51.75 μg L⁻¹, respectively, while the values obtained considering only the pure samples as the calibration set were 2.65, 8.61 and 28.7 μg L⁻¹, respectively. Copyright © 2014 Elsevier B.V. All rights reserved.
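The univariate core of the standard addition method used above is a straight-line fit extrapolated to zero signal. A minimal sketch with invented numbers, omitting the PARAFAC scores and D-optimal design of the paper:

```python
import numpy as np

# Standard addition: spike known amounts into the sample, fit a line,
# and read the analyte already present from the extrapolated intercept.
added = np.array([0.0, 5.0, 10.0, 15.0, 20.0])   # spiked conc., ug/L
signal = np.array([3.1, 5.0, 7.2, 9.1, 11.0])    # e.g. a PARAFAC score

slope, intercept = np.polyfit(added, signal, 1)
c_sample = intercept / slope    # magnitude of the x-axis intercept
print(f"estimated concentration: {c_sample:.2f} ug/L")
```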
Napolitano, José G.; Gödecke, Tanja; Lankin, David C.; Jaki, Birgit U.; McAlpine, James B.; Chen, Shao-Nong; Pauli, Guido F.
2013-01-01
The development of analytical methods for parallel characterization of multiple phytoconstituents is essential to advance the quality control of herbal products. While chemical standardization is commonly carried out by targeted analysis using gas or liquid chromatography-based methods, more universal approaches based on quantitative 1H NMR (qHNMR) measurements are being used increasingly in the multi-targeted assessment of these complex mixtures. The present study describes the development of a 1D qHNMR-based method for simultaneous identification and quantification of green tea constituents. This approach utilizes computer-assisted 1H iterative Full Spin Analysis (HiFSA) and enables rapid profiling of seven catechins in commercial green tea extracts. The qHNMR results were cross-validated against quantitative profiles obtained with an orthogonal LC-MS/MS method. The relative strengths and weaknesses of both approaches are discussed, with special emphasis on the role of identical reference standards in qualitative and quantitative analyses. PMID:23870106
Anandakrishnan, Ramu; Scogland, Tom R. W.; Fenley, Andrew T.; Gordon, John C.; Feng, Wu-chun; Onufriev, Alexey V.
2010-01-01
Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multiscale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. PMID:20452792
Gooding, Thomas Michael [Rochester, MN]
2011-04-19
An analytical mechanism for a massively parallel computer system automatically analyzes data retrieved from the system, and identifies nodes which exhibit anomalous behavior in comparison to their immediate neighbors. Preferably, anomalous behavior is determined by comparing call-return stack tracebacks for each node, grouping like nodes together, and identifying neighboring nodes which do not themselves belong to the group. A node, not itself in the group, having a large number of neighbors in the group, is a likely locality of error. The analyzer preferably presents this information to the user by sorting the neighbors according to number of adjoining members of the group.
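A hedged sketch of that grouping-and-scoring idea follows, using invented data structures (a node-to-traceback map and an adjacency list); the patent's actual mechanism operates on call-return stack tracebacks collected from a massively parallel machine.

```python
from collections import Counter, defaultdict

# Group nodes by identical traceback, then flag out-of-group nodes with
# many in-group neighbors as likely error localities.

def likely_error_nodes(tracebacks: dict, neighbors: dict):
    """tracebacks: node -> traceback string; neighbors: node -> iterable."""
    groups = defaultdict(set)
    for node, tb in tracebacks.items():
        groups[tb].add(node)

    majority = max(groups.values(), key=len)   # the "like" group
    scores = Counter()
    for node in majority:
        for nb in neighbors[node]:
            if nb not in majority:             # anomalous neighbor
                scores[nb] += 1
    # Sort anomalous nodes by how many group members adjoin them.
    return scores.most_common()

tb = {0: "A", 1: "A", 2: "A", 3: "B", 4: "A"}
adj = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2, 4], 4: [3]}
print(likely_error_nodes(tb, adj))   # node 3 adjoins 4 group members
```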
Experimental and Analytical Determinations of Spiral Bevel Gear-Tooth Bending Stress Compared
NASA Technical Reports Server (NTRS)
Handschuh, Robert F.
2000-01-01
Spiral bevel gears are currently used in all main-rotor drive systems for rotorcraft produced in the United States. Applications such as these need spiral bevel gears to turn the corner from the horizontal gas turbine engine to the vertical rotor shaft. These gears must typically operate at extremely high rotational speeds and carry high power levels. With these difficult operating conditions, an improved analytical capability is paramount to increasing aircraft safety and reliability. Also, literature on the analysis and testing of spiral bevel gears has been very sparse in comparison to that for parallel axis gears. This is due to the complex geometry of this type of gear and to the specialized test equipment necessary to test these components. To develop an analytical model of spiral bevel gears, researchers use differential geometry methods to model the manufacturing kinematics. A three-dimensional spiral bevel gear modeling method was developed that uses finite elements for the structural analysis. This method was used to analyze the three-dimensional contact pattern between the test pinion and gear used in the Spiral Bevel Gear Test Facility at the NASA Glenn Research Center at Lewis Field. Results of this analysis are illustrated in the preceding figure. The development of the analytical method was a joint endeavor between NASA Glenn, the U.S. Army Research Laboratory, and the University of North Dakota.
Parallel implementation of Hartree-Fock and density functional theory analytical second derivatives
NASA Astrophysics Data System (ADS)
Baker, Jon; Wolinski, Krzysztof; Malagoli, Massimo; Pulay, Peter
2004-01-01
We present an efficient, parallel implementation for the calculation of Hartree-Fock and density functional theory analytical Hessian (force constant, nuclear second derivative) matrices. These are important for the determination of harmonic vibrational frequencies, and to classify stationary points on potential energy surfaces. Our program is designed for modest parallelism (4-16 CPUs) as exemplified by our standard eight-processor QuantumCube™. We can routinely handle systems with up to 100+ atoms and 1000+ basis functions using under 0.5 GB of RAM per CPU. Timings are presented for several systems, ranging in size from aspirin (C9H8O4) to nickel octaethylporphyrin (C36H44N4Ni).
Cryogenic parallel, single phase flows: an analytical approach
NASA Astrophysics Data System (ADS)
Eichhorn, R.
2017-02-01
Managing the cryogenic flows inside a state-of-the-art accelerator cryomodule has become a demanding endeavour: In order to build highly efficient modules, all heat transfers are usually intercepted at various temperatures. For a multi-cavity module, operated at 1.8 K, this requires intercepts at 4 K and at 80 K at different locations with sometimes strongly varying heat loads which for simplicity reasons are operated in parallel. This contribution will describe an analytical approach, based on optimization theories.
Towards nonaxisymmetry: initial results using the Flux Coordinate Independent method in BOUT++
NASA Astrophysics Data System (ADS)
Shanahan, B. W.; Hill, P.; Dudson, B. D.
2016-11-01
Fluid simulation of stellarator edge transport is difficult due to the complexities of mesh generation; the stochastic edge and strong nonaxisymmetry inhibit the use of field aligned coordinate systems. The recent implementation of the Flux Coordinate Independent method for calculating parallel derivatives in BOUT++ has allowed for more complex geometries. Here we present initial results of nonaxisymmetric diffusion modelling as a step towards stellarator turbulence modelling. We then present initial (non-turbulent) transport modelling using the FCI method and compare the results with analytical calculations. The prospects for future stellarator transport and turbulence modelling are discussed.
HPLC-Based Method to Evaluate Kinetics of Glucosinolate Hydrolysis by Sinapis alba Myrosinase
Vastenhout, Kayla J.; Tornberg, Ruthellen H.; Johnson, Amanda L.; Amolins, Michael W.; Mays, Jared R.
2014-01-01
Isothiocyanates (ITCs) are one of several hydrolysis products of glucosinolates, plant secondary metabolites which are substrates for the thioglucohydrolase myrosinase. Recent pursuits toward the development of synthetic, non-natural ITCs have consequently led to an exploration of generating these compounds from non-natural glucosinolate precursors. Evaluation of the myrosinase-dependent conversion of select non-natural glucosinolates to non-natural ITCs cannot be accomplished using established UV-Vis spectroscopic methods. To overcome this limitation, an alternative HPLC-based analytical approach was developed where initial reaction velocities were generated from non-linear reaction progress curves. Validation of this HPLC method was accomplished through parallel evaluation of three glucosinolates with UV-Vis methodology. The results of this study demonstrate that kinetic data is consistent between both analytical methods and that the tested glucosinolates respond similarly to both Michaelis–Menten and specific activity analyses. Consequently, this work resulted in the complete kinetic characterization of three glucosinolates with Sinapis alba myrosinase, with results that were consistent with previous reports. PMID:25068719
New robust bilinear least squares method for the analysis of spectral-pH matrix data.
Goicoechea, Héctor C; Olivieri, Alejandro C
2005-07-01
A new second-order multivariate method has been developed for the analysis of spectral-pH matrix data, based on a bilinear least-squares (BLLS) model achieving the second-order advantage and handling multiple calibration standards. A simulated Monte Carlo study of synthetic absorbance-pH data allowed comparison of the newly proposed BLLS methodology with constrained parallel factor analysis (PARAFAC) and with the combined multivariate curve resolution-alternating least-squares (MCR-ALS) technique under different conditions of sample-to-sample pH mismatch and analyte-background ratio. The results indicate an improved prediction ability for the new method. Experimental data generated by measuring absorption spectra of several calibration standards of ascorbic acid and samples of orange juice were subjected to second-order calibration analysis with PARAFAC, MCR-ALS, and the new BLLS method. The results indicate that the latter method provides the best analytical results in regard to analyte recovery in samples of complex composition requiring strict adherence to the second-order advantage. Linear dependencies appear when multivariate data are produced by using the pH or a reaction time as one of the data dimensions, posing a challenge to classical multivariate calibration models. The presently discussed algorithm is useful for these latter systems.
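The bilinear alternating-least-squares core shared by MCR-ALS and BLLS-type methods can be sketched in a few lines. The version below is unconstrained and synthetic, and omits the published algorithm's handling of multiple calibration standards and the second-order advantage.

```python
import numpy as np

# Alternating least squares for a bilinear model D ~ C @ S.T:
# C holds concentration profiles vs pH, S holds component spectra.
rng = np.random.default_rng(1)
C_true = rng.random((40, 2))
S_true = rng.random((60, 2))
D = C_true @ S_true.T + 0.001 * rng.standard_normal((40, 60))

S = rng.random((60, 2))          # random initial spectra
for _ in range(200):
    C = D @ S @ np.linalg.inv(S.T @ S)       # LS update, spectra fixed
    S = D.T @ C @ np.linalg.inv(C.T @ C)     # LS update, profiles fixed

print(np.linalg.norm(D - C @ S.T) / np.linalg.norm(D))  # residual fraction
```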
Moving from Within The Maternal: The Choreography of Analytic Eroticism.
Elise, Dianne
2017-02-01
With Kristeva's concept of maternal eroticism (2014) as a starting point, the "multiverse" of mother/child erotic sensibilities, the dance of the semiotic chora, is explored and a parallel engagement proposed within the analytic dyad. The dance of psychoanalysis is not the creative product of the patient's mind alone. Clinical work invites, requires, a choreographic engagement by the clinician in interplay with the patient. The clinician's analytic activity is thus akin to choreography: the structuring of a dance, or of a session, expresses an inner impulse brought into narrative form. The embodied art of dance parallels the clinician's creative vitality in contributing to the shaping of the movement of a session. Through formulation of an analytic eroticism, the terrain of what traditionally has been viewed as erotic transference and countertransference can be expanded to clinical benefit.
Analytical and numerical study of electroosmotic slip flows of fractional second grade fluids
NASA Astrophysics Data System (ADS)
Wang, Xiaoping; Qi, Haitao; Yu, Bo; Xiong, Zhen; Xu, Huanying
2017-09-01
This work investigates the unsteady electroosmotic slip flow of a viscoelastic fluid through a parallel plate micro-channel under the combined influence of electroosmotic and pressure gradient forcings with asymmetric zeta potentials at the walls. The generalized second grade fluid with fractional derivative was used for the constitutive equation. The Navier slip model with different slip coefficients at both walls was also considered. By employing the Debye-Hückel linearization and the Laplace and sin-cos-Fourier transforms, analytical solutions for the velocity distribution are derived. A finite difference method for the problem is also given. Finally, the influence of pertinent parameters on the generation of flow is presented graphically.
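For the steady Newtonian limit of this configuration, a finite-difference solve with Navier slip walls fits in a short script. The sketch below assumes a prescribed Debye-Hückel potential profile and dimensionless parameters of my own choosing; the paper's fractional second-grade constitutive model and transform solutions are not reproduced.

```python
import numpy as np

# Steady dimensionless EOF between plates at y = -1 and y = +1:
#   u''(y) = -kappa^2 * cosh(kappa*y)/cosh(kappa)
# with Navier slip u(-1) = b1*u'(-1) and u(+1) = -b2*u'(+1).
kappa, b1, b2 = 20.0, 0.05, 0.0   # Debye parameter, slip lengths (invented)
n = 401
y = np.linspace(-1.0, 1.0, n)
dy = y[1] - y[0]

A = np.zeros((n, n))
rhs = -kappa**2 * np.cosh(kappa * y) / np.cosh(kappa)
for i in range(1, n - 1):          # interior second differences
    A[i, i - 1], A[i, i], A[i, i + 1] = 1 / dy**2, -2 / dy**2, 1 / dy**2
# One-sided first derivatives inside the slip boundary conditions.
A[0, 0], A[0, 1] = 1 + b1 / dy, -b1 / dy
A[-1, -1], A[-1, -2] = 1 + b2 / dy, -b2 / dy
rhs[0] = rhs[-1] = 0.0

u = np.linalg.solve(A, rhs)
# With b1 = b2 = 0 this reproduces the classical no-slip profile
# u = 1 - cosh(kappa*y)/cosh(kappa).
print(u.max())
```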
Performance Models for the Spike Banded Linear System Solver
Manguoglu, Murat; Saied, Faisal; Sameh, Ahmed; ...
2011-01-01
With the availability of large-scale parallel platforms comprised of tens-of-thousands of processors and beyond, there is significant impetus for the development of scalable parallel sparse linear system solvers and preconditioners. An integral part of this design process is the development of performance models capable of predicting performance and providing accurate cost models for the solvers and preconditioners. There has been some work in the past on characterizing performance of the iterative solvers themselves. In this paper, we investigate the problem of characterizing performance and scalability of banded preconditioners. Recent work has demonstrated the superior convergence properties and robustness of banded preconditioners, compared to the state-of-the-art ILU family of preconditioners as well as algebraic multigrid preconditioners. Furthermore, when used in conjunction with efficient banded solvers, banded preconditioners are capable of significantly faster time-to-solution. Our banded solver, the Truncated Spike algorithm, is specifically designed for parallel performance and tolerance to deep memory hierarchies. Its regular structure is also highly amenable to accurate performance characterization. Using these characteristics, we derive the following results in this paper: (i) we develop parallel formulations of the Truncated Spike solver, (ii) we develop a highly accurate pseudo-analytical parallel performance model for our solver, and (iii) we show excellent prediction capabilities of our model, based on which we argue for the high scalability of our solver. Our pseudo-analytical performance model is based on analytical performance characterization of each phase of our solver. These analytical models are then parameterized using actual runtime information on target platforms. An important consequence of our performance models is that they reveal underlying performance bottlenecks in both serial and parallel formulations. All of our results are validated on diverse heterogeneous multiclusters, platforms for which performance prediction is particularly challenging. Finally, we use our model to predict the scalability of the Spike algorithm on up to 65,536 cores. In this paper we extend the results presented in the Ninth International Symposium on Parallel and Distributed Computing.
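The "pseudo-analytical" modeling idea, an analytical cost form calibrated against measured runtimes and then extrapolated, can be sketched generically. The model form and timings below are invented for illustration and are not the paper's Spike model.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(p, a, b, c):
    # a/p: perfectly parallel work; b*log2(p): reduction/communication
    # cost; c: fixed serial overhead. A generic, assumed cost form.
    return a / p + b * np.log2(p) + c

cores = np.array([16, 32, 64, 128, 256, 512], dtype=float)
seconds = np.array([10.3, 5.4, 3.0, 1.9, 1.5, 1.4])   # invented timings

(a, b, c), _ = curve_fit(model, cores, seconds, p0=(100.0, 0.1, 0.1))
for p in (4096, 65536):
    print(p, model(p, a, b, c))   # extrapolated time-to-solution
```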
Research on bathymetry estimation by Worldview-2 based with the semi-analytical model
NASA Astrophysics Data System (ADS)
Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.
2015-04-01
The South Sea Islands of China are far from the mainland, reefs make up more than 95% of the South Sea, and most reefs are scattered over disputed, sensitive areas. Methods for accurately obtaining reef bathymetry are therefore urgently needed. Commonly used methods, including sonar, airborne laser, and remote sensing estimation, are limited by the long distances, large areas, and sensitive locations involved. Remote sensing data provide an effective way to estimate bathymetry over large areas without physical contact, by exploiting the relationship between spectral information and water depth. Tailored to the water quality of the South Sea of China, this paper develops a bathymetry estimation method that requires no measured water depth. First, a semi-analytical optimization model derived from theoretical interpretation models is studied, with a genetic algorithm used to optimize the model. An OpenMP parallel computing algorithm is also introduced to greatly increase the speed of the semi-analytical optimization model. One island in the South Sea of China is selected as the study area, and measured water depths are used to evaluate the accuracy of bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm performs well in the study area, and the accuracy of the estimated bathymetry in the 0-20 m shallow water zone is acceptable. The semi-analytical optimization model based on the genetic algorithm solves the problem of bathymetry estimation without water depth measurements. Overall, this paper provides a new bathymetry estimation method for sensitive reefs far from the mainland.
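As a hedged illustration of the inversion step, the sketch below fits a Maritorena-type shallow-water reflectance model, R = R_inf*(1 - exp(-2*K*H)) + A*exp(-2*K*H), per pixel with SciPy's differential evolution, which stands in for the paper's genetic algorithm; all band constants are invented.

```python
import numpy as np
from scipy.optimize import differential_evolution

K = np.array([0.08, 0.12, 0.35, 0.60])       # diffuse attenuation per band
R_inf = np.array([0.02, 0.03, 0.01, 0.005])  # deep-water reflectance
A = np.array([0.15, 0.18, 0.20, 0.22])       # bottom albedo per band

def forward(H):
    """Two-flux shallow-water reflectance for depth H (meters)."""
    e = np.exp(-2.0 * K * H)
    return R_inf * (1.0 - e) + A * e

H_true = 7.5
observed = forward(H_true) + 0.0005 * np.random.default_rng(2).standard_normal(4)

# Evolutionary search for the depth minimizing the spectral misfit.
result = differential_evolution(
    lambda x: np.sum((forward(x[0]) - observed) ** 2),
    bounds=[(0.0, 20.0)], seed=2, tol=1e-12)
print(result.x[0])   # retrieved depth, ~7.5 m
```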
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, A N; Turchin, I V
2013-12-31
The method of optical coherence tomography with a scheme of parallel reception of the interference signal (P-OCT) is developed on the basis of spatial paralleling of the reference wave by means of a phase diffraction grating producing the appropriate time delay in the Mach-Zehnder interferometer. The absence of mechanical variation of the optical path difference in the interferometer essentially reduces the time required for 2D imaging of the object's internal structure, as compared to classical OCT that uses the time-domain method of image construction, the sensitivity and dynamic range being comparable in both approaches. For the resulting field of the interfering object and reference waves, an analytical expression is derived that allows the calculation of the autocorrelation function in the plane of the photodetectors. For the first time a method of linear phase modulation by 2π is proposed for P-OCT systems, which allows the use of compact high-frequency (a few hundred kHz) piezoelectric cell-based modulators. To demonstrate the P-OCT method, an experimental setup was created, with which images of the inner structure of biological objects at depths up to 1 mm with an axial spatial resolution of 12 μm were obtained.
A parallel direct-forcing fictitious domain method for simulating microswimmers
NASA Astrophysics Data System (ADS)
Gao, Tong; Lin, Zhaowu
2017-11-01
We present a 3D parallel direct-forcing fictitious domain method for simulating swimming micro-organisms at small Reynolds numbers. We treat the motile micro-swimmers as spherical rigid particles using the "Squirmer" model. The particle dynamics are solved on moving Lagrangian meshes that overlie a fixed Eulerian mesh for solving the fluid motion, and the momentum exchange between the two phases is resolved by distributing pseudo body-forces over the particle interior regions, which constrain the background fictitious fluid to follow the particle movement. While the solid and fluid subproblems are solved separately, no inner iterations are required to enforce numerical convergence. We demonstrate the accuracy and robustness of the method by comparing our results with existing analytical and numerical studies for various cases of single particle dynamics and particle-particle interactions. We also perform a series of numerical explorations to obtain statistical and rheological measurements to characterize the dynamics and structures of Squirmer suspensions. NSF DMS 1619960.
Noise radiation directivity from a wind-tunnel inlet with inlet vanes and duct wall linings
NASA Technical Reports Server (NTRS)
Soderman, P. T.; Phillips, J. D.
1986-01-01
The acoustic radiation patterns from a 1/15th scale model of the Ames 80- by 120-Ft Wind Tunnel test section and inlet have been measured with a noise source installed in the test section. Data were acquired without airflow in the duct. Sound-absorbent inlet vanes oriented parallel to each other, or splayed with a variable incidence relative to the duct long axis, were evaluated along with duct wall linings. Results show that splayed vanes tend to spread the sound to greater angles than those measured with the open inlet. Parallel vanes narrowed the high-frequency radiation pattern. Duct wall linings had a strong effect on acoustic directivity by attenuating wall reflections. Vane insertion loss was measured. Directivity results are compared with existing data from square ducts. Two prediction methods for duct radiation directivity are described: one is an empirical method based on the test data, and the other is an analytical method based on ray acoustics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naito, O.
2015-08-15
An analytic formula has been derived for the relativistic incoherent Thomson backscattering spectrum for a drifting anisotropic plasma when the scattering vector is parallel to the drift direction. The shape of the scattering spectrum is insensitive to the electron temperature perpendicular to the scattering vector, but its amplitude may be modulated. As a result, while the measured temperature correctly represents the electron distribution parallel to the scattering vector, the electron density may be underestimated when the perpendicular temperature is higher than the parallel temperature. Since the scattering spectrum at shorter wavelengths is greatly enhanced by the existence of drift, the diagnostics might be used to measure local electron current density in fusion plasmas.
Lattice Boltzmann approach for complex nonequilibrium flows.
Montessori, A; Prestininzi, P; La Rocca, M; Succi, S
2015-10-01
We present a lattice Boltzmann realization of Grad's extended hydrodynamic approach to nonequilibrium flows. This is achieved by using higher-order isotropic lattices coupled with a higher-order regularization procedure. The method is assessed for flow across parallel plates and three-dimensional flows in porous media, showing excellent agreement of the mass flow with analytical and numerical solutions of the Boltzmann equation across the full range of Knudsen numbers, from the hydrodynamic regime to ballistic motion.
Efficient iterative methods applied to the solution of transonic flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wissink, A.M.; Lyrintzis, A.S.; Chronopoulos, A.T.
1996-02-01
We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian with preconditioned conjugate gradient-like iterative solvers for solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-Iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems. 38 refs., 14 figs., 7 tabs.
Considerations on the Use of Custom Accelerators for Big Data Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castellana, Vito G.; Tumeo, Antonino; Minutoli, Marco
Accelerators, including Graphic Processing Units (GPUs) for general purpose computation and many-core designs with wide vector units (e.g., Intel Phi), have become a common component of many high performance clusters. The appearance of more stable and reliable tools that can automatically convert code written in high-level specifications with annotations (such as C or C++) to hardware description languages (High-Level Synthesis - HLS) is also setting the stage for a broader use of reconfigurable devices (e.g., Field Programmable Gate Arrays - FPGAs) in high performance systems for the implementation of custom accelerators, helped by the fact that new processors include advanced cache-coherent interconnects for these components. In this chapter, we briefly survey the status of the use of accelerators in high performance systems targeted at big data analytics applications. We argue that, although the progress in the use of accelerators for this class of applications has been significant, differently from scientific simulations there still are gaps to close. This is particularly true for the "irregular" behaviors exhibited by NoSQL and graph databases. We focus our attention on the limits of HLS tools for data analytics and graph methods, and discuss a new architectural template that better fits the requirements of this class of applications. We validate the new architectural template by modifying the Graph Engine for Multithreaded System (GEMS) framework to support accelerators generated with such a methodology, and testing with queries coming from the Lehigh University Benchmark (LUBM). The architectural template enables better support for the task and memory level parallelism present in graph methods through a new control model and an enhanced memory interface. We show that our solution allows generating parallel accelerators, providing speedups with respect to conventional HLS flows. We finally draw conclusions and present a perspective on the use of reconfigurable devices and Design Automation tools for data analytics.
Strotmann, Uwe; Reuschenbach, Peter; Schwarz, Helmut; Pagga, Udo
2004-01-01
Well-established biodegradation tests use biogenously evolved carbon dioxide (CO2) as an analytical parameter to determine the ultimate biodegradability of substances. A newly developed analytical technique based on the continuous online measurement of conductivity showed its suitability over other techniques. It could be demonstrated that the method met all criteria of established biodegradation tests, gave continuous biodegradation curves, and was more reliable than other tests. In parallel experiments, only small variations in the biodegradation pattern occurred. When comparing the new online CO2 method with existing CO2 evolution tests, growth rates and lag periods were similar and only the final degree of biodegradation of aniline was slightly lower. A further test development was the unification and parallel measurement of all three important summary parameters for biodegradation—i.e., CO2 evolution, determination of the biochemical oxygen demand (BOD), and removal of dissolved organic carbon (DOC)—in a multicomponent biodegradation test system (MCBTS). The practicability of this test method was demonstrated with aniline. This test system had advantages for poorly water-soluble and highly volatile compounds and allowed the determination of the carbon fraction integrated into biomass (heterotrophic yield). The integrated online measurements of CO2 and BOD systems produced continuous degradation curves, which better met the stringent criteria of ready biodegradability (60% biodegradation in a 10-day window). Furthermore the data could be used to calculate maximal growth rates for the modeling of biodegradation processes. PMID:15294794
Generalized constitutive equations for piezo-actuated compliant mechanism
NASA Astrophysics Data System (ADS)
Cao, Junyi; Ling, Mingxiang; Inman, Daniel J.; Lin, Jin
2016-09-01
This paper formulates analytical models to describe the static displacement and force interactions between generic serial-parallel compliant mechanisms and their loads by employing the matrix method. In keeping with the familiar piezoelectric constitutive equations, the generalized constitutive equations of a compliant mechanism represent the input-output displacement and force relations in the form of a generalized Hooke's law and as analytical functions of physical parameters. Also significantly, a new model of the output displacement for a compliant mechanism interacting with piezo-stacks and elastic loads is deduced based on the generalized constitutive equations. Some original findings differing from the well-known constitutive performance of piezo-stacks are also given. The feasibility of the proposed models is confirmed by finite element analysis and by experiments under various elastic loads. The analytical models can be an insightful tool for predicting and optimizing the performance of a wide class of compliant mechanisms that simultaneously consider the influence of loads and piezo-stacks.
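The generalized Hooke's-law idea can be made concrete with a 2x2 compliance matrix and an elastic load closing the loop; the numbers below are invented and the paper's matrix-method derivation for serial-parallel chains is not reproduced.

```python
import numpy as np

# A 2x2 compliance matrix relates input/output forces to displacements:
#   u_in  = C11*F_in + C12*F_out
#   u_out = C21*F_in + C22*F_out
# An elastic load closes the system via F_out = -k_load * u_out.
C = np.array([[2.0e-6, 0.8e-6],
              [0.8e-6, 3.0e-6]])   # compliance, m/N (invented)
k_load = 2.0e5                     # elastic load stiffness, N/m
F_in = 50.0                        # actuation force from the piezo stack, N

# u_out = C21*F_in + C22*(-k_load*u_out)  =>  solve for u_out:
u_out = C[1, 0] * F_in / (1.0 + C[1, 1] * k_load)
u_free = C[1, 0] * F_in            # output stroke with no load attached
print(u_out, u_free)               # the load reduces the output stroke
```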
Coupling between structure and liquids in a parallel stage space shuttle design
NASA Technical Reports Server (NTRS)
Kana, D. D.; Ko, W. L.; Francis, P. H.; Nagy, A.
1972-01-01
A study was conducted to determine the influence of liquid propellants on the dynamic loads for space shuttle vehicles. A parallel-stage configuration model was designed and tested to determine the influence of liquid propellants on coupled natural modes. A forty degree-of-freedom analytical model was also developed for predicting these modes. Currently available analytical models were used to represent the liquid contributions, even though coupled longitudinal and lateral motions are present in such a complex structure. Agreement between the results was found in the lower few modes.
Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V
2010-06-01
Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed-up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Blom, H; Gösch, M
2004-04-01
Over the past few years we have witnessed a tremendous surge of interest in so-called array-based miniaturised analytical systems due to their value as extremely powerful tools for high-throughput sequence analysis, drug discovery and development, and diagnostic tests in medicine (see articles in Issue 1). Terminologies that have been used to describe these array-based bioscience systems include (but are not limited to): DNA-chip, microarrays, microchip, biochip, DNA-microarrays and genome chip. Potential technological benefits of introducing these miniaturised analytical systems include improved accuracy, multiplexing, lower sample and reagent consumption, disposability, and decreased analysis times, just to mention a few examples. Among the many alternative principles of detection-analysis (e.g. chemiluminescence, electroluminescence and conductivity), fluorescence-based techniques are widely used, examples being fluorescence resonance energy transfer, fluorescence quenching, fluorescence polarisation, time-resolved fluorescence, and fluorescence fluctuation spectroscopy (see articles in Issue 11). Time-dependent fluctuations of fluorescent biomolecules with different molecular properties, like molecular weight, translational and rotational diffusion time, colour and lifetime, potentially provide all the kinetic and thermodynamic information required in analysing complex interactions. In this mini-review article, we present recent extensions aimed at implementing parallel laser excitation and parallel fluorescence detection, which can lead to an even further increase in throughput in miniaturised array-based analytical systems. We also report on developments and characterisations of multiplexing extensions that allow multifocal laser excitation together with matched parallel fluorescence detection for parallel confocal dynamical fluorescence fluctuation studies at the single biomolecule level.
Robles-Molina, José; Gilbert-López, Bienvenida; García-Reyes, Juan F; Molina-Díaz, Antonio
2017-09-29
Pesticide testing of foodstuffs is usually accomplished with generic wide-scope multi-residue methods based on liquid chromatography tandem mass spectrometry (LC-MS/MS). However, this approach does not cover some special pesticides, the so-called "single-residue method" compounds, which are hardly compatible with standard reversed-phase (RP) separations due to their specific properties. In this article, we propose a comprehensive strategy for the integration of single-residue method compounds and standard multi-residue pesticides within a single run. It is based on the use of a parallel LC column assembly with two different LC gradients performing orthogonal hydrophilic interaction chromatography (HILIC) and reversed-phase (RPLC) chromatography within one analytical run. Two sample aliquots were simultaneously injected on each column, using different gradients, with the eluents merged post-column prior to mass spectrometry detection. The approach was tested with 41 multiclass pesticides covering a wide range of physicochemical properties across several orders of log Kow (from -4 to +5.5). With this assembly, distinct separation from the void was attained for all the pesticides studied, keeping performance similar to standard single-column approaches in terms of sensitivity, peak area reproducibility (<6% RSD in most cases) and retention time stability (better than ±0.1 min). The application of the proposed approach using parallel HILIC/RPLC and RPLC/aqueous normal phase (Obelisc) was assessed in leek using LC-MS/MS. For this purpose, a hybrid QuEChERS (quick, easy, cheap, effective, rugged and safe)/QuPPe (quick method for polar pesticides) method was evaluated, based on solvent extraction with MeOH and acetonitrile followed by dispersive solid-phase extraction, delivering appropriate recoveries for most of the pesticides included in the study within the log Kow range from -4 to +5.5. The proposed strategy may be extended to other fields such as sports drug testing or environmental analysis, where the same variety of analytes featuring poor retention within a single chromatographic separation occurs. Copyright © 2017 Elsevier B.V. All rights reserved.
Improved DNA hybridization parameters by Twisted Intercalating Nucleic Acid (TINA).
Schneider, Uffe Vest
2012-01-01
This thesis establishes oligonucleotide design rules and applications for a novel group of DNA stabilizing molecules collectively called Twisted Intercalating Nucleic Acid (TINA). Three peer-reviewed publications form the basis for the thesis. One publication describes an improved and rapid method for determination of DNA melting points, and two publications describe the effects of positioning TINA molecules in parallel triplex helix and antiparallel duplex helix forming DNA structures. The third publication establishes that oligonucleotides containing TINA molecules improve the analytical sensitivity of an antiparallel duplex hybridization based capture assay compared to conventional DNA oligonucleotides. Clinical microbiology is traditionally based on culture of pathogenic microorganisms and serological tests. The introduction of DNA target amplification methods like PCR has improved the analytical sensitivity and total turnaround time involved in clinical diagnostics of infections. Due to the relatively weak hybridization between the two strands of double stranded DNA, a number of nucleic acid stabilizing molecules have been developed to improve the sensitivity of DNA based diagnostics through superior binding properties. A short introduction is given to Watson-Crick and Hoogsteen based DNA binding and the derived DNA structures. A number of other nucleic acid stabilizing molecules are described. The stabilizing effect of TINA molecules on different DNA structures is discussed and considered in relation to other nucleic acid stabilizing molecules and in relation to future use of TINA containing oligonucleotides in clinical diagnostics and therapy. In conclusion, the design of TINA modified oligonucleotides for antiparallel duplex helixes and parallel triplex helixes follows simple purpose dependent rules. TINA molecules are well suited for improving multiplex PCR assays and can be used as part of novel technologies. Future research should test whether combinations of TINA molecules and other nucleic acid stabilizing molecules can increase analytical sensitivity whilst maintaining nucleobase mismatch discrimination in triplex helix based diagnostic assays.
NASA Astrophysics Data System (ADS)
Shahzad, M.; Rizvi, H.; Panwar, A.; Ryu, C. M.
2017-06-01
We have revisited the existence criterion of the reverse shear Alfven eigenmodes (RSAEs) in the presence of a parallel equilibrium current by numerically solving the eigenvalue equation using the fast eigenvalue solver code KAES. The parallel equilibrium current can bring in the kink effect and is known to be strongly unfavorable for the RSAE. We have numerically estimated the critical value of the toroidicity factor Qtor in a circular tokamak plasma, above which RSAEs can exist, and compared it to the analytical one. The difference between the numerical and analytical critical values is small for low frequency RSAEs, but it increases as the frequency of the mode increases, becoming greater for higher poloidal harmonic modes.
Enabling the High Level Synthesis of Data Analytics Accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minutoli, Marco; Castellana, Vito G.; Tumeo, Antonino
Conventional High Level Synthesis (HLS) tools mainly target compute intensive kernels typical of digital signal processing applications. We are developing techniques and architectural templates to enable HLS of data analytics applications. These applications are memory intensive, present fine-grained, unpredictable data accesses, and irregular, dynamic task parallelism. We discuss an architectural template based around a distributed controller to efficiently exploit thread level parallelism. We present a memory interface that supports parallel memory subsystems and enables implementing atomic memory operations. We introduce a dynamic task scheduling approach to efficiently execute heavily unbalanced workloads. The templates are validated by synthesizing queries from the Lehigh University Benchmark (LUBM), a well-known SPARQL benchmark.
Fitz, Brian D; Mannion, Brandyn C; To, Khang; Hoac, Trinh; Synovec, Robert E
2015-05-01
Low thermal mass gas chromatography (LTM-GC) was evaluated for rapid, high peak capacity separations with three injection methods: liquid, headspace solid phase micro-extraction (HS-SPME), and direct vapor. An Agilent LTM equipped with a short microbore capillary column was operated at a column heating rate of 250 °C/min to produce a 60s separation. Two sets of experiments were conducted in parallel to characterize the instrumental platform. First, the three injection methods were performed in conjunction with in-house built high-speed cryo-focusing injection (HSCFI) to cryogenically trap and re-inject the analytes onto the LTM-GC column in a narrower band. Next, the three injection methods were performed natively with LTM-GC. Using HSCFI, the peak capacity of a separation of 50 nl of a 73 component liquid test mixture was 270, which was 23% higher than without HSCFI. Similar peak capacity gains were obtained when using the HSCFI with HS-SPME (25%), and even greater with vapor injection (56%). For the 100 μl vapor sample injected without HSCFI, the preconcentration factor, defined as the ratio of the maximum concentration of the detected analyte peak relative to the analyte concentration injected with the syringe, was determined to be 11 for the earliest eluting peak (most volatile analyte). In contrast, the preconcentration factor for the earliest eluting peak using HSCFI was 103. Therefore, LTM-GC is demonstrated to natively provide in situ analyte trapping, although not to as great an extent as with HSCFI. We also report the use of LTM-GC applied with time-of-flight mass spectrometry (TOFMS) detection for rapid, high peak capacity separations from SPME sampled banana peel headspace. Copyright © 2015 Elsevier B.V. All rights reserved.
Validation of the enthalpy method by means of analytical solution
NASA Astrophysics Data System (ADS)
Kleiner, Thomas; Rückamp, Martin; Bondzio, Johannes; Humbert, Angelika
2014-05-01
Numerical simulations have moved in recent years from describing the cold-temperate transition surface (CTS) explicitly towards an enthalpy description, which avoids incorporating a singular surface inside the model (Aschwanden et al., 2012). In enthalpy methods the CTS is represented as a level set of the enthalpy state variable. This method has several numerical and practical advantages (e.g., representation of the full energy by one scalar field, no restriction on the topology and shape of the CTS). The method is rather new in glaciology and, to our knowledge, has not been verified and validated against analytical solutions. Unfortunately, analytical solutions for sufficiently complex thermo-mechanically coupled polythermal ice flow are still lacking. However, we present two experiments to test the implementation of the enthalpy equation and the corresponding boundary conditions. The first experiment tests in particular the functionality of the boundary condition scheme and the corresponding basal melt rate calculation. Depending on the thermal situation at the base, the numerical code may have to switch to another boundary type (from Neumann to Dirichlet or vice versa). The main idea of this set-up is to test reversibility during transients. A formerly cold ice body that runs through a warmer period, with an associated build-up of a liquid water layer at the base, must be able to return to its initial steady state. Since we impose several assumptions on the experiment design, analytical solutions can be formulated for different quantities during distinct stages of the simulation. The second experiment tests the positioning of the internal CTS in a parallel-sided polythermal slab. We compare our simulation results to the analytical solution proposed by Greve and Blatter (2009). Results from three different ice flow models (COMIce, ISSM, TIMFD3) are presented.
Multi-analyte profiling of inflammatory mediators in COPD sputum--the effects of processing.
Pedersen, Frauke; Holz, Olaf; Lauer, Gereon; Quintini, Gianluca; Kiwull-Schöne, Heidrun; Kirsten, Anne-Marie; Magnussen, Helgo; Rabe, Klaus F; Goldmann, Torsten; Watz, Henrik
2015-02-01
Prior to using a new multi-analyte platform for the detection of markers in sputum, it is advisable to assess whether sputum processing, especially mucus homogenization by dithiothreitol (DTT), affects the analysis. In this study we tested a novel Human Inflammation Multi-Analyte Profiling® Kit (v1.0 Luminex platform; xMAP®). Induced sputum samples of 20 patients with stable COPD (mean FEV1, 59.2% pred.) were processed in parallel using standard processing (with DTT) and a more time-consuming sputum dispersion method with phosphate-buffered saline (PBS) only. A panel of 47 markers was analyzed in these sputum supernatants by the xMAP®. Twenty-five of the 47 analytes were detected in COPD sputum. Interestingly, seven markers were detected only in sputum processed with DTT, or showed significantly higher levels following DTT treatment (VDBP, α-2-macroglobulin, haptoglobin, α-1-antitrypsin, VCAM-1, and fibrinogen). However, standard DTT processing resulted in lower detectable concentrations of ferritin, TIMP-1, MCP-1, MIP-1β, ICAM-1, and complement C3. The correlation between processing methods for the different markers indicates that DTT processing does not introduce a bias by affecting individual sputum samples differently. In conclusion, our data demonstrate that the Luminex-based xMAP® panel can be used for multi-analyte profiling of COPD sputum using the routinely applied method of sputum processing with DTT. However, researchers need to be aware that the absolute concentrations of selected inflammatory markers can be affected by DTT. Copyright © 2014 Elsevier Ltd. All rights reserved.
Automated Performance Prediction of Message-Passing Parallel Programs
NASA Technical Reports Server (NTRS)
Block, Robert J.; Sarukkai, Sekhar; Mehra, Pankaj; Woodrow, Thomas S. (Technical Monitor)
1995-01-01
The increasing use of massively parallel supercomputers to solve large-scale scientific problems has generated a need for tools that can predict scalability trends of applications written for these machines. Much work has been done to create simple models that represent important characteristics of parallel programs, such as latency, network contention, and communication volume. But many of these methods still require substantial manual effort to represent an application in the model's format. The MK toolkit described in this paper is the result of an ongoing effort to automate the formation of analytic expressions of program execution time, with a minimum of programmer assistance. In this paper we demonstrate the feasibility of our approach by extending previous work to detect and model communication patterns automatically, with and without overlapped computations. The predictions derived from these models agree, within reasonable limits, with execution times of programs measured on the Intel iPSC/860 and Paragon. Further, we demonstrate the use of MK in selecting optimal computational grain size and studying various scalability metrics.
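To make the flavor of such analytic execution-time expressions concrete, the sketch below builds a toy latency/bandwidth ("alpha-beta") cost model of the general kind the abstract describes. This is not the MK toolkit's output; every constant (flop time, latency, bandwidth, message size and count) is an illustrative assumption, not a measured iPSC/860 or Paragon value.

```python
# A minimal sketch, assuming a data-parallel program with a fixed number of
# halo-exchange messages per step; all constants are hypothetical.

def predicted_time(n_items, n_procs, t_flop=1e-8, alpha=1e-4, beta=1e7,
                   msg_bytes=8192, msgs_per_step=2, n_steps=100):
    """Analytic execution-time estimate: local work + alpha-beta messaging."""
    compute = n_steps * (n_items / n_procs) * t_flop
    communicate = n_steps * msgs_per_step * (alpha + msg_bytes / beta)
    return compute + communicate

for p in (1, 4, 16, 64):
    print(f"{p:3d} processors: {predicted_time(1_000_000, p):.4f} s")
```

The fixed per-step communication term is what flattens the predicted scalability curve as the processor count grows, which is exactly the kind of trend such models are built to expose.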
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacon, Luis; del-Castillo-Negrete, Diego; Hauck, Cory D.
2014-09-01
We propose a Lagrangian numerical algorithm for a time-dependent, anisotropic temperature transport equation in magnetized plasmas in the large guide field regime. The approach is based on an analytical integral formal solution of the parallel (i.e., along the magnetic field) transport equation with sources, and it is able to accommodate both local and non-local parallel heat flux closures. The numerical implementation is based on an operator-split formulation, with two straightforward steps: a perpendicular transport step (including sources), and a Lagrangian (field-line integral) parallel transport step. Algorithmically, the first step is amenable to the use of modern iterative methods, while the second step has a fixed cost per degree of freedom (and is therefore scalable). Accuracy-wise, the approach is free from the numerical pollution introduced by the discrete parallel transport term when the perpendicular-to-parallel transport coefficient ratio χ⊥/χ∥ becomes arbitrarily small, and is shown to capture the correct limiting solution when ε = χ⊥L∥²/(χ∥L⊥²) → 0 (with L∥ and L⊥ the parallel and perpendicular diffusion length scales, respectively). Therefore, the approach is asymptotic-preserving. We demonstrate the capabilities of the scheme with several numerical experiments with varying magnetic field complexity in two dimensions, including the case of transport across a magnetic island.
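As a rough illustration of the operator-split structure (not the authors' algorithm: the geometry here is a periodic 2-D grid with straight field lines, and all coefficients are assumed values), the sketch alternates an explicit perpendicular diffusion step with a parallel step applied as an exact per-mode decay, standing in for the analytical field-line integral solution:

```python
import numpy as np

# Toy operator splitting on a periodic grid: axis 0 is "along the field",
# axis 1 is "across the field". chi values, grid size, and dt are assumptions.
nx, ny = 64, 64
T = np.random.rand(nx, ny)
chi_perp, chi_par, dt, dx = 1e-3, 1.0, 0.1, 1.0

k = 2 * np.pi * np.fft.fftfreq(nx, d=dx)       # parallel wavenumbers
decay = np.exp(-chi_par * k**2 * dt)[:, None]  # exact parallel solution factor

for _ in range(100):
    # Step 1: explicit perpendicular diffusion (an iterative implicit solver
    # would be used in practice).
    T = T + dt * chi_perp * (np.roll(T, 1, 1) - 2*T + np.roll(T, -1, 1)) / dx**2
    # Step 2: parallel transport via the analytic solution of each Fourier
    # mode along the field line -- fixed cost per degree of freedom.
    T = np.fft.ifft(decay * np.fft.fft(T, axis=0), axis=0).real
```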
NASA Astrophysics Data System (ADS)
Yang, Jianwen
2012-04-01
A general analytical solution is derived by using the Laplace transformation to describe transient reactive silica transport in a conceptualized 2-D system involving a set of parallel fractures embedded in an impermeable host rock matrix, taking into account hydrodynamic dispersion and advection of silica transport along the fractures, molecular diffusion from each fracture to the intervening rock matrix, and dissolution of quartz. A special analytical solution is also developed by ignoring the longitudinal hydrodynamic dispersion term while keeping the other conditions the same. The general and special solutions are in the form of a double infinite integral and a single infinite integral, respectively, and can be evaluated using the Gauss-Legendre quadrature technique. A simple criterion is developed to determine under what conditions the general analytical solution can be approximated by the special analytical solution. It is proved analytically that the general solution always lags behind the special solution, unless a dimensionless parameter is less than a critical value. Several illustrative calculations are undertaken to demonstrate the effect of fracture spacing, fracture aperture and fluid flow rate on silica transport. The analytical solutions developed here can serve as a benchmark to validate numerical models that simulate reactive mass transport in fractured porous media.
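For readers unfamiliar with the evaluation step, the sketch below applies Gauss-Legendre quadrature to a semi-infinite integral after a rational change of variable. The integrand is a simple stand-in with a known answer, not the paper's silica-transport kernel.

```python
import numpy as np

# Evaluate int_0^inf f(s) ds with Gauss-Legendre nodes by mapping
# u in (0, 1) -> s = u / (1 - u) in (0, inf).

def semi_infinite_gauss_legendre(f, n=64):
    x, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    u = 0.5 * (x + 1.0)                        # map to (0, 1)
    w = 0.5 * w
    s = u / (1.0 - u)                          # map to (0, inf)
    jac = 1.0 / (1.0 - u) ** 2                 # ds/du
    return np.sum(w * f(s) * jac)

# Check on a known integral: int_0^inf exp(-s) ds = 1.
print(semi_infinite_gauss_legendre(lambda s: np.exp(-s)))  # ~1.0
```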
Statistical properties of Charney-Hasegawa-Mima zonal flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Johan, E-mail: anderson.johan@gmail.com; Botha, G. J. J.
2015-05-15
A theoretical interpretation of numerically generated probability density functions (PDFs) of intermittent plasma transport events in unforced zonal flows is provided within the Charney-Hasegawa-Mima (CHM) model. The governing equation is solved numerically with various prescribed density gradients that are designed to produce different configurations of parallel and anti-parallel streams. Long-lasting vortices form whose flow is governed by the zonal streams. It is found that the numerically generated PDFs can be matched with analytical predictions of PDFs based on the instanton method by removing the autocorrelations from the time series. In many instances, the statistics generated by the CHM dynamics relaxes to Gaussian distributions for both the electrostatic and vorticity perturbations, whereas in areas with strong nonlinear interactions it is found that the PDFs are exponentially distributed.
NASA Technical Reports Server (NTRS)
Kirk, R. G.; Nicholas, J. C.; Donald, G. H.; Murphy, R. C.
1980-01-01
The summary of a complete analytical design evaluation of an existing parallel flow compressor is presented and a field vibration problem that manifested itself as a subsynchronous vibration that tracked at approximately 2/3 of compressor speed is reviewed. The comparison of predicted and observed peak response speeds, frequency spectrum content, and the performance of the bearing-seal systems are presented as the events of the field problem are reviewed. Conclusions and recommendations are made as to the degree of accuracy of the analytical techniques used to evaluate the compressor design.
Petrenko, Taras; Kossmann, Simone; Neese, Frank
2011-02-07
In this paper, we present the implementation of efficient approximations to time-dependent density functional theory (TDDFT) within the Tamm-Dancoff approximation (TDA) for hybrid density functionals. For the calculation of the TDDFT/TDA excitation energies and analytical gradients, we combine the resolution of identity (RI-J) algorithm for the computation of the Coulomb terms and the recently introduced "chain of spheres exchange" (COSX) algorithm for the calculation of the exchange terms. It is shown that for extended basis sets, the RIJCOSX approximation leads to speedups of up to 2 orders of magnitude compared to traditional methods, as demonstrated for hydrocarbon chains. The accuracy of the adiabatic transition energies, excited state structures, and vibrational frequencies is assessed on a set of 27 excited states for 25 molecules with the configuration interaction singles and hybrid TDDFT/TDA methods using various basis sets. Compared to the canonical values, the typical error in transition energies is of the order of 0.01 eV. Similar to the ground-state results, excited state equilibrium geometries differ by less than 0.3 pm in the bond distances and 0.5° in the bond angles from the canonical values. The typical error in the calculated excited state normal coordinate displacements is of the order of 0.01, and relative error in the calculated excited state vibrational frequencies is less than 1%. The errors introduced by the RIJCOSX approximation are, thus, insignificant compared to the errors related to the approximate nature of the TDDFT methods and basis set truncation. For TDDFT/TDA energy and gradient calculations on Ag-TB2-helicate (156 atoms, 2732 basis functions), it is demonstrated that the COSX algorithm parallelizes almost perfectly (speedup ~26-29 for 30 processors). The exchange-correlation terms also parallelize well (speedup ~27-29 for 30 processors). The solution of the Z-vector equations shows a speedup of ~24 on 30 processors. The parallelization efficiency for the Coulomb terms can be somewhat smaller (speedup ~15-25 for 30 processors), but their contribution to the total calculation time is small. Thus, the parallel program completes a Becke3-Lee-Yang-Parr energy and gradient calculation on the Ag-TB2-helicate in less than 4 h on 30 processors. We also present the necessary extension of the Lagrangian formalism, which enables the calculation of the TDDFT excited state properties in the frozen-core approximation. The algorithms described in this work are implemented into the ORCA electronic structure system.
Analysis of composite ablators using massively parallel computation
NASA Technical Reports Server (NTRS)
Shia, David
1995-01-01
In this work, the feasibility of using massively parallel computation to study the response of ablative materials is investigated. Explicit and implicit finite difference methods are used on a massively parallel computer, the Thinking Machines CM-5. The governing equations are a set of nonlinear partial differential equations. The governing equations are developed for three sample problems: (1) transpiration cooling, (2) ablative composite plate, and (3) restrained thermal growth testing. The transpiration cooling problem is solved using a solution scheme based solely on the explicit finite difference method. The results are compared with available analytical steady-state through-thickness temperature and pressure distributions and good agreement between the numerical and analytical solutions is found. It is also found that a solution scheme based on the explicit finite difference method has the following advantages: incorporates complex physics easily, results in a simple algorithm, and is easily parallelizable. However, a solution scheme of this kind needs very small time steps to maintain stability. A solution scheme based on the implicit finite difference method has the advantage that it does not require very small time steps to maintain stability. However, this kind of solution scheme has the disadvantages that complex physics cannot be easily incorporated into the algorithm and that the solution scheme is difficult to parallelize. A hybrid solution scheme is then developed to combine the strengths of the explicit and implicit finite difference methods and minimize their weaknesses. This is achieved by identifying the critical time scale associated with the governing equations and applying the appropriate finite difference method according to this critical time scale. The hybrid solution scheme is then applied to the ablative composite plate and restrained thermal growth problems. The gas storage term is included in the explicit pressure calculation of both problems. Results from ablative composite plate problems are compared with previous numerical results which did not include the gas storage term. It is found that the through-thickness temperature distribution is not affected much by the gas storage term. However, the through-thickness pressure and stress distributions, and the extent of chemical reactions are different from the previous numerical results. Two types of chemical reaction models are used in the restrained thermal growth testing problem: (1) pressure-independent Arrhenius type rate equations and (2) pressure-dependent Arrhenius type rate equations. The numerical results are compared to experimental results and the pressure-dependent model is able to capture the trend better than the pressure-independent one. Finally, a performance study is done on the hybrid algorithm using the ablative composite plate problem. It is found that there is a good speedup of performance on the CM-5. For 32 CPUs, the speedup of performance is 20. The efficiency of the algorithm is found to be a function of the size and execution time of a given problem and the effective parallelization of the algorithm. It also seems that there is an optimum number of CPUs to use for a given problem.
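The stability-driven switching logic behind such a hybrid scheme can be sketched in a few lines. This is a schematic reading of the idea on a 1-D heat equation with assumed parameters, not the dissertation's ablator model: take the cheap explicit step when the requested time step respects the explicit stability limit, and fall back to an unconditionally stable implicit solve otherwise.

```python
import numpy as np

def step(T, dt, dx, alpha):
    """Advance 1-D conduction by dt, choosing the scheme by stability."""
    n = T.size
    r = alpha * dt / dx**2
    if r <= 0.5:                                  # explicit update is stable
        T = T.copy()
        T[1:-1] += r * (T[2:] - 2*T[1:-1] + T[:-2])
    else:                                         # implicit backward Euler
        A = np.eye(n) * (1 + 2*r)
        A += np.diag([-r] * (n-1), 1) + np.diag([-r] * (n-1), -1)
        A[0, :], A[-1, :] = 0, 0
        A[0, 0] = A[-1, -1] = 1                   # fixed-temperature ends
        T = np.linalg.solve(A, T)
    return T

T = np.linspace(300.0, 1500.0, 101)               # illustrative profile, K
T = step(T, dt=0.5, dx=0.01, alpha=1e-4)
```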
Asymmetry in the Farley-Buneman dispersion relation caused by parallel electric fields
NASA Astrophysics Data System (ADS)
Forsythe, Victoriya V.; Makarevich, Roman A.
2016-11-01
An implicit assumption utilized in studies of E region plasma waves generated by the Farley-Buneman instability (FBI) is that the FBI dispersion relation and its solutions for the growth rate and phase velocity are perfectly symmetric with respect to the reversal of the wave propagation component parallel to the magnetic field. In the present study, a recently derived general dispersion relation that describes fundamental plasma instabilities in the lower ionosphere including FBI is considered and it is demonstrated that the dispersion relation is symmetric only for background electric fields that are perfectly perpendicular to the magnetic field. It is shown that parallel electric fields result in significant differences between the growth rates and phase velocities for propagation of parallel components of opposite signs. These differences are evaluated using numerical solutions of the general dispersion relation and shown to exhibit an approximately linear relationship with the parallel electric field near the E region peak altitude of 110 km. An analytic expression for the differences is also derived from an approximate version of the dispersion relation, with comparisons between numerical and analytic results agreeing near 110 km. It is further demonstrated that parallel electric fields do not change the overall symmetry when the full 3-D wave propagation vector is reversed, with no symmetry seen when either the perpendicular or parallel component is reversed. The present results indicate that moderate-to-strong parallel electric fields of 0.1-1.0 mV/m can result in experimentally measurable differences between the characteristics of plasma waves with parallel propagation components of opposite polarity.
Methods for Synthesizing Findings on Moderation Effects Across Multiple Randomized Trials
Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana
2011-01-01
This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis, and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design. PMID:21360061
Two-dimensional numerical simulation of a Stirling engine heat exchanger
NASA Technical Reports Server (NTRS)
Ibrahim, Mounir; Tew, Roy C.; Dudenhoefer, James E.
1989-01-01
The first phase of an effort to develop multidimensional models of Stirling engine components is described. The ultimate goal is to model an entire engine working space. Parallel plate and tubular heat exchanger models are described, with emphasis on the central part of the channel (i.e., ignoring hydrodynamic and thermal end effects). The model assumes laminar, incompressible flow with constant thermophysical properties. In addition, a constant axial temperature gradient is imposed. The governing equations describing the model have been solved using the Crank-Nicolson finite-difference scheme. Model predictions are compared with analytical solutions for oscillating/reversing flow and heat transfer in order to check numerical accuracy. Excellent agreement is obtained for flow both in circular tubes and between parallel plates. The computational heat transfer results are in good agreement with the analytical heat transfer results for parallel plates.
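For orientation, the Crank-Nicolson update averages the explicit and implicit operators, giving second-order accuracy in time and unconditional stability. The sketch below shows one possible form on a toy 1-D diffusion grid; it is not the Stirling heat-exchanger code, and the grid parameters and boundary handling are assumed values chosen for brevity.

```python
import numpy as np

n, dx, dt, alpha = 51, 0.02, 0.01, 1e-3
r = alpha * dt / (2 * dx**2)

# Discrete Laplacian (boundary treatment simplified for the sketch).
lap = np.diag([-2.0]*n) + np.diag([1.0]*(n-1), 1) + np.diag([1.0]*(n-1), -1)
A = np.eye(n) - r * lap            # implicit half
B = np.eye(n) + r * lap            # explicit half

T = np.sin(np.pi * np.linspace(0, 1, n))   # initial condition
for _ in range(100):
    T = np.linalg.solve(A, B @ T)  # (I - rL) T^{n+1} = (I + rL) T^n
```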
NASA Astrophysics Data System (ADS)
Ispulov, Nurlybek A.; Qadir, Abdul; Shah, M. A.; Seythanova, Ainur K.; Kissikov, Tanat G.; Arinov, Erkin
2016-03-01
The propagation of thermoelastic waves in an anisotropic medium of tetragonal syngony (classes 4 and 4/m), with heterogeneity along the z axis, has been investigated by employing the matrizant method. This medium has an axis of second-order symmetry parallel to the z axis. In the case of fourth-order matrix coefficients, the problems of wave refraction and reflection at the interface of homogeneous anisotropic thermoelastic media are solved analytically.
Durner, Bernhard; Ehmann, Thomas; Matysik, Frank-Michael
2018-06-05
A parallel-path poly(tetrafluoroethylene) (PTFE) ICP nebulizer was adapted to an evaporative light scattering detector (ELSD) by substituting the originally installed concentric glass nebulizer of the ELSD. The performance of both nebulizers was compared with regard to nebulizer temperature, evaporator temperature, nebulizing gas flow rate, and mobile phase flow rate for different solvents, using caffeine and poly(dimethylsiloxane) (PDMS) as analytes. Both nebulizers showed similar performance, but the parallel-path PTFE nebulizer performed considerably better at low LC flow rates and its lifetime was substantially longer. In general, for both nebulizers the highest sensitivity was obtained by applying the lowest possible evaporator temperature in combination with the highest possible nebulizer temperature at preferably low gas flow rates. Besides the optimization of detector parameters, response factors for various PDMS oligomers were determined and the dependency of the detector signal on the molar mass of the analytes was studied. The significant improvement in long-term stability made the modified ELSD much more robust and saved time and money by reducing maintenance effort. Thus, especially in polymer HPLC, with its complex matrix situation, the PTFE-based parallel-path nebulizer exhibits attractive characteristics for analytical studies of polymers. Copyright © 2018. Published by Elsevier B.V.
Fernández-Ruiz, Ramón; Redrejo, María Jesús; Friedrich, Eberhardt Josué; Ramos, Milagros; Fernández, Tamara
2014-08-05
This work presents the first application of total-reflection X-ray fluorescence (TXRF) spectrometry, a new and powerful alternative analytical method, to the evaluation of the bioaccumulation kinetics of gold nanorods (GNRs) in various tissues upon intravenous administration in mice. The analytical parameters of the developed TXRF methodology were evaluated by means of the parallel analysis of bovine liver certified reference material samples (BCR-185R) doped with 10 μg/g gold. The average values (n = 5) achieved for gold measurements in lyophilized tissue weight were as follows: recovery 99.7%, expanded uncertainty (k = 2) 7%, repeatability 1.7%, detection limit 112 ng/g, and quantification limit 370 ng/g. The GNR bioaccumulation kinetics was analyzed in several vital mammalian organs, such as liver, spleen, brain, and lung, at different times. Additionally, urine samples were analyzed to study the kinetics of elimination of the GNRs by this excretion route. The main achievement was to clearly differentiate two kinds of behavior: GNRs were quickly bioaccumulated by highly vascular filtration organs such as liver and spleen, whereas GNRs showed no bioaccumulation in brain and lung over the period investigated. In parallel, urine also showed a lack of GNR accumulation. TXRF has proven to be a powerful, versatile, and precise analytical technique for the evaluation of GNR content in biological systems and, more generally, for any kind of metallic nanoparticles.
Rapid indirect trajectory optimization on highly parallel computing architectures
NASA Astrophysics Data System (ADS)
Antony, Thomas
Trajectory optimization is a field which can benefit greatly from the advantages offered by parallel computing. The current state-of-the-art in trajectory optimization focuses on the use of direct optimization methods, such as the pseudo-spectral method. These methods are favored due to their ease of implementation and large convergence regions, while indirect methods have largely been ignored in the literature in the past decade except for specific applications in astrodynamics. It has been shown that the shortcomings conventionally associated with indirect methods can be overcome by the use of a continuation method in which complex trajectory solutions are obtained by solving a sequence of progressively more difficult optimization problems. High performance computing hardware is trending towards more parallel architectures as opposed to powerful single-core processors. Graphics Processing Units (GPUs), which were originally developed for 3D graphics rendering, have gained popularity in the past decade as high-performance, programmable parallel processors. The Compute Unified Device Architecture (CUDA) framework, a parallel computing architecture and programming model developed by NVIDIA, is one of the most widely used platforms in GPU computing. GPUs have been applied to a wide range of fields that require the solution of complex, computationally demanding problems. A GPU-accelerated indirect trajectory optimization methodology which uses the multiple shooting method and continuation is developed using the CUDA platform. The various algorithmic optimizations used to exploit the parallelism inherent in the indirect shooting method are described. The resulting rapid optimal control framework enables the construction of high quality optimal trajectories that satisfy problem-specific constraints and fully satisfy the necessary conditions of optimality. The benefits of the framework are highlighted by construction of maximum terminal velocity trajectories for a hypothetical long range weapon system. The techniques used to construct an initial guess from an analytic near-ballistic trajectory and the methods used to formulate the necessary conditions of optimality in a manner that is transparent to the designer are discussed. Various hypothetical mission scenarios that enforce different combinations of initial, terminal, interior point and path constraints demonstrate the rapid construction of complex trajectories without requiring any a priori insight into the structure of the solutions. Trajectory problems of this kind were previously considered impractical to solve using indirect methods. The performance of the GPU-accelerated solver is found to be 2x-4x faster than MATLAB's bvp4c, even while running on GPU hardware that is five years behind the state-of-the-art.
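To illustrate the multiple-shooting structure (the part that maps naturally onto a GPU, since segments integrate independently), here is a CPU/SciPy sketch on a toy linear boundary-value problem. It is not the dissertation's CUDA solver; the ODE y'' = -y and the boundary values are placeholders for the trajectory/costate equations.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

def rhs(t, y):
    return [y[1], -y[0]]                           # toy dynamics: y'' = -y

t_nodes = np.linspace(0.0, np.pi / 2, 5)           # 4 shooting segments

def residuals(z):
    states = z.reshape(-1, 2)                      # guessed state at each node
    res = []
    for i in range(len(t_nodes) - 1):              # independent -> parallelizable
        sol = solve_ivp(rhs, (t_nodes[i], t_nodes[i+1]), states[i])
        res.extend(sol.y[:, -1] - states[i + 1])   # continuity at interior nodes
    res.append(states[0, 0] - 0.0)                 # boundary condition y(0) = 0
    res.append(states[-1, 0] - 1.0)                # boundary condition y(T) = 1
    return res

z = fsolve(residuals, np.zeros(2 * len(t_nodes)))
print(z.reshape(-1, 2)[:, 0])                      # y at the nodes (~ sin t)
```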
GRAVIDY, a GPU modular, parallel direct-summation N-body integrator: dynamics with softening
NASA Astrophysics Data System (ADS)
Maureira-Fredes, Cristián; Amaro-Seoane, Pau
2018-01-01
A wide variety of outstanding problems in astrophysics involve the motion of a large number of particles under the force of gravity. These include the global evolution of globular clusters, tidal disruptions of stars by a massive black hole, the formation of protoplanets and sources of gravitational radiation. The direct summation of N gravitational forces is a complex problem with no analytical solution that can only be tackled with approximations and numerical methods. To this end, the Hermite scheme is a widely used integration method. With different numerical techniques and special-purpose hardware, it can be used to speed up the calculations, but such approaches can be computationally slow and cumbersome to work with. We present a new graphics processing unit (GPU), direct-summation N-body integrator written from scratch and based on this scheme, which includes relativistic corrections for sources of gravitational radiation. GRAVIDY has high modularity, allowing users to readily introduce new physics; it exploits available computational resources and will be maintained by regular updates. GRAVIDY can be used in parallel on multiple CPUs and GPUs, with a considerable speed-up benefit. The single-GPU version is between one and two orders of magnitude faster than the single-CPU version. A test run using four GPUs in parallel shows a speed-up factor of about 3 as compared to the single-GPU version. The conception and design of this first release is aimed at users with access to traditional parallel CPU clusters or computational nodes with one or a few GPU cards.
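The Hermite scheme mentioned above can be sketched compactly. This is a generic fourth-order Hermite predictor with direct O(N²) force/jerk summation and Plummer softening, a hedged sketch rather than GRAVIDY's actual implementation; the softening value and demo masses are arbitrary.

```python
import numpy as np

def hermite_predict(pos, vel, acc, jerk, dt):
    """Predict positions/velocities forward before the corrector pass."""
    pos_p = pos + vel*dt + acc*dt**2/2 + jerk*dt**3/6
    vel_p = vel + acc*dt + jerk*dt**2/2
    return pos_p, vel_p

def acc_jerk(pos, vel, mass, eps2=1e-4):
    """Direct O(N^2) summation of accelerations and jerks, softening eps2."""
    acc, jerk = np.zeros_like(pos), np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i == j:
                continue
            dr, dv = pos[j] - pos[i], vel[j] - vel[i]
            r2 = dr @ dr + eps2
            r3 = r2 ** 1.5
            acc[i] += mass[j] * dr / r3
            jerk[i] += mass[j] * (dv / r3 - 3 * (dr @ dv) * dr / (r2 * r3))
    return acc, jerk

pos = np.array([[0.0, 0, 0], [1.0, 0, 0]])
vel = np.array([[0.0, 0, 0], [0.0, 1.0, 0]])
mass = np.array([1.0, 1e-3])
a, j = acc_jerk(pos, vel, mass)
print(hermite_predict(pos, vel, a, j, dt=0.01))
```

On a GPU, the inner force loop is the piece that parallelizes: one thread per particle i, summing over all j.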
Kalb, Daniel M; Fencl, Frank A; Woods, Travis A; Swanson, August; Maestas, Gian C; Juárez, Jaime J; Edwards, Bruce S; Shreve, Andrew P; Graves, Steven W
2017-09-19
Flow cytometry provides highly sensitive multiparameter analysis of cells and particles but has been largely limited to the use of a single focused sample stream. This limits the analytical rate to ∼50K particles/s and the volumetric rate to ∼250 μL/min. Despite the analytical prowess of flow cytometry, there are applications where these rates are insufficient, such as rare cell analysis in high cellular backgrounds (e.g., circulating tumor cells and fetal cells in maternal blood), detection of cells/particles in large dilute samples (e.g., water quality, urine analysis), or high-throughput screening applications. Here we report a highly parallel acoustic flow cytometer that uses an acoustic standing wave to focus particles into 16 parallel analysis points across a 2.3 mm wide optical flow cell. A line-focused laser and wide-field collection optics are used to excite and collect the fluorescence emission of these parallel streams onto a high-speed camera for analysis. With this instrument format and fluorescent microsphere standards, we obtain analysis rates of 100K/s and flow rates of 10 mL/min, while maintaining optical performance comparable to that of a commercial flow cytometer. The results with our initial prototype instrument demonstrate that the integration of key parallelizable components, including the line-focused laser, particle focusing using multinode acoustic standing waves, and a spatially arrayed detector, can increase analytical and volumetric throughputs by orders of magnitude in a compact, simple, and cost-effective platform. Such instruments will be of great value to applications in need of high-throughput yet sensitive flow cytometry analysis.
NASA Astrophysics Data System (ADS)
Jiang, Yao; Li, Tie-Min; Wang, Li-Ping
2015-09-01
This paper investigates the stiffness modeling of compliant parallel mechanism (CPM) based on the matrix method. First, the general compliance matrix of a serial flexure chain is derived. The stiffness modeling of CPMs is next discussed in detail, considering the relative positions of the applied load and the selected displacement output point. The derived stiffness models have simple and explicit forms, and the input, output, and coupling stiffness matrices of the CPM can easily be obtained. The proposed analytical model is applied to the stiffness modeling and performance analysis of an XY parallel compliant stage with input and output decoupling characteristics. Then, the key geometrical parameters of the stage are optimized to obtain the minimum input decoupling degree. Finally, a prototype of the compliant stage is developed and its input axial stiffness, coupling characteristics, positioning resolution, and circular contouring performance are tested. The results demonstrate the excellent performance of the compliant stage and verify the effectiveness of the proposed theoretical model. The general stiffness models provided in this paper will be helpful for performance analysis, especially in determining coupling characteristics, and the structure optimization of the CPM.
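For context, serial-chain compliance matrices of this kind are commonly assembled by transforming each flexure's local compliance into a common frame and summing; a hedged sketch of that standard composition (notation assumed here, not taken from the paper) is:

```latex
C \;=\; \sum_{i=1}^{n} \mathbf{T}_i \, C_i \, \mathbf{T}_i^{\mathsf{T}},
\qquad
K \;=\; C^{-1}
```

where C_i is the 6 x 6 compliance of the i-th flexure element in its local frame and T_i the corresponding frame-transformation (adjoint) matrix; the input, output, and coupling stiffness matrices discussed above are then blocks extracted from K.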
Thermo-elastic wave model of the photothermal and photoacoustic signal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meja, P.; Steiger, B.; Delsanto, P.P.
1996-12-31
By means of the thermo-elastic wave equation, the dynamical propagation of mechanical stress and temperature can be described and applied to model the photothermal and photoacoustic signal. Analytical solutions exist only in particular cases. Using massively parallel computers, it is possible to simulate the photothermal and photoacoustic signal efficiently. In this paper the method of the local interaction simulation approach (LISA) is presented and selected examples of its application are given. The advantages of this method, which is particularly suitable for parallel processing, are reduced computation time and a simple description of the photoacoustic signal in optical materials. The present contribution introduces the authors' model, the formalism, and some results in the 1-D case for homogeneous non-attenuating materials. The photoacoustic wave can be understood as a wave with locally limited displacement. This displacement corresponds to a temperature variation. Both variables are usually measured in photoacoustic and photothermal experiments. Therefore the dependence of temperature and displacement on optical, elastic and thermal constants is analysed.
NASA Astrophysics Data System (ADS)
Barcos, L.; Díaz-Azpiroz, M.; Balanyá, J. C.; Expósito, I.; Jiménez-Bonilla, A.; Faccenna, C.
2016-07-01
The combination of analytical and analogue models gives new opportunities to better understand the kinematic parameters controlling the evolution of transpression zones. In this work, we carried out a set of analogue models using the kinematic parameters of transpressional deformation obtained by applying a general triclinic transpression analytical model to a tabular-shaped shear zone in the external Betic Chain (Torcal de Antequera massif). According to the results of the analytical model, we used two oblique convergence angles to reproduce the main structural and kinematic features of the structural domains observed within the Torcal de Antequera massif (α = 15° for the outer domains and α = 30° for the inner domain). Two parallel inclined backstops (one fixed and the other mobile) reproduce the geometry of the shear zone walls of the natural case. Additionally, we applied the digital particle image velocimetry (PIV) method to calculate the velocity field of the incremental deformation. Our results suggest that the spatial distribution of the main structures observed in the Torcal de Antequera massif reflects different modes of strain partitioning and strain localization between the two domain types, related to the variation in the oblique convergence angle and the presence of steep planar velocity and rheological discontinuities (the shear zone walls in the natural case). In the 15° model, strain partitioning is simple and strain localization is high: a single narrow shear zone develops close and parallel to the fixed backstop, bounded by strike-slip faults and internally deformed by R and P shears. In the 30° model, strain partitioning is strong, generating regularly spaced oblique-to-the-backstop thrusts and strike-slip faults. At the final stages of the 30° experiment, deformation affects the entire model box. Our results show that applying analytical modelling to natural transpressive zones related to upper crustal deformation helps constrain the geometrical parameters of analogue models.
Posse, Stefan
2011-01-01
The rapid development of fMRI was paralleled early on by the adaptation of MR spectroscopic imaging (MRSI) methods to quantify water relaxation changes during brain activation. This review describes the evolution of multi-echo acquisition from high-speed MRSI to multi-echo EPI and beyond. It highlights milestones in the development of multi-echo acquisition methods, such as the discovery of considerable gains in fMRI sensitivity when combining echo images, advances in quantification of the BOLD effect using analytical biophysical modeling and interleaved multi-region shimming. The review conveys the insight gained from combining fMRI and MRSI methods and concludes with recent trends in ultra-fast fMRI, which will significantly increase temporal resolution of multi-echo acquisition. PMID:22056458
The Ophidia framework: toward cloud-based data analytics for climate change
NASA Astrophysics Data System (ADS)
Fiore, Sandro; D'Anca, Alessandro; Elia, Donatello; Mancini, Marco; Mariello, Andrea; Mirto, Maria; Palazzo, Cosimo; Aloisio, Giovanni
2015-04-01
The Ophidia project is a research effort on big data analytics facing scientific data analysis challenges in the climate change domain. It provides parallel (server-side) data analysis, an internal storage model and a hierarchical data organization to manage large amounts of multidimensional scientific data. The Ophidia analytics platform provides several MPI-based parallel operators to manipulate large datasets (data cubes) and array-based primitives to perform data analysis on large arrays of scientific data. The most relevant data analytics use cases implemented in national and international projects target fire danger prevention (OFIDIA), interactions between climate change and biodiversity (EUBrazilCC), climate indicators and remote data analysis (CLIP-C), sea situational awareness (TESSA), and large-scale data analytics on CMIP5 data in NetCDF format, compliant with the Climate and Forecast (CF) convention (ExArch). Two use cases regarding the EU FP7 EUBrazil Cloud Connect and the INTERREG OFIDIA projects will be presented during the talk. In the former case (EUBrazilCC), the Ophidia framework is being extended to integrate scalable VM-based solutions for the management of large volumes of scientific data (both climate and satellite data) in a cloud-based environment to study how climate change affects biodiversity. In the latter (OFIDIA), the data analytics framework is being exploited to provide operational support for processing chains devoted to fire danger prevention. To tackle the project challenges, data analytics workflows consisting of about 130 operators perform, among other things, parallel data analysis, metadata management, virtual file system tasks, map generation, rolling of datasets, and import/export of datasets in NetCDF format. Finally, the entire Ophidia software stack has been deployed at CMCC on 24 nodes (16 cores/node) of the Athena HPC cluster. Moreover, a cloud-based release tested with OpenNebula is also available and running in the private cloud infrastructure of the CMCC Supercomputing Centre.
Incoherent beam combining based on the momentum SPGD algorithm
NASA Astrophysics Data System (ADS)
Yang, Guoqing; Liu, Lisheng; Jiang, Zhenhua; Guo, Jin; Wang, Tingfeng
2018-05-01
Incoherent beam combining (ICBC) technology is one of the most promising ways to achieve high-energy, near-diffraction-limited laser output. In this paper, the momentum method is proposed as a modification of the stochastic parallel gradient descent (SPGD) algorithm. The momentum method can efficiently improve the convergence speed of the combining system. An analytical approach is employed to interpret the principle of the momentum method. Furthermore, the proposed algorithm is validated through simulations as well as experiments. The results of the simulations and the experiments show that the proposed algorithm not only accelerates the iteration, but also maintains the stability of the combining process. The feasibility of the proposed algorithm in a beam combining system is thereby demonstrated.
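A toy sketch of SPGD with a momentum term makes the modification concrete. The quality metric J here is a simple surrogate (in practice it would be, e.g., combined-beam power in the bucket), and the gain, perturbation size, and momentum factor are illustrative values, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
J = lambda u: -np.sum((u - 1.0) ** 2)        # surrogate metric to maximize

u = np.zeros(8)                              # control variables (e.g., voltages)
v = np.zeros_like(u)                         # momentum accumulator
gain, delta, beta = 0.5, 0.05, 0.8           # assumed hyperparameters

for _ in range(200):
    du = delta * rng.choice([-1.0, 1.0], size=u.shape)  # parallel perturbation
    dJ = J(u + du) - J(u - du)               # two-sided metric difference
    v = beta * v + gain * dJ * du            # beta = 0 recovers plain SPGD
    u = u + v

print(J(u))                                  # approaches 0, the maximum
```

The momentum accumulator keeps a decaying memory of past gradient estimates, which is what accelerates convergence while smoothing the stochastic updates.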
System automatically supplies precise analytical samples of high-pressure gases
NASA Technical Reports Server (NTRS)
Langdon, W. M.
1967-01-01
High-pressure-reducing and flow-stabilization system delivers analytical gas samples from a gas supply. The system employs parallel capillary restrictors for pressure reduction and downstream throttling valves for flow control. It is used in conjunction with a sampling valve and minimizes alterations of the sampled gas.
Library Statistics of Colleges and Universities, 1963-1964. Analytic Report.
ERIC Educational Resources Information Center
Samore, Theodore
The series of analytic reports on management and salary data of the academic libraries, paralleling the series titled "Library Statistics of Colleges and Universities, Institutional Data," is continued by this publication. The statistical tables of this report are of value to administrators, librarians, and others because: (1) they help…
Several numerical and analytical solutions of the radiative transfer equation (RTE) for plane albedo were compared for solar light reflection by sea water. The study incorporated the simplest case, that being a semi-infinite one-dimensional plane-parallel absorbing and scattering...
Laser illumination of multiple capillaries that form a waveguide
Dhadwal, Harbans S.; Quesada, Mark A.; Studier, F. William
1998-08-04
A system and method are disclosed for efficient laser illumination of the interiors of multiple capillaries simultaneously, and collection of light emitted from them. Capillaries in a parallel array can form an optical waveguide wherein refraction at the cylindrical surfaces confines side-on illuminating light to the core of each successive capillary in the array. Methods are provided for determining conditions where capillaries will form a waveguide and for assessing and minimizing losses due to reflection. Light can be delivered to the arrayed capillaries through an integrated fiber optic transmitter or through a pair of such transmitters aligned coaxially at opposite sides of the array. Light emitted from materials within the capillaries can be carried to a detection system through optical fibers, each of which collects light from a single capillary, with little cross talk between the capillaries. The collection ends of the optical fibers can be in a parallel array with the same spacing as the capillary array, so that the collection fibers can all be aligned to the capillaries simultaneously. Applicability includes improving the efficiency of many analytical methods that use capillaries, including particularly high-throughput DNA sequencing and diagnostic methods based on capillary electrophoresis.
Investigation of low-loss spectra and near-edge fine structure of polymers by PEELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heckmann, W.
Transmission electron microscopy has changed from a purely imaging method to an analytical method. This has been facilitated particularly by equipping electron microscopes with energy filters and with parallel electron energy loss spectrometers (PEELS). Because of their relatively high energy resolution (1 to 2 eV) they provide information not only on the elements present but also on the type of bonds between the molecular groups. Polymers are radiation sensitive and the molecular bonds change as the spectrum is being recorded. This can be observed with PEEL spectrometers that are able to record spectra with high sensitivity and in rapid succession.
Yim, Sehyuk; Gultepe, Evin; Gracias, David H; Sitti, Metin
2014-02-01
This paper proposes a new wireless biopsy method where a magnetically actuated untethered soft capsule endoscope carries and releases a large number of thermo-sensitive, untethered microgrippers (μ-grippers) at a desired location inside the stomach and retrieves them after they self-fold and grab tissue samples. We describe the working principles and analytical models for the μ-gripper release and retrieval mechanisms, and evaluate the proposed biopsy method in ex vivo experiments. This hierarchical approach combining the advanced navigation skills of centimeter-scaled untethered magnetic capsule endoscopes with highly parallel, autonomous, submillimeter scale tissue sampling μ-grippers offers a multifunctional strategy for gastrointestinal capsule biopsy.
Design Patterns to Achieve 300x Speedup for Oceanographic Analytics in the Cloud
NASA Astrophysics Data System (ADS)
Jacob, J. C.; Greguska, F. R., III; Huang, T.; Quach, N.; Wilson, B. D.
2017-12-01
We describe how we achieve super-linear speedup over standard approaches for oceanographic analytics on a cluster computer and the Amazon Web Services (AWS) cloud. NEXUS is an open source platform for big data analytics in the cloud that enables this performance through a combination of horizontally scalable data parallelism with Apache Spark and rapid data search, subset, and retrieval with tiled array storage in cloud-aware NoSQL databases like Solr and Cassandra. NEXUS is the engine behind several public portals at NASA and OceanWorks is a newly funded project for the ocean community that will mature and extend this capability for improved data discovery, subset, quality screening, analysis, matchup of satellite and in situ measurements, and visualization. We review the Python language API for Spark and how to use it to quickly convert existing programs to use Spark to run with cloud-scale parallelism, and discuss strategies to improve performance. We explain how partitioning the data over space, time, or both leads to algorithmic design patterns for Spark analytics that can be applied to many different algorithms. We use NEXUS analytics as examples, including area-averaged time series, time averaged map, and correlation map.
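As a concrete illustration of the partition-and-reduce pattern (a hedged sketch, not NEXUS source code: the tile layout, keys, and random data are assumptions for illustration), an area-averaged time series can be expressed in the Spark Python API like this:

```python
from pyspark.sql import SparkSession
import numpy as np

spark = SparkSession.builder.appName("area-averaged-ts").getOrCreate()
sc = spark.sparkContext

# Each record: (timestamp, 2-D tile of some gridded ocean variable).
tiles = sc.parallelize(
    [(t, np.random.rand(16, 16)) for t in range(100)], numSlices=8
)

series = (
    tiles.mapValues(lambda tile: (float(np.nansum(tile)), tile.size))
         .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))  # merge tiles per time step
         .mapValues(lambda s: s[0] / s[1])                      # spatial mean
         .sortByKey()
         .collect()
)
```

Partitioning the tiles by time, as here, lets every time step reduce independently; partitioning by space instead is the natural layout for a time-averaged map.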
Compton Scattering Cross Sections in Strong Magnetic Fields: Advances for Neutron Star Applications
NASA Astrophysics Data System (ADS)
Eiles, Matthew; Gonthier, P. L.; Baring, M. G.; Wadiasingh, Z.
2013-04-01
Various telescopes including RXTE, INTEGRAL and Suzaku have detected non-thermal X-ray emission in the 10 - 200 keV band from strongly magnetic neutron stars. Inverse Compton scattering, a quantum-electrodynamical process, is believed to be a leading candidate for the production of this intense X-ray radiation. Magnetospheric conditions are such that electrons may well possess ultra-relativistic energies, which lead to attractive simplifications of the cross section. We have recently addressed such a case by developing compact analytic expressions using correct spin-dependent widths and Sokolov & Ternov (ST) basis states, focusing specifically on ground state-to-ground state scattering. However, inverse Compton scattering can cool electrons down to mildly-relativistic energies, necessitating the development of a more general case where the incoming photons acquire nonzero incident angles relative to the field in the rest frame of the electron, and the intermediate state can be excited to arbitrary Landau levels. In this paper, we develop results pertaining to this general case using ST formalism, and treating the plethora of harmonic resonances associated with various cyclotron transitions between Landau states. Four possible scattering modes (parallel-parallel, perpendicular-perpendicular, parallel-perpendicular, and perpendicular-parallel) encapsulate the polarization dependence of the cross section. We present preliminary analytic and numerical investigations of the magnitude of the extra Landau state contributions to obtain the full cross section, and compare these new analytic developments with the spin-averaged cross sections, which we develop in parallel. Results will find application to various neutron star problems, including computation of Eddington luminosities in the magnetospheres of magnetars. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), and the NASA Astrophysics Theory and Fundamental Program.
A shipboard comparison of analytic methods for ballast water compliance monitoring
NASA Astrophysics Data System (ADS)
Bradie, Johanna; Broeg, Katja; Gianoli, Claudio; He, Jianjun; Heitmüller, Susanne; Curto, Alberto Lo; Nakata, Akiko; Rolke, Manfred; Schillak, Lothar; Stehouwer, Peter; Vanden Byllaardt, Julie; Veldhuis, Marcel; Welschmeyer, Nick; Younan, Lawrence; Zaake, André; Bailey, Sarah
2018-03-01
Promising approaches for indicative analysis of ballast water samples have been developed that require study in the field to examine their utility for determining compliance with the International Convention for the Control and Management of Ships' Ballast Water and Sediments. To address this gap, a voyage was undertaken on board the RV Meteor, sailing the North Atlantic Ocean from Mindelo (Cape Verde) to Hamburg (Germany) during June 4-15, 2015. Trials were conducted on local sea water taken up by the ship's ballast system at multiple locations along the trip, including open ocean, North Sea, and coastal water, to evaluate a number of analytic methods that measure the numeric concentration or biomass of viable organisms according to two size categories (≥ 50 μm in minimum dimension: 7 techniques, ≥ 10 μm and < 50 μm: 9 techniques). Water samples were analyzed in parallel to determine whether results were similar between methods and whether rapid, indicative methods offer comparable results to standard, time- and labor-intensive detailed methods (e.g. microscopy) and high-end scientific approaches (e.g. flow cytometry). Several promising indicative methods were identified that showed high correlation with microscopy, but allow much quicker processing and require less expert knowledge. This study is the first to concurrently use a large number of analytic tools to examine a variety of ballast water samples on board an operational ship in the field. Results are useful to identify the merits of each method and can serve as a basis for further improvement and development of tools and methodologies for ballast water compliance monitoring.
New 2D diffraction model and its applications to terahertz parallel-plate waveguide power splitters
Zhang, Fan; Song, Kaijun; Fan, Yong
2017-01-01
A two-dimensional (2D) diffraction model for the calculation of the diffraction field in 2D space, and its applications to terahertz parallel-plate waveguide power splitters, are proposed in this paper. Compared with the Huygens-Fresnel principle in three-dimensional (3D) space, the proposed model provides an approximate analytical expression to calculate the diffraction field in 2D space. The diffraction field is regarded as a superposition integral in 2D space. The calculated results obtained from the proposed diffraction model agree well with those obtained with the software HFSS, which is based on the finite element method (FEM). Based on the proposed 2D diffraction model, two parallel-plate waveguide power splitters are presented. The splitters consist of a transmitting horn antenna, reflectors, and a receiving antenna array. The reflector is cylindrical parabolic with superimposed surface relief to efficiently couple the transmitted wave into the receiving antenna array. The reflector is applied as a computer-generated hologram to match the transformed field to the receiving antenna aperture field. The power splitters were optimized by a modified real-coded genetic algorithm. The computed results for the splitters, which agree well with those obtained by HFSS, verify the novel design method for power splitters and show the good application prospects of the proposed 2D diffraction model. PMID:28181514
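A numerical sketch of a 2-D superposition (Huygens-type) integral of the kind such a model approximates: the field beyond an aperture is summed from secondary line sources, whose 2-D free-space Green's function is the Hankel function H0^(1)(kr). The aperture size and wavelength are arbitrary choices, and this is not the authors' closed-form expression.

```python
import numpy as np
from scipy.special import hankel1

wavelength = 1.0
k = 2 * np.pi / wavelength
ys = np.linspace(-5, 5, 400)              # source points across the aperture
dy = ys[1] - ys[0]

def field(x_obs, y_obs):
    """Superpose secondary 2-D line sources over the aperture."""
    r = np.hypot(x_obs, y_obs - ys)
    return np.sum(hankel1(0, k * r)) * dy

print(abs(field(20.0, 0.0)))              # field magnitude at one point
```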
Quantitative Characterization of Tissue Microstructure with Temporal Diffusion Spectroscopy
Xu, Junzhong; Does, Mark D.; Gore, John C.
2009-01-01
The signals recorded by diffusion-weighted magnetic resonance imaging (DWI) are dependent on the micro-structural properties of biological tissues, so it is possible to obtain quantitative structural information non-invasively from such measurements. Oscillating gradient spin echo (OGSE) methods have the ability to probe the behavior of water diffusion over different time scales and the potential to detect variations in intracellular structure. To assist in the interpretation of OGSE data, analytical expressions have been derived for diffusion-weighted signals with OGSE methods for restricted diffusion in some typical structures, including parallel planes, cylinders and spheres, using the theory of temporal diffusion spectroscopy. These analytical predictions have been confirmed with computer simulations. These expressions suggest how OGSE signals from biological tissues should be analyzed to characterize tissue microstructure, including how to estimate cell nuclear sizes. This approach provides a model to interpret diffusion data obtained from OGSE measurements that can be used for applications such as monitoring tumor response to treatment in vivo. PMID:19616979
NASA Astrophysics Data System (ADS)
Kulikov, G. M.; Plotnikova, S. V.
2017-03-01
The possibility of using the method of sampling surfaces (SaS) for solving the free vibration problem of three-dimensional elasticity for metal-ceramic shells is studied. According to this method, an arbitrary number of SaS parallel to the middle surface are selected in the shell body, and the displacements of these surfaces are taken as unknowns. The SaS pass through the nodes of a Chebyshev polynomial, which improves the convergence of the SaS method significantly. As a result, the SaS method can be used to obtain analytical solutions of the vibration problem for metal-ceramic plates and cylindrical shells that asymptotically approach the exact solutions of elasticity as the number of SaS tends to infinity.
NASA Technical Reports Server (NTRS)
Radloff, H. D., II; Hyer, M. W.; Nemeth, M. P.
1994-01-01
The focus of this work is the buckling response of symmetrically laminated composite plates having a planform area in the shape of an isosceles trapezoid. The loading is assumed to be inplane and applied perpendicular to the parallel ends of the plate. The tapered edges of the plate are assumed to have simply supported boundary conditions, while the parallel ends are assumed to have either simply supported or clamped boundary conditions. A semi-analytic closed-form solution based on energy principles and the Trefftz stability criterion is derived and solutions are obtained using the Rayleigh-Ritz method. Intrinsic in this solution is a simplified prebuckling analysis which approximates the inplane force resultant distributions by the forms Nx=P/W(x) and Ny=Nxy=0, where P is the applied load and W(x) is the plate width which, for the trapezoidal planform, varies linearly with the lengthwise coordinate x. The out-of-plane displacement is approximated by a double trigonometric series. This analysis is posed in terms of four nondimensional parameters representing orthotropic and anisotropic material properties, and two nondimensional parameters representing geometric properties. For comparison purposes, a number of specific plate geometry, ply orientation, and stacking sequence combinations are investigated using the general purpose finite element code ABAQUS. Comparison of buckling coefficients calculated using the semi-analytical model and the finite element model shows agreement within 5 percent, in general, and within 15 percent for the worst cases. In order to verify both the finite element and semi-analytical analyses, buckling loads are measured for graphite/epoxy plates having a wide range of plate geometries and stacking sequences. Test fixtures, instrumentation system, and experimental technique are described. Experimental results for the buckling load, the buckled mode shape, and the prebuckling plate stiffness are presented and show good agreement with the analytical results regarding the buckling load and the prebuckling plate stiffness. However, the experimental results show that for some cases the analysis underpredicts the number of halfwaves in the buckled mode shape. In the context of the definitions of taper ratio and aspect ratio used in this study, it is concluded that the buckling load always increases as taper ratio increases for a given aspect ratio for plates having simply supported boundary conditions on the parallel ends. There are combinations of plate geometry and ply stacking sequences, however, that reverse this trend for plates having clamped boundary conditions on the parallel ends such that an increase in the taper ratio causes a decrease in the buckling load. The clamped boundary conditions on the parallel ends of the plate are shown to increase the buckling load compared to simply supported boundary conditions. Also, anisotropy (the D16 and D26 terms) is shown to decrease the buckling load and skew the buckled mode shape for both the simply supported and clamped boundary conditions.
Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng
2014-04-01
Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000-patient data set in 3 h in parallel, compared to 9 days if running sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines that are specialized for health data researchers. Copyright © 2013 Elsevier Inc. All rights reserved.
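As a sketch of the scheduling core described above (steps 1-3), the following toy builds a task dependency graph for a miniature modeling pipeline, submits every task whose prerequisites are complete to a worker pool, and lets independent branches run concurrently. This is a minimal illustration in Python, not PARAMO's actual API; the task names and the pipeline are hypothetical.

from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

# DAG: task -> set of prerequisite tasks (a toy modeling pipeline)
deps = {
    "cohort":    set(),
    "feat_labs": {"cohort"},      # independent feature tasks can
    "feat_meds": {"cohort"},      # run in parallel
    "cross_val": {"feat_labs", "feat_meds"},
    "classify":  {"cross_val"},
}

def run(task):
    print("running", task)        # stand-in for the real work
    return task

done, running = set(), {}
with ThreadPoolExecutor(max_workers=4) as pool:
    while len(done) < len(deps):
        for t, pre in deps.items():
            if t not in done and t not in running and pre <= done:
                running[t] = pool.submit(run, t)   # all prerequisites met
        finished, _ = wait(running.values(), return_when=FIRST_COMPLETED)
        for f in finished:
            t = f.result()
            done.add(t)
            del running[t]

In a Map-Reduce deployment the thread pool would be replaced by cluster workers, but the topological bookkeeping stays the same.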
NASA Astrophysics Data System (ADS)
Olano, C. A.
2009-11-01
Context: Using certain simplifications, Kompaneets derived a partial differential equation that states the local geometrical and kinematical conditions that each surface element of a shock wave, created by a point blast in a stratified gaseous medium, must satisfy. Kompaneets could solve his equation analytically for the case of a wave propagating in an exponentially stratified medium, obtaining the form of the shock front at progressive evolutionary stages. Complete analytical solutions of the Kompaneets equation for shock wave motion in further plane-parallel stratified media have not been found, except for radially stratified media. Aims: We aim to analytically solve the Kompaneets equation for the motion of a shock wave in different plane-parallel stratified media that can reflect a wide variety of astrophysical contexts. We were particularly interested in solving the Kompaneets equation for a strong explosion in the interstellar medium of the Galactic disk, in which, due to intense winds and explosions of stars, gigantic gaseous structures known as superbubbles and supershells are formed. Methods: Using the Kompaneets approximation, we derived a pair of equations, which we call the adapted Kompaneets equations, that govern the propagation of a shock wave in a stratified medium and permit us to obtain solutions in parametric form. The solutions provided by the system of adapted Kompaneets equations are equivalent to those of the Kompaneets equation. We solved the adapted Kompaneets equations for shock wave propagation in a generic stratified medium by means of a power-series method. Results: Using the series solution for a shock wave in a generic medium, we obtained the series solutions for four specific media whose respective density distributions in the direction perpendicular to the stratification plane are exponential, of power-law type (one with exponent k = -1 and the other with k = -2), and of quadratic hyperbolic-secant form. From these series solutions, we deduced exact solutions for the four media in terms of elemental functions. The exact solution for shock wave propagation in a medium with a quadratic hyperbolic-secant density distribution is very appropriate for describing the growth of superbubbles in the Galactic disk.
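For reference, a commonly quoted form of the Kompaneets equation (cylindrical coordinates r, z, stratification along z) is given below; normalization conventions for the transformed time variable y differ between authors, so the constants here are indicative rather than the paper's exact ones:

\[
\left(\frac{\partial r}{\partial y}\right)^{2}
  = \frac{\rho_{0}}{\rho(z)}\left[\,1+\left(\frac{\partial r}{\partial z}\right)^{2}\right],
\qquad
y(t)=\int_{0}^{t}\sqrt{\frac{\lambda E}{\rho_{0}\,V(t')}}\,dt',
\]

where r(z, y) is the shock front, E the explosion energy, V the volume enclosed by the shock, ρ(z) the ambient density with reference value ρ0, and λ a constant of order unity. The exponential profile ρ(z) = ρ0 exp(-z/H) is the case Kompaneets solved in closed form.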
Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C
2011-01-01
Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian error distributions, this approach is not optimal. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters necessary in motor decoding is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
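To make the MEE cost concrete: over a window of N errors one maximizes the quadratic information potential, a Gaussian-kernel sum over all error pairs, which is equivalent to minimizing Renyi's quadratic error entropy. A minimal single-update sketch, assuming a Gaussian kernel; the kernel width, learning rate, and windowing are illustrative choices, not the paper's values:

import numpy as np

def mee_update(w, X, d, sigma=1.0, lr=0.01):
    # One gradient-ascent step on the information potential.
    # X: (N, p) window of inputs, d: (N,) targets, w: (p,) weights.
    e = d - X @ w                           # errors over the window
    diff = e[:, None] - e[None, :]          # pairwise error differences
    G = np.exp(-diff**2 / (2 * sigma**2))   # Gaussian kernel values
    dX = X[:, None, :] - X[None, :, :]      # pairwise input differences
    grad = ((G * diff)[:, :, None] * dX).sum(axis=(0, 1))
    return w + lr * grad / (sigma**2 * len(e)**2)

The O(N^2) pairwise terms computed here are mutually independent, which is precisely the structure the paper pipelines and parallelizes on the FPGA.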
NASA Astrophysics Data System (ADS)
Vasko, I.; Agapitov, O. V.; Mozer, F.; Bonnell, J. W.; Krasnoselskikh, V.; Artemyev, A.; Drake, J. F.
2017-12-01
Chorus waves observed in the Earth's inner magnetosphere sometimes exhibit significantly distorted (nonharmonic) parallel electric field waveforms. In spectrograms these waveform features show up as overtones of the chorus wave. In this work we show that the chorus wave parallel electric field is distorted due to the finite temperature of electrons. The distortion of the parallel electric field is described analytically and reproduced in numerical fluid simulations. Due to this effect the chorus energy is transferred to higher frequencies, making possible efficient scattering of low-energy (a few keV) electrons.
Astley, Victoria; Reichel, Kimberly S; Jones, Jonathan; Mendis, Rajind; Mittleman, Daniel M
2012-09-10
We use the mode-matching technique to study parallel-plate waveguide resonant cavities that are filled with a dielectric. We apply the generalized scattering matrix theory to calculate the power transmission through the waveguide-cavities. We compare the analytical results to experimental data to confirm the validity of this approach.
Della Pelle, Flavio; Compagnone, Dario
2018-02-04
Polyphenolic compounds (PCs) have received exceptional attention at the end of the past millennium and as much at the beginning of the new one. Undoubtedly, these compounds in foodstuffs provide added value for their well-known health benefits, their technological role, and also their marketing value. Many efforts have been made to provide simple, effective and user-friendly analytical methods for the determination and antioxidant capacity (AOC) evaluation of food polyphenols. In a parallel track, over the last twenty years, nanomaterials (NMs) have made their entry into the analytical chemistry domain; NMs have, in fact, opened new paths for the development of analytical methods with the common aim of improving analytical performance and sustainability, becoming new tools in the quality assurance of food and beverages. The aim of this review is to provide information on the most recent developments of new NMs-based tools and strategies for total polyphenols (TP) determination and AOC evaluation in food. Optical, electrochemical and bioelectrochemical approaches are reviewed. The use of nanoparticles, quantum dots, carbon nanomaterials and hybrid materials for the detection of polyphenols is the main subject of the works reported. Particular attention has been paid to the success of the application in real samples, and not only to the NMs themselves. In particular, the discussion is focused on methods/devices presenting, in the opinion of the authors, clear advancement in the field in terms of simplicity, rapidity and usability. This review aims to demonstrate how NM-based approaches represent valid alternatives to classical methods for polyphenol analysis, and are mature enough to be integrated for the rapid assessment of food quality in the lab or directly in the field.
PMID:29401719
Using parallel computing for the display and simulation of the space debris environment
NASA Astrophysics Data System (ADS)
Möckel, M.; Wiedemann, C.; Flegel, S.; Gelhaus, J.; Vörsmann, P.; Klinkrad, H.; Krag, H.
2011-07-01
Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power a huge performance increase, compared to its CPU-based counterpart, could be achieved. For several years efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and multi-core CPUs for faster computation. In this paper the visualization software will be introduced, including a comparison between the serial and the parallel method of orbit propagation. Ways of how to use the benefits of the latter method for space debris simulation will be discussed. An introduction to OpenCL will be given as well as an exemplary algorithm from the field of space debris simulation.
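The per-object independence noted above is what makes the GPU mapping natural: one closed-form propagation step is applied to every object with no coupling between objects. A hedged toy in vectorized NumPy (arrays standing in for GPU threads); the unperturbed two-body model and all numbers are illustrative, and the actual tool's analytical propagator is more elaborate:

import numpy as np

MU = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def kepler_E(M, e, iters=8):
    # Solve Kepler's equation E - e*sin(E) = M with vectorized Newton steps
    E = M.copy()
    for _ in range(iters):
        E -= (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
    return E

rng = np.random.default_rng(0)
a  = rng.uniform(6900.0, 7500.0, 1_000_000)   # semi-major axes, km
e  = rng.uniform(0.0, 0.02, a.size)           # eccentricities
M0 = rng.uniform(0.0, 2 * np.pi, a.size)      # mean anomalies at epoch

n = np.sqrt(MU / a**3)                        # mean motions, rad/s
M = (M0 + n * 600.0) % (2 * np.pi)            # advance all objects by 600 s
E = kepler_E(M, e)                            # one million orbits in one shot

Each array element is untouched by every other, so the same kernel can run as one thread per object in an OpenCL or CUDA port.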
Cottenet, Geoffrey; Blancpain, Carine; Sonnard, Véronique; Chuah, Poh Fong
2013-08-01
Considering the increase of the total cultivated land area dedicated to genetically modified organisms (GMO), the consumers' perception toward GMO and the need to comply with various local GMO legislations, efficient and accurate analytical methods are needed for their detection and identification. Considered as the gold standard for GMO analysis, the real-time polymerase chain reaction (RTi-PCR) technology was optimised to produce a high-throughput GMO screening method. Based on simultaneous 24 multiplex RTi-PCR running on a ready-to-use 384-well plate, this new procedure allows the detection and identification of 47 targets on seven samples in duplicate. To comply with GMO analytical quality requirements, a negative and a positive control were analysed in parallel. In addition, an internal positive control was also included in each reaction well for the detection of potential PCR inhibition. Tested on non-GM materials, on different GM events and on proficiency test samples, the method offered high specificity and sensitivity with an absolute limit of detection between 1 and 16 copies depending on the target. Easy to use, fast and cost efficient, this multiplex approach fits the purpose of GMO testing laboratories.
Hot-spot investigations of utility scale panel configurations
NASA Technical Reports Server (NTRS)
Arnett, J. C.; Dally, R. B.; Rumburg, J. P.
1984-01-01
The causes of array faults and efforts to mitigate their effects are examined. Research is concentrated on the panel for the 900 kW second phase of the Sacramento Municipal Utility District (SMUD) project. The panel is designed for hot-spot tolerance without compromising efficiency under normal operating conditions. Series/paralleling internal to each module improves tolerance in the power quadrant to cell short or open circuits. Analytical methods are developed for predicting worst-case shade patterns and calculating the resultant cell temperature. Experiments conducted on a prototype panel support the analytical calculations.
Measuring salivary analytes from free-ranging monkeys
Higham, James P.; Vitale, Alison; Rivera, Adaris Mas; Ayala, James E.; Maestripieri, Dario
2014-01-01
Studies of large free-ranging mammals have been revolutionized by non-invasive methods for assessing physiology, which usually involve the measurement of fecal or urinary biomarkers. However, such techniques are limited by numerous factors. To expand the range of physiological variables measurable non-invasively from free-ranging primates, we developed techniques for sampling monkey saliva by offering monkeys ropes with oral swabs sewn on the ends. We evaluated different attractants for encouraging individuals to offer samples, and the proportions of individuals in different age/sex categories willing to give samples. We tested the saliva samples we obtained in three commercially available assays: cortisol, salivary alpha amylase (SAA), and secretory immunoglobulin A (SIgA). We show that habituated free-ranging rhesus macaques will give saliva samples voluntarily without training, with 100% of infants and over 50% of adults willing to chew on collection devices. Our field methods are robust even for analytes that show poor recovery from cotton, and/or that have concentrations dependent on salivary flow rate. We validated the cortisol and SAA assays for use in rhesus macaques by showing aspects of analytical validation, such as that samples dilute linearly and in parallel to assay standards. We also found that the values measured correlated with biologically meaningful characteristics of the sampled individuals (age and dominance rank). The SIgA assay tested did not react to samples. Given the wide range of analytes measurable in saliva but not in feces or urine, our methods considerably improve our ability to study physiological aspects of the behavior and ecology of free-ranging primates, and are also potentially adaptable to other mammalian taxa. PMID:20837036
Two-condition within-participant statistical mediation analysis: A path-analytic framework.
Montoya, Amanda K; Hayes, Andrew F
2017-03-01
Researchers interested in testing mediation often use designs where participants are measured on a dependent variable Y and a mediator M in each of 2 different circumstances. The dominant approach to assessing mediation in such a design, proposed by Judd, Kenny, and McClelland (2001), relies on a series of hypothesis tests about components of the mediation model and is not based on an estimate of or formal inference about the indirect effect. In this article we recast Judd et al.'s approach in the path-analytic framework that is now commonly used in between-participant mediation analysis. By so doing, it becomes apparent how to estimate the indirect effect of a within-participant manipulation on some outcome through a mediator as the product of paths of influence. This path-analytic approach eliminates the need for discrete hypothesis tests about components of the model to support a claim of mediation, as Judd et al.'s method requires, because it relies only on an inference about the product of paths: the indirect effect. We generalize methods of inference for the indirect effect widely used in between-participant designs to this within-participant version of mediation analysis, including bootstrap confidence intervals and Monte Carlo confidence intervals. Using this path-analytic approach, we extend the method to models with multiple mediators operating in parallel and serially and discuss the comparison of indirect effects in these more complex models. We offer macros and code for SPSS, SAS, and Mplus that conduct these analyses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
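The bootstrap inference described above can be sketched generically. Assuming per-participant difference scores dM = M2 - M1 and dY = Y2 - Y1 (and omitting, for brevity, the mean-centered average of M that Montoya and Hayes include as a covariate), a percentile bootstrap for the indirect effect a*b resamples participants:

import numpy as np

def indirect(dM, dY):
    a = dM.mean()                    # effect of condition on the mediator
    b = np.polyfit(dM, dY, 1)[0]     # mediator-to-outcome slope
    return a * b

def boot_ci(dM, dY, n_boot=5000, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    n = len(dM)
    boots = [indirect(dM[idx], dY[idx])
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

A claim of mediation is supported when the resulting interval excludes zero; this mirrors the single-inference logic the article advocates, though the path estimators here are simplified placeholders.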
Micro-separation toward systems biology.
Liu, Bi-Feng; Xu, Bo; Zhang, Guisen; Du, Wei; Luo, Qingming
2006-02-17
Current biology is experiencing a transformation in logic or philosophy that forces us to reevaluate the concept of a cell, tissue or entire organism as a collection of individual components. Systems biology, which aims at understanding a biological system at the systems level, is an emerging research area that involves interdisciplinary collaborations of the life sciences, computational and mathematical sciences, systems engineering, and analytical technology, etc. For analytical chemistry, developing innovative methods to meet the requirements of systems biology represents new challenges, as well as opportunities and responsibilities. In this review, systems biology-oriented micro-separation technologies are introduced for comprehensive profiling of the genome, proteome and metabolome, characterization of biomolecular interactions and single-cell analysis, such as capillary electrophoresis, ultra-thin layer gel electrophoresis, micro-column liquid chromatography, and their multidimensional combinations, parallel integrations, microfabricated formats, and nanotechnology involvement. Future challenges and directions are also suggested.
Discharge reliability in ablative pulsed plasma thrusters
NASA Astrophysics Data System (ADS)
Wu, Zhiwen; Sun, Guorui; Yuan, Shiyue; Huang, Tiankun; Liu, Xiangyang; Xie, Kan; Wang, Ningfei
2017-08-01
Discharge reliability is typically neglected in low-ignition-cycle ablative pulsed plasma thrusters (APPTs). In this study, the discharge reliability of an APPT is assessed analytically and experimentally. The goals of this study are to better understand the ignition characteristics and to assess the accuracy of the analytical method. For each of six sets of operating conditions, 500 tests of a parallel-plate APPT with a coaxial semiconductor spark plug are conducted. The discharge voltage and current are measured with a high-voltage probe and a Rogowski coil, respectively, to determine whether the discharge is successful. Generally, the discharge success rate increases as the discharge voltage increases, and it decreases as the electrode gap and the number of ignitions increase. The theoretical analysis and the experimental results are reasonably consistent. This approach provides a reference for designing APPTs and improving their stability.
Modified electrokinetic sample injection method in chromatography and electrophoresis analysis
Davidson, J. Courtney; Balch, Joseph W.
2001-01-01
A sample injection method for horizontally configured multiple chromatography or electrophoresis units, each containing a number of separation/analysis channels, that enables efficient introduction of analyte samples. This loading method, when taken in conjunction with horizontal microchannels, allows much reduced sample volumes and provides a means of sample stacking, greatly reducing the amount of sample required. This reduction in the amount of sample can lead to great cost savings in sample preparation, particularly in massively parallel applications such as DNA sequencing. The essence of this method is in the preparation of the input of the separation channel, the physical sample introduction, and the subsequent removal of excess material. By this method, sample volumes of 100 nanoliters to 2 microliters have been used successfully, compared to the typical 5 microliters of sample required by the prior separation/analysis method.
NASA Astrophysics Data System (ADS)
Wakif, Abderrahim; Boulahia, Zoubair; Sehaqui, Rachid
2018-06-01
The main aim of the present analysis is to examine the electroconvection phenomenon that takes place in a dielectric nanofluid under the influence of a perpendicularly applied alternating electric field. In this investigation, we assume that the nanofluid has a Newtonian rheological behavior and obeys Buongiorno's mathematical model, in which the effects of thermophoretic and Brownian diffusion are incorporated explicitly in the governing equations. Moreover, the nanofluid layer is taken to be confined horizontally between two parallel plate electrodes, heated from below and cooled from above. In a fast pulse electric field, the onset of electroconvection is due principally to the buoyancy forces and the dielectrophoretic forces. Within the framework of the Oberbeck-Boussinesq approximation and linear stability theory, the governing stability equations are solved semi-analytically by means of the power-series method for isothermal, no-slip and non-penetrability conditions. In addition, the computational implementation with the impermeability condition implies that there exists no nanoparticle mass flux on the electrodes. On the other hand, the obtained analytical solutions are validated by comparing them to those available in the literature for the limiting case of dielectric fluids. In order to check the accuracy of our semi-analytical results obtained for the case of dielectric nanofluids, we perform further numerical and semi-analytical computations by means of the Runge-Kutta-Fehlberg method, the Chebyshev-Gauss-Lobatto spectral method, the Galerkin weighted residuals technique, the polynomial collocation method and the Wakif-Galerkin weighted residuals technique. In this analysis, the electro-thermo-hydrodynamic stability of the studied nanofluid is controlled through the critical AC electric Rayleigh number Re_c, whose value depends on several physical parameters. Furthermore, the effects of various pertinent parameters on the electro-thermo-hydrodynamic stability of the nanofluidic system are discussed in more detail through graphical and tabular illustrations.
NASA Astrophysics Data System (ADS)
Aghakhani, Amirreza; Basdogan, Ipek; Erturk, Alper
2016-04-01
Plate-like components are widely used in numerous automotive, marine, and aerospace applications where they can be employed as host structures for vibration-based energy harvesting. Piezoelectric patch harvesters can be easily attached to these structures to convert vibrational energy into electrical energy. Power output investigations of these harvesters require accurate models for energy harvesting performance evaluation and optimization. Equivalent circuit modeling of cantilever-based vibration energy harvesters for estimation of the electrical response has been proposed in recent years. However, equivalent circuit formulation and analytical modeling of multiple piezo-patch energy harvesters integrated to thin plates, including nonlinear circuits, has not been studied. In this study, an equivalent circuit model of multiple parallel piezoelectric patch harvesters together with a resistive load is built in the electronic circuit simulation software SPICE, and voltage frequency response functions (FRFs) are validated using the analytical distributed-parameter model. An analytical formulation of the piezoelectric patches in parallel configuration for the DC voltage output is derived while the patches are connected to a standard AC-DC circuit. The analytic model is based on the equivalent load impedance approach for the piezoelectric capacitance and AC-DC circuit elements. The analytic results are validated numerically via SPICE simulations. Finally, DC power outputs of the harvesters are computed and compared with the peak power amplitudes in the AC output case.
A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images.
Du, Xiaogang; Dang, Jianwu; Wang, Yangping; Wang, Song; Lei, Tao
2016-01-01
The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role and is widely applied in medical image processing due to its good flexibility and robustness. However, it requires a tremendous amount of computing time to obtain more accurate registration results, especially for a large amount of medical image data. To address this issue, a parallel nonrigid registration algorithm based on B-splines is proposed in this paper. First, the Logarithm Squared Difference (LSD) is used as the similarity metric in the B-spline registration algorithm to improve registration precision. After that, we create a parallel computing strategy and lookup tables (LUTs) to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of three time-consuming steps, including B-spline interpolation, LSD computation, and the analytic gradient computation of LSD, is efficiently reduced, because the B-spline registration algorithm employs the Nonlinear Conjugate Gradient (NCG) optimization method. Experimental results on registration quality and execution efficiency for a large amount of medical images show that our algorithm achieves better registration accuracy in terms of the differences between the best deformation fields and ground truth, and a speedup of 17 times over the single-threaded CPU implementation due to the powerful parallel computing ability of the Graphics Processing Unit (GPU).
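As an illustration of the two ingredients named above, the following is a hedged sketch: one plausible reading of the Logarithm Squared Difference metric, and the lookup-table trick of precomputing cubic B-spline basis weights on a dense grid of fractional offsets so they are never re-evaluated during optimization. The exact LSD definition used in the paper may differ.

import numpy as np

def lsd(fixed, warped):
    # Sum of log(1 + squared intensity difference) over all voxels
    return np.log1p((fixed - warped) ** 2).sum()

def bspline_lut(samples=1024):
    # The 4 uniform cubic B-spline basis weights, tabulated over u in [0, 1)
    u = np.linspace(0.0, 1.0, samples, endpoint=False)
    B0 = (1 - u) ** 3 / 6
    B1 = (3 * u**3 - 6 * u**2 + 4) / 6
    B2 = (-3 * u**3 + 3 * u**2 + 3 * u + 1) / 6
    B3 = u**3 / 6
    return np.stack([B0, B1, B2, B3], axis=1)  # (samples, 4); rows sum to 1

Both the per-voxel LSD terms and the per-voxel interpolation weights are independent across voxels, which is the structure a GPU implementation exploits.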
Wu, Xiaoping; Akgün, Can; Vaughan, J Thomas; Andersen, Peter; Strupp, John; Uğurbil, Kâmil; Van de Moortele, Pierre-François
2010-07-01
Parallel excitation holds strong promise to mitigate the impact of large transmit B1 (B1+) distortion at very high magnetic field. Accelerated RF pulses, however, inherently tend to require larger values of RF peak power, which may result in a substantial increase in the Specific Absorption Rate (SAR) in tissues, a constant concern for patient safety at very high field. In this study, we demonstrate an adapted-rate RF pulse design allowing for SAR reduction while preserving excitation target accuracy. Compared with other proposed implementations of adapted-rate RF pulses, our approach is compatible with any k-space trajectory, does not require an analytical expression of the gradient waveform, and can be used for large flip angle excitation. We demonstrate our method with numerical simulations based on electromagnetic modeling, and we include an experimental verification of transmit pattern accuracy on an 8-transmit-channel 9.4 T system.
Doros, Gheorghe; Pencina, Michael; Rybin, Denis; Meisner, Allison; Fava, Maurizio
2013-07-20
Previous authors have proposed the sequential parallel comparison design (SPCD) to address the issue of high placebo response rates in clinical trials. The original use of SPCD focused on binary outcomes, but recent use has since been extended to continuous outcomes that arise more naturally in many fields, including psychiatry. Analytic methods proposed to date for the analysis of continuous SPCD trial data include methods based on seemingly unrelated regression and ordinary least squares. Here, we propose a repeated measures linear model that uses all outcome data collected in the trial and accounts for data that are missing at random. An appropriate contrast formulated after the model has been fit can be used to test the primary hypothesis of no difference in treatment effects between study arms. Our extensive simulations show that, when compared with the other methods, our approach preserves the type I error even for small sample sizes and offers adequate power and the smallest mean squared error under a wide variety of assumptions. We recommend consideration of our approach for the analysis of data coming from SPCD trials. Copyright © 2013 John Wiley & Sons, Ltd.
Computational efficiency of parallel combinatorial OR-tree searches
NASA Technical Reports Server (NTRS)
Li, Guo-Jie; Wah, Benjamin W.
1990-01-01
The performance of parallel combinatorial OR-tree searches is analytically evaluated. This performance depends on the complexity of the problem to be solved, the error allowance function, the dominance relation, and the search strategies. The exact performance may be difficult to predict due to the nondeterminism and anomalies of parallelism. The authors derive the performance bounds of parallel OR-tree searches with respect to the best-first, depth-first, and breadth-first strategies, and verify these bounds by simulation. They show that a near-linear speedup can be achieved with respect to a large number of processors for parallel OR-tree searches. Using the bounds developed, the authors derive sufficient conditions for assuring that parallelism will not degrade performance and necessary conditions for allowing parallelism to have a speedup greater than the ratio of the numbers of processors. These bounds and conditions provide the theoretical foundation for determining the number of processors required to assure a near-linear speedup.
Distributed computing feasibility in a non-dedicated homogeneous distributed system
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Sun, Xian-He
1993-01-01
The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, the task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It is proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
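A plausible formalization of the task ratio, consistent with the abstract's description (the paper's exact definition may include additional normalization), is

\[
\mathrm{TR} \;=\; \frac{D_{\mathrm{task}}}{E\!\left[S_{\mathrm{local}}\right]},
\]

where D_task is the service demand of a single parallel task and E[S_local] is the mean service demand of the nonparallel workstation-owner processes; the larger TR is, the less owner interference matters relative to useful parallel work.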
NASA Astrophysics Data System (ADS)
Larabi, Mohamed Aziz; Mutschler, Dimitri; Mojtabi, Abdelkader
2016-06-01
Our present work focuses on the coupling between thermal diffusion and convection in order to improve the thermogravitational separation of mixture components. The separation phenomenon was studied in a porous medium contained in vertical columns. We performed analytical and numerical simulations to corroborate the experimental measurements of the thermal diffusion coefficients of the ternary mixture n-dodecane, isobutylbenzene, and tetralin obtained in microgravity on the International Space Station. Our approach corroborates the existing data published in the literature. The authors show that it is possible to quantify and to optimize the species separation for ternary mixtures. The authors checked, for ternary mixtures, the validity of the "forgotten effect hypothesis" established for binary mixtures by Furry, Jones, and Onsager. Two complete and different analytical resolution methods were used in order to describe the separation in terms of the Lewis numbers, the separation ratios, the cross-diffusion coefficients, and the Rayleigh number. The analytical model is based on the parallel flow approximation. In order to validate this model, a numerical simulation was performed using the finite element method. From our new approach to vertical separation columns, new relations for the mass fraction gradients and the optimal Rayleigh number for each component of the ternary mixture were obtained.
Yim, Sehyuk; Gultepe, Evin; Gracias, David H.
2014-01-01
This paper proposes a new wireless biopsy method where a magnetically actuated untethered soft capsule endoscope carries and releases a large number of thermo-sensitive, untethered microgrippers (μ-grippers) at a desired location inside the stomach and retrieves them after they self-fold and grab tissue samples. We describe the working principles and analytical models for the μ-gripper release and retrieval mechanisms, and evaluate the proposed biopsy method in ex vivo experiments. This hierarchical approach combining the advanced navigation skills of centimeter-scaled untethered magnetic capsule endoscopes with highly parallel, autonomous, submillimeter scale tissue sampling μ-grippers offers a multifunctional strategy for gastrointestinal capsule biopsy. PMID:24108454
Rodriguez-Mozaz, Sara; de Alda, Maria J López; Barceló, Damià
2006-04-15
This work describes the application of an optical biosensor (RIver ANALyser, RIANA) to the simultaneous analysis of three relevant environmental organic pollutants, namely, the pesticides atrazine and isoproturon and the estrogen estrone, in real water samples. This biosensor is based on an indirect inhibition immunoassay which takes place at a chemically modified optical transducer chip. The spatially resolved modification of the transducer surface allows the simultaneous determination of selected target analytes by means of "total internal reflection fluorescence" (TIRF). The performance of the immunosensor method developed was evaluated against a well-accepted traditional method based on solid-phase extraction followed by liquid chromatography-mass spectrometry (LC-MS). The chromatographic method was superior in terms of linearity, sensitivity and accuracy, and the biosensor method in terms of repeatability, speed, cost and automation. The application of both methods in parallel to determine the occurrence and removal of atrazine, isoproturon and estrone throughout the treatment process (sand filtration, ozonation, activated carbon filtration and chlorination) in a waterworks showed an overestimation of results in the case of the biosensor, which was partially attributed to matrix and cross-reactivity effects, in spite of the addition of ovalbumin to the sample to minimize matrix interferences. Based on the comparative performance of both techniques, the biosensor emerges as a suitable tool for fast, simple and automated screening of water pollutants without sample pretreatment. To the authors' knowledge, this is the first description of the application of the RIANA biosensor in the multi-analyte configuration to the regular monitoring of pollutants in a waterworks.
Ask, Kristine Skoglund; Bardakci, Turgay; Parmer, Marthe Petrine; Halvorsen, Trine Grønhaug; Øiestad, Elisabeth Leere; Pedersen-Bjergaard, Stig; Gjelstad, Astrid
2016-09-10
Generic Parallel Artificial Liquid Membrane Extraction (PALME) methods for non-polar basic and non-polar acidic drugs from human plasma were investigated with respect to phospholipid removal. In both cases, extractions in 96-well format were performed from plasma (125 μL), through 4 μL organic solvent used as supported liquid membranes (SLMs), and into 50 μL aqueous acceptor solutions. The acceptor solutions were subsequently analysed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) using in-source fragmentation and monitoring the m/z 184→184 transition for investigation of phosphatidylcholines (PC), sphingomyelins (SM), and lysophosphatidylcholines (Lyso-PC). In both generic methods, no phospholipids were detected in the acceptor solutions. Thus, PALME appeared to be highly efficient for phospholipid removal. To further support this, qualitative (post-column infusion) and quantitative matrix effects were investigated with fluoxetine, fluvoxamine, and quetiapine as model analytes. No signs of matrix effects were observed. Finally, PALME was evaluated for the aforementioned drug substances, and data were in accordance with European Medicines Agency (EMA) guidelines. Copyright © 2016 Elsevier B.V. All rights reserved.
Nelson, Kjell E.; Foley, Jennifer O.; Yager, Paul
2008-01-01
We describe a novel microfluidic immunoassay method based on the diffusion of a small molecule analyte into a parallel-flowing stream containing cognate antibody. This interdiffusion results in a steady-state gradient of antibody binding site occupancy transverse to convective flow. In contrast to the diffusion immunoassay (Hatch et al. Nature Biotechnology,19:461−465 (2001)), this antibody occupancy gradient is interrogated by a sensor surface coated with a functional analog of the analyte. Antibodies with at least one unoccupied binding site may specifically bind to this functionalized surface, leading to a quantifiable change in surface coverage by the antibody. SPR imaging is used to probe the spatial distribution of antibody binding to the surface and, therefore, the outcome of the assay. We show that the pattern of antibody binding to the SPR sensing surface correlates with the concentration of a model analyte (phenytoin) in the sample stream. Using an inexpensive disposable microfluidic device, we demonstrate assays for phenytoin ranging in concentration from 75 to 1000 nM in phosphate buffer. At a total volumetric flow rate of 90 nL/sec, the assays are complete within 10 minutes. Inclusion of an additional flow stream on the side of the antibody stream opposite to that of the sample enables simultaneous calibration of the assay. This assay method is suitable for rapid quantitative detection of low-molecular weight analytes for point-of-care diagnostic instrumentation. PMID:17437332
Analytical Study on Thermal and Mechanical Design of Printed Circuit Heat Exchanger
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Su-Jong; Sabharwall, Piyush; Kim, Eung-Soo
2013-09-01
The analytical methodologies for the thermal design, mechanical design, and cost estimation of printed circuit heat exchangers are presented. Three flow arrangements are taken into account: parallel flow, countercurrent flow, and crossflow. For each flow arrangement, the analytical solution for the temperature profile of the heat exchanger is introduced. The size and cost of printed circuit heat exchangers for advanced small modular reactors, which employ various coolants such as sodium, molten salts, helium, and water, are also presented.
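The closed-form temperature solutions referred to here are of the classical effectiveness-NTU type; whether the report uses exactly these textbook expressions is an assumption, but for the parallel-flow and counterflow arrangements the standard forms are

\[
\varepsilon_{\mathrm{parallel}}
  = \frac{1-\exp\!\left[-\mathrm{NTU}\,(1+C_r)\right]}{1+C_r},
\qquad
\varepsilon_{\mathrm{counter}}
  = \frac{1-\exp\!\left[-\mathrm{NTU}\,(1-C_r)\right]}
         {1-C_r\exp\!\left[-\mathrm{NTU}\,(1-C_r)\right]},
\qquad
C_r=\frac{C_{\min}}{C_{\max}},
\]

with the counterflow expression reducing to NTU/(1+NTU) when C_r = 1. The crossflow arrangement has analogous but more involved series solutions.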
INSTABILITIES DRIVEN BY THE DRIFT AND TEMPERATURE ANISOTROPY OF ALPHA PARTICLES IN THE SOLAR WIND
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verscharen, Daniel; Bourouaine, Sofiane; Chandran, Benjamin D. G., E-mail: daniel.verscharen@unh.edu, E-mail: s.bourouaine@unh.edu, E-mail: benjamin.chandran@unh.edu
2013-08-20
We investigate the conditions under which parallel-propagating Alfven/ion-cyclotron (A/IC) waves and fast-magnetosonic/whistler (FM/W) waves are driven unstable by the differential flow and temperature anisotropy of alpha particles in the solar wind. We focus on the limit in which w_∥α ≳ 0.25 v_A, where w_∥α is the parallel alpha-particle thermal speed and v_A is the Alfven speed. We derive analytic expressions for the instability thresholds of these waves, which show, e.g., how the minimum unstable alpha-particle beam speed depends upon w_∥α/v_A, the degree of alpha-particle temperature anisotropy, and the alpha-to-proton temperature ratio. We validate our analytical results using numerical solutions to the full hot-plasma dispersion relation. Consistent with previous work, we find that temperature anisotropy allows A/IC waves and FM/W waves to become unstable at significantly lower values of the alpha-particle beam speed U_α than in the isotropic-temperature case. Likewise, differential flow lowers the minimum temperature anisotropy needed to excite A/IC or FM/W waves relative to the case in which U_α = 0. We discuss the relevance of our results to alpha particles in the solar wind near 1 AU.
Improving Data Transfer Throughput with Direct Search Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balaprakash, Prasanna; Morozov, Vitali; Kettimuthu, Rajkumar
2016-01-01
Improving data transfer throughput over high-speed long-distance networks has become increasingly difficult. Numerous factors such as nondeterministic congestion, dynamics of the transfer protocol, and multiuser and multitask source and destination endpoints, as well as interactions among these factors, contribute to this difficulty. A promising approach to improving throughput consists in using parallel streams at the application layer. We formulate and solve the problem of choosing the number of such streams from a mathematical optimization perspective. We propose the use of direct search methods, a class of easy-to-implement and light-weight mathematical optimization algorithms, to improve the performance of data transfers by dynamically adapting the number of parallel streams in a manner that does not require domain expertise, instrumentation, analytical models, or historic data. We apply our method to transfers performed with the GridFTP protocol, and illustrate the effectiveness of the proposed algorithm when used within Globus, a state-of-the-art data transfer tool, on production WAN links and servers. We show that when compared to user default settings our direct search methods can achieve up to 10x performance improvement under certain conditions. We also show that our method can overcome performance degradation due to external compute and network load on source end points, a common scenario at high performance computing facilities.
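The direct-search idea can be sketched with a simple compass search over the integer stream count; measure_throughput is a hypothetical probe (for example, timing a fixed-size chunk transfer), and the step logic below is illustrative rather than the paper's exact algorithm:

def tune_streams(measure_throughput, lo=1, hi=64, start=4):
    best, best_rate = start, measure_throughput(start)
    step = max(1, start // 2)
    while step >= 1:
        improved = False
        for cand in (best - step, best + step):
            if lo <= cand <= hi:
                rate = measure_throughput(cand)
                if rate > best_rate:
                    best, best_rate, improved = cand, rate, True
        if not improved:
            step //= 2          # contract around the current best
    return best

Because it needs only function evaluations, such a search requires no model of the network, which is the property the paper exploits.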
GraphReduce: Large-Scale Graph Analytics on Accelerator-Based HPC Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Dipanjan; Agarwal, Kapil; Song, Shuaiwen
2015-09-30
Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device's internal memory capacity. GraphReduce adopts a combination of both edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and the device.
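A sequential toy of one Gather-Apply-Scatter sweep (here a PageRank-style update) shows the three phases GraphReduce implements in edge- and vertex-centric forms; the GPU streaming and out-of-core partitioning described above are beyond this sketch:

import numpy as np

def gas_sweep(edges, rank, out_deg, damping=0.85):
    n = len(rank)
    gathered = np.zeros(n)
    for src, dst in edges:                    # Gather: pull along in-edges
        gathered[dst] += rank[src] / out_deg[src]
    new_rank = (1 - damping) / n + damping * gathered  # Apply: per vertex
    active = np.abs(new_rank - rank) > 1e-10  # Scatter: flag changed vertices
    return new_rank, active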
Zhu, Xinjie; Zhang, Qiang; Ho, Eric Dun; Yu, Ken Hung-On; Liu, Chris; Huang, Tim H; Cheng, Alfred Sze-Lok; Kao, Ben; Lo, Eric; Yip, Kevin Y
2017-09-22
A genomic signal track is a set of genomic intervals associated with values of various types, such as measurements from high-throughput experiments. Analysis of signal tracks requires complex computational methods, which often make the analysts focus too much on the detailed computational steps rather than on their biological questions. Here we propose Signal Track Query Language (STQL) for simple analysis of signal tracks. It is a Structured Query Language (SQL)-like declarative language, which means one only specifies what computations need to be done but not how these computations are to be carried out. STQL provides a rich set of constructs for manipulating genomic intervals and their values. To run STQL queries, we have developed the Signal Track Analytical Research Tool (START, http://yiplab.cse.cuhk.edu.hk/start/ ), a system that includes a Web-based user interface and a back-end execution system. The user interface helps users select data from our database of around 10,000 commonly-used public signal tracks, manage their own tracks, and construct, store and share STQL queries. The back-end system automatically translates STQL queries into optimized low-level programs and runs them on a computer cluster in parallel. We use STQL to perform 14 representative analytical tasks. By repeating these analyses using bedtools, Galaxy and custom Python scripts, we show that the STQL solution is usually the simplest, and the parallel execution achieves significant speed-up with large data files. Finally, we describe how a biologist with minimal formal training in computer programming self-learned STQL to analyze DNA methylation data we produced from 60 pairs of hepatocellular carcinoma (HCC) samples. Overall, STQL and START provide a generic way for analyzing a large number of genomic signal tracks in parallel easily.
Instability of cooperative adaptive cruise control traffic flow: A macroscopic approach
NASA Astrophysics Data System (ADS)
Ngoduy, D.
2013-10-01
This paper proposes a macroscopic model to describe the operations of cooperative adaptive cruise control (CACC) traffic flow, which is an extension of adaptive cruise control (ACC) traffic flow. In CACC traffic flow a vehicle can exchange information with many preceding vehicles through wireless communication. Due to such communication the CACC vehicle can follow its leader at a closer distance than the ACC vehicle. The stability diagrams are constructed from the developed model based on the linear and nonlinear stability methods for a certain model parameter set. It is found analytically that CACC vehicles enhance the stabilization of traffic flow with respect to both small and large perturbations compared to ACC vehicles. Numerical simulation is carried out to support our analytical findings. Based on the nonlinear stability analysis, we show analytically and numerically that the CACC system improves the dynamic equilibrium capacity better than the ACC system does. We argue that, in parallel with microscopic models for CACC traffic flow, the newly developed macroscopic model will provide a complete insight into the dynamics of intelligent traffic flow.
Big-BOE: Fusing Spanish Official Gazette with Big Data Technology.
Basanta-Val, Pablo; Sánchez-Fernández, Luis
2018-06-01
The proliferation of new data sources, stemming from the adoption of open-data schemes, in combination with increasing computing capacity, has led to a new type of analytics that processes Internet-of-things data with low-cost engines to speed up data processing using parallel computing. In this context, the article presents an initiative, called Big-BOE, designed to process the Spanish official government gazette (Boletín Oficial del Estado, BOE) with state-of-the-art processing engines, to reduce computation time and to offer additional speed-up for big data analysts. The goal of including a big data infrastructure is to be able to process different BOE documents in parallel with specific analytics, to search for several issues in different documents. The application infrastructure processing engine is described from an architectural perspective and from a performance perspective, showing evidence of how this type of infrastructure improves the performance of different types of simple analytics as several machines cooperate.
Damped transverse oscillations of interacting coronal loops
NASA Astrophysics Data System (ADS)
Soler, Roberto; Luna, Manuel
2015-10-01
Damped transverse oscillations of magnetic loops are routinely observed in the solar corona. This phenomenon is interpreted as standing kink magnetohydrodynamic waves, which are damped by resonant absorption owing to plasma inhomogeneity across the magnetic field. The periods and damping times of these oscillations can be used to probe the physical conditions of the coronal medium. Some observations suggest that interaction between neighboring oscillating loops in an active region may be important and can modify the properties of the oscillations. Here we theoretically investigate resonantly damped transverse oscillations of interacting nonuniform coronal loops. We provide a semi-analytic method, based on the T-matrix theory of scattering, to compute the frequencies and damping rates of collective oscillations of an arbitrary configuration of parallel cylindrical loops. The effect of resonant damping is included in the T-matrix scheme in the thin boundary approximation. Analytic and numerical results in the specific case of two interacting loops are given as an application.
Analytical Characterization of Erythritol Tetranitrate, an Improvised Explosive.
Matyáš, Robert; Lyčka, Antonín; Jirásko, Robert; Jakový, Zdeněk; Maixner, Jaroslav; Mišková, Linda; Künzel, Martin
2016-05-01
Erythritol tetranitrate (ETN), an ester of nitric acid and erythritol, is a solid crystalline explosive with high explosive performance. Although it has never been used in any industrial or military application, it has become one of the most frequently prepared and misused improvised explosives. In this study, several analytical techniques were explored to facilitate analysis in forensic laboratories. FTIR and Raman spectrometry measurements expand existing data and provide a more detailed assignment of bands through the parallel study of erythritol [15N4]tetranitrate. In the case of powder diffraction, recently published data were verified, and 1H, 13C, and 15N NMR spectra are discussed in detail. The technique of electrospray ionization tandem mass spectrometry was successfully used for the analysis of ETN. The described methods allow fast, versatile, and reliable detection or analysis of samples containing erythritol tetranitrate in forensic laboratories. © 2016 American Academy of Forensic Sciences.
Tavčar, Gregor; Katrašnik, Tomaž
2014-01-01
The parallel straight channel PEM fuel cell model presented in this paper extends the innovative hybrid 3D analytic-numerical (HAN) approach previously published by the authors with capabilities to address ternary diffusion systems and counter-flow configurations. The model's core principle is modelling species transport by obtaining a 2D analytic solution for the species concentration distribution in the plane perpendicular to the channel gas-flow and coupling consecutive 2D solutions by means of a 1D numerical pipe-flow model. Electrochemical and other nonlinear phenomena are coupled to the species transport by a routine that uses derivative approximation with prediction-iteration. The latter is also the core of the counter-flow computation algorithm. A HAN model of a laboratory test fuel cell is presented and evaluated against a professional 3D CFD simulation tool, showing very good agreement between the results of the presented model and those of the CFD simulation. Furthermore, high-accuracy results are achieved at moderate computational times, which is owed to the semi-analytic nature and to the efficient computational coupling of electrochemical kinetics and species transport.
Khani, Rouhollah; Ghasemi, Jahan B; Shemirani, Farzaneh
2014-10-01
This research reports the first application of β-cyclodextrin (β-CD) complexes as a new method for the generation of three-way data, combined with second-order calibration methods for the quantification of a binary mixture of caffeic (CA) and vanillic (VA) acids, as model compounds, in fruit juice samples. At first, the basic experimental parameters affecting the formation of inclusion complexes between the target analytes and β-CD were investigated and optimized. Then, under the optimum conditions, parallel factor analysis (PARAFAC) and bilinear least squares/residual bilinearization (BLLS/RBL) were applied for deconvolution of the trilinear data to get spectral and concentration profiles of CA and VA as a function of β-CD concentration. Due to severe concentration profile overlap between CA and VA in the β-CD concentration dimension, PARAFAC could not be successfully applied to the studied samples, so BLLS/RBL performed better than PARAFAC. The resolution of the model compounds was possible due to differences in the spectral absorbance changes of the β-CD complex signals of the investigated analytes, opening a new approach for second-order data generation. The proposed method was validated by comparison with a reference method based on high-performance liquid chromatography with photodiode array detection (HPLC-PDA), and no significant differences were found between the reference values and the ones obtained with the proposed method. Such a chemometrics-based protocol may be a very promising tool for more analytical applications in real sample monitoring, due to its advantages of simplicity, rapidity, accuracy, and sufficient spectral resolution and concentration prediction even in the presence of unknown interferents. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Mieles, John; Zhan, Hongbin
2012-06-01
The permeable reactive barrier (PRB) remediation technology has proven to be more cost-effective than conventional pump-and-treat systems, and has demonstrated the ability to rapidly reduce the concentrations of specific chemicals of concern (COCs) by up to several orders of magnitude in some scenarios. This study derives new steady-state analytical solutions to multispecies reactive transport in a PRB-aquifer (dual domain) system. The advantage of the dual domain model is that it can account for the potential existence of natural degradation in the aquifer when designing the required PRB thickness. The study focuses primarily on the steady-state analytical solutions for the tetrachloroethene (PCE) serial degradation pathway and secondarily on the analytical solutions for the parallel degradation pathway. The solutions in this study can also be applied to other types of dual domain systems with distinct flow and transport properties. The steady-state analytical solutions are shown to be accurate, using the numerical program RT3D for comparison. The results of this study are novel in that the solutions provide improved modeling flexibility, including: 1) every species can have unique first-order reaction rates and unique retardation factors, and 2) daughter species can be modeled with their individual input concentrations or solely as byproducts of the parent species. The steady-state analytical solutions exhibit a limitation when interspecies reaction rate factors equal each other, which results in undefined solutions. Excel spreadsheet programs were created to facilitate prompt application of the steady-state analytical solutions for both the serial and parallel degradation pathways.
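As a hedged illustration of the kind of solution involved (a single-species analog, not the paper's multispecies dual-domain solutions), the steady-state concentration for one decaying species in a homogeneous 1D domain has a simple closed form; all parameter values below are made up.

    # Steady state of D*C'' - v*C' - k*C = 0 with C(0) = C0 and C bounded as
    # x -> infinity: C(x) = C0 * exp(r*x) with the negative root r of
    # D*r^2 - v*r - k = 0. Illustrative values, not from the paper.
    import numpy as np

    v, D, k = 0.1, 0.05, 1e-2   # velocity [m/d], dispersion [m2/d], decay [1/d]
    C0 = 1.0                    # inlet concentration

    r = (v - np.sqrt(v**2 + 4.0 * D * k)) / (2.0 * D)
    x = np.linspace(0.0, 50.0, 6)
    print(np.round(C0 * np.exp(r * x), 4))      # concentration profile

    # Thickness needed for a 100x concentration reduction in this medium:
    print(np.log(1e-2) / r)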
An interface reconstruction method based on an analytical formula for 3D arbitrary convex cells
Diot, Steven; François, Marianne M.
2015-10-22
In this study, we are interested in an interface reconstruction method for 3D arbitrary convex cells that could be used in multi-material flow simulations, for instance. We assume that the interface is represented by a plane whose normal vector is known, and we focus on the volume-matching step that consists in finding the plane constant so that it splits the cell according to a given volume fraction. We follow the same approach as in the authors' recent publication for 2D arbitrary convex cells in planar and axisymmetrical geometries; namely, we derive an analytical formula for the volume of the specific prismatoids obtained when decomposing the cell using the planes that are parallel to the interface and passing through all the cell nodes. This formula is used to bracket the interface plane constant, such that the volume-matching problem is rewritten in a single prismatoid in which the same formula is used to find the final solution. Finally, the proposed method is tested against a large number of reproducible configurations and shown to be at least five times faster.
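A minimal sketch of the volume-matching step on a unit cube follows. It brackets and solves for the plane constant with a root finder, evaluating the clipped volume geometrically via a convex hull rather than with the paper's analytical prismatoid formula, so it only illustrates the problem being solved, not the authors' method.

    import numpy as np
    from itertools import product, combinations
    from scipy.optimize import brentq
    from scipy.spatial import ConvexHull

    verts = np.array(list(product([0.0, 1.0], repeat=3)))       # cube vertices
    edges = [(a, b) for a, b in combinations(range(8), 2)
             if np.sum(np.abs(verts[a] - verts[b])) == 1.0]     # 12 cube edges
    n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)                # interface normal

    def clipped_volume(d):
        s = verts @ n - d                    # signed distances to the plane
        pts = [verts[i] for i in range(8) if s[i] <= 0.0]
        for a, b in edges:                   # add edge/plane intersections
            if s[a] * s[b] < 0.0:
                t = s[a] / (s[a] - s[b])
                pts.append(verts[a] + t * (verts[b] - verts[a]))
        if len(pts) < 4:
            return 0.0
        return ConvexHull(np.array(pts)).volume

    target = 0.3                             # prescribed volume fraction
    lo, hi = (verts @ n).min(), (verts @ n).max()
    d = brentq(lambda c: clipped_volume(c) - target, lo, hi)
    print(d, clipped_volume(d))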
On the analytic and numeric optimisation of airplane trajectories under real atmospheric conditions
NASA Astrophysics Data System (ADS)
Gonzalo, J.; Domínguez, D.; López, D.
2014-12-01
From the beginning of the aviation era, economic constraints have forced operators to continuously improve the planning of flights. Revenue depends on the cost per flight and on airspace occupancy. Many methods, the first dating from the middle of the last century, have explored analytical, numerical and artificial intelligence resources to reach optimal flight planning. In parallel, advances in meteorology and communications allow an almost real-time knowledge of the atmospheric conditions and a reliable, error-bounded forecast for the near future. Thus, apart from weather risks to be avoided, airplanes can dynamically adapt their trajectories to minimise their costs. International regulators are aware of these capabilities, so it is reasonable to envisage changes that soon allow this dynamic planning negotiation to become operational. Moreover, current unmanned airplanes, very popular and often small, suffer the impact of winds and other weather conditions in the form of dramatic changes in their performance. The present paper reviews analytic and numeric solutions for typical trajectory planning problems. Analytic methods are those trying to solve the problem using the Pontryagin principle, where influence parameters are added to state variables to form a differential equation problem with split boundary conditions. The system can be solved numerically - indirect optimisation - or using parameterised functions - direct optimisation. On the other hand, numerical methods are based on Bellman's dynamic programming (or Dijkstra algorithms), exploiting the fact that two optimal trajectories can be concatenated to form a new optimal one if the joint point is demonstrated to belong to the final optimal solution. There are no a priori conditions determining the best method. Traditionally, analytic methods have been employed more for continuous problems, whereas numerical methods suit discrete ones. In the current problem, airplane behaviour is defined by continuous equations, while wind fields are given on a discrete grid at certain time intervals. The research demonstrates advantages and disadvantages of each method, as well as performance figures of the solutions found for typical flight conditions under static and dynamic atmospheres. This provides significant parameters to be used in the selection of solvers for optimal trajectories.
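As a toy illustration of the numerical (Bellman/Dijkstra) family of methods, the sketch below finds a minimum-time route on a lattice where the ground speed on each leg is airspeed plus the along-track wind component; the wind field and speeds are invented, not taken from the paper.

    import heapq
    import numpy as np

    N, V_AIR = 25, 1.0
    rng = np.random.default_rng(1)
    wind = rng.uniform(-0.3, 0.3, size=(N, N, 2))   # (u, v) wind at each node

    def travel_time(p, q):
        d = np.array(q, float) - np.array(p, float)
        dist = np.linalg.norm(d)
        tailwind = wind[p] @ (d / dist)             # along-track wind component
        return dist / max(V_AIR + tailwind, 1e-3)   # keep speed positive

    def dijkstra(start, goal):
        best = {start: 0.0}
        pq = [(0.0, start)]
        while pq:
            t, p = heapq.heappop(pq)
            if p == goal:
                return t
            if t > best.get(p, np.inf):
                continue
            i, j = p
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    q = (i + di, j + dj)
                    if (di, dj) != (0, 0) and 0 <= q[0] < N and 0 <= q[1] < N:
                        nt = t + travel_time(p, q)
                        if nt < best.get(q, np.inf):
                            best[q] = nt
                            heapq.heappush(pq, (nt, q))
        return np.inf

    print(dijkstra((0, 0), (N - 1, N - 1)))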
Study of phase clustering method for analyzing large volumes of meteorological observation data
NASA Astrophysics Data System (ADS)
Volkov, Yu. V.; Krutikov, V. A.; Botygin, I. A.; Sherstnev, V. S.; Sherstneva, A. I.
2017-11-01
The article describes an iterative parallel phase grouping algorithm for temperature field classification. The algorithm is based on a modified method of structure formation using the analytic signal. The developed method makes it possible to solve climate classification as well as climatic zoning tasks at any temporal or spatial scale. When applied to surface temperature measurement series, the developed algorithm finds climatic structures with correlated changes of the temperature field, supports conclusions on climate uniformity in a given area, and tracks climate change over time by analyzing shifts in the type groups. The information on climate type groups specific to selected geographical areas is expanded by a genetic scheme of class distribution depending on the change in the mutual correlation level between monthly average ground temperature series.
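A rough sketch of the analytic-signal idea, assuming synthetic monthly series and an arbitrary correlation threshold (neither is from the paper): instantaneous phases are extracted with a Hilbert transform, and stations with strongly correlated phase anomalies are grouped.

    import numpy as np
    from scipy.signal import hilbert

    rng = np.random.default_rng(0)
    t = np.arange(240)                              # 20 years, monthly samples
    mod = 0.8 * np.sin(2 * np.pi * t / 120.0)       # shared slow phase modulation
    g1 = [np.sin(2 * np.pi * t / 12.0) + 0.2 * rng.standard_normal(t.size)
          for _ in range(6)]                        # stations without modulation
    g2 = [np.sin(2 * np.pi * t / 12.0 + mod) + 0.2 * rng.standard_normal(t.size)
          for _ in range(6)]                        # stations sharing modulation
    series = np.stack(g1 + g2)

    phase = np.unwrap(np.angle(hilbert(series, axis=1)), axis=1)
    anom = phase - phase.mean(axis=0)               # phase anomaly per station
    R = np.corrcoef(anom)

    groups, rest = [], list(range(R.shape[0]))
    while rest:                                     # greedy threshold grouping
        seed = rest.pop(0)
        grp = [seed] + [j for j in rest if R[seed, j] > 0.5]
        rest = [j for j in rest if j not in grp]
        groups.append(grp)
    print(groups)                                   # expect {0..5} and {6..11}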
Automatic high-throughput screening of colloidal crystals using machine learning
NASA Astrophysics Data System (ADS)
Spellings, Matthew; Glotzer, Sharon C.
Recent improvements in hardware and software have united to pose an interesting problem for computational scientists studying self-assembly of particles into crystal structures: while studies covering large swathes of parameter space can be dispatched at once using modern supercomputers and parallel architectures, identifying the different regions of a phase diagram is often a serial task completed by hand. While analytic methods exist to distinguish some simple structures, they can be difficult to apply, and automatic identification of more complex structures is still lacking. In this talk we describe one method to create numerical "fingerprints" of local order and use them to analyze a study of complex ordered structures. We can use these methods as first steps toward automatic exploration of parameter space and, more broadly, the strategic design of new materials.
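One common fingerprint of local order (offered only as an example of the genre, not necessarily the descriptor used in the talk) is the Steinhardt bond-order parameter, sketched below for a simple-cubic neighbourhood.

    import numpy as np
    from scipy.special import sph_harm

    def steinhardt_q(l, bonds):
        # bonds: (N, 3) vectors from a particle to its nearest neighbours.
        v = bonds / np.linalg.norm(bonds, axis=1, keepdims=True)
        theta = np.arctan2(v[:, 1], v[:, 0])        # azimuthal angle
        phi = np.arccos(np.clip(v[:, 2], -1, 1))    # polar angle
        qlm = np.array([sph_harm(m, l, theta, phi).mean()
                        for m in range(-l, l + 1)])
        return np.sqrt(4 * np.pi / (2 * l + 1) * np.sum(np.abs(qlm) ** 2))

    # Simple-cubic neighbours give a distinctive (q4, q6) signature.
    sc = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                   [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
    print(steinhardt_q(4, sc), steinhardt_q(6, sc))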
Data informatics for the Detection, Characterization, and Attribution of Climate Extremes
NASA Astrophysics Data System (ADS)
Collins, W.; Wehner, M. F.; O'Brien, T. A.; Paciorek, C. J.; Krishnan, H.; Johnson, J. N.; Prabhat, M.
2015-12-01
The potential for increasing frequency and intensity of extreme phenomena including downpours, heat waves, and tropical cyclones constitutes one of the primary risks of climate change for society and the environment. The challenge of characterizing these risks is that extremes represent the "tails" of distributions of atmospheric phenomena and are, by definition, highly localized and typically relatively transient. Therefore very large volumes of observational data and projections of future climate are required to quantify their properties in a robust manner. Massive data analytics are required in order to detect individual extremes, accumulate statistics on their properties, quantify how these statistics are changing with time, and attribute the effects of anthropogenic global warming on these statistics. We describe examples of the suite of techniques the climate community is developing to address these analytical challenges. The techniques include massively parallel methods for detecting and tracking atmospheric rivers and cyclones; data-intensive extensions to generalized extreme value theory to summarize the properties of extremes; and multi-model ensembles of hindcasts to quantify the attributable risk of anthropogenic influence on individual extremes. We conclude by highlighting examples of these methods developed by our CASCADE (Calibrated and Systematic Characterization, Attribution, and Detection of Extremes) project.
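As a small, hedged example of the extreme value step, a generalized extreme value (GEV) distribution can be fitted to annual block maxima and used to compute return levels; the data here are synthetic, whereas in practice the block maxima would come from the detected events.

    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(42)
    annual_max = genextreme.rvs(c=-0.1, loc=30.0, scale=5.0, size=60,
                                random_state=rng)   # e.g. yearly max daily rain

    c, loc, scale = genextreme.fit(annual_max)      # maximum-likelihood fit
    ret_20yr = genextreme.ppf(1.0 - 1.0 / 20.0, c, loc, scale)
    print(f"shape={c:.2f} loc={loc:.1f} scale={scale:.1f} "
          f"20-yr return level={ret_20yr:.1f}")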
Bond order potential module for LAMMPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-09-11
pair_bop is a module for performing energy calculations using the Bond Order Potential (BOP) for use in the parallel molecular dynamics code LAMMPS. The bop pair style computes the BOP based upon quantum mechanical theory, incorporating both sigma and pi bonding. By analytically deriving the BOP from quantum mechanical theory, its transferability to different phases can approach that of quantum mechanical methods. This potential is extremely effective at modeling III-V and II-VI compounds such as GaAs and CdTe. This potential is similar to the original BOP developed by Pettifor and later updated by Murdock et al. and Ward et al.
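A hypothetical usage sketch through the LAMMPS Python wrapper is shown below. It assumes a LAMMPS build that includes the BOP package, a pre-built data file cdte.data and a parameter file named CdTe.bop.table; none of these are supplied here, and real runs may need additional settings (e.g. communication cutoffs).

    from lammps import lammps

    lmp = lammps()
    lmp.commands_string("""
    units       metal
    atom_style  atomic
    read_data   cdte.data           # assumed pre-built CdTe configuration
    pair_style  bop
    pair_coeff  * * CdTe.bop.table Cd Te
    run 0                           # single energy evaluation
    """)
    print(lmp.get_thermo("pe"))     # potential energy from the BOP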
Lee, Jae H.; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T.; Seo, Youngho
2014-01-01
The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum-likelihood expectation-maximization (MLEM) used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge in computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal to eventually make it usable in a clinical setting. PMID:27081299
Lee, Jae H; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T; Seo, Youngho
2014-11-01
The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum-likelihood expectation-maximization (MLEM) used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge in computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal to eventually make it usable in a clinical setting.
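For illustration only (a toy system matrix, not a SPECT projector and not the Spark/GraphX code), the MLEM update that the papers parallelize is compact enough to state in a few numpy lines: each iteration multiplies the current image by the backprojected ratio of measured to predicted counts.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.random((128, 64))           # system matrix: image (64) -> counts (128)
    x_true = rng.random(64)
    y = rng.poisson(A @ x_true * 50.0)  # noisy measured projections

    x = np.ones(64)                     # uniform initial image
    sens = A.T @ np.ones(A.shape[0])    # sensitivity image, A^T 1
    for _ in range(100):
        x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens

    print(np.corrcoef(x, x_true)[0, 1])  # reconstruction tracks the truth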
NASA Astrophysics Data System (ADS)
Alfonso, Lester; Zamora, Jose; Cruz, Pedro
2015-04-01
The stochastic approach to coagulation considers the coalescence process in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results have been obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial condition is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with excellent correspondence between the analytical and numerical solutions. In order to increase the speedup of the algorithm, software parallelization techniques with the OpenMP standard were used, along with an implementation that takes advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.
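As a hedged companion sketch, the same stochastic coalescence process can be sampled directly with a Gillespie-type Monte Carlo for the constant kernel (unit volume), whose mean particle number is known analytically; N0, K and the time horizon are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(3)
    K, N0, t_end = 1.0, 200, 1.0
    masses = np.ones(N0)                # monodisperse initial condition
    t = 0.0
    while masses.size > 1:
        n = masses.size
        rate = K * n * (n - 1) / 2.0    # total coalescence rate, constant kernel
        t += rng.exponential(1.0 / rate)
        if t > t_end:
            break
        i, j = rng.choice(n, size=2, replace=False)
        masses[i] += masses[j]          # merge the chosen pair
        masses = np.delete(masses, j)

    # Mean-field particle number for the constant kernel: N0/(1 + K*N0*t/2).
    print(masses.size, N0 / (1.0 + K * N0 * t_end / 2.0))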
Modeling Sound Propagation Through Non-Axisymmetric Jets
NASA Technical Reports Server (NTRS)
Leib, Stewart J.
2014-01-01
A method for computing the far-field adjoint Green's function of the generalized acoustic analogy equations under a locally parallel mean flow approximation is presented. The method is based on expanding the mean-flow-dependent coefficients in the governing equation and the scalar Green's function in truncated Fourier series in the azimuthal direction and a finite difference approximation in the radial direction in circular cylindrical coordinates. The combined spectral/finite difference method yields a highly banded system of algebraic equations that can be efficiently solved using a standard sparse system solver. The method is applied to test cases, with mean flow specified by analytical functions, corresponding to two noise reduction concepts of current interest: the offset jet and the fluid shield. Sample results for the Green's function are given for these two test cases and recommendations made as to the use of the method as part of a RANS-based jet noise prediction code.
Jonker, Willem; Clarijs, Bas; de Witte, Susannah L; van Velzen, Martin; de Koning, Sjaak; Schaap, Jaap; Somsen, Govert W; Kool, Jeroen
2016-09-02
Gas chromatography (GC) is a superior separation technique for many compounds. However, fractionation of a GC eluate for analyte isolation and/or post-column off-line analysis is not straightforward, and existing platforms are limited in the number of fractions that can be collected. Moreover, aerosol formation may cause serious analyte losses. Previously, our group developed a platform that resolved these limitations of GC fractionation by post-column infusion of a trap solvent prior to continuous small-volume fraction collection in a 96-well plate (Pieke et al., 2013 [17]). Still, this GC fractionation set-up lacked a chemical detector for the on-line recording of chromatograms, and the introduction of trap solvent resulted in extensive peak broadening for late-eluting compounds. This paper reports advancements to the fractionation platform allowing flame ionization detection (FID) parallel to high-resolution collection of the full GC chromatogram in up to 384 nanofractions of 7 s each. To this end, a post-column split was incorporated which directs part of the eluate towards FID. Furthermore, a solvent heating device was developed for stable delivery of preheated/vaporized trap solvent, which significantly reduced band broadening caused by post-column infusion. In order to achieve optimal analyte trapping, several solvents were tested at different flow rates. The repeatability of the optimized GC fraction collection process was assessed, demonstrating the possibility of up-concentration of isolated analytes by repetitive analyses of the same sample. The feasibility of the improved GC fractionation platform for bioactivity screening of toxic compounds was studied by the analysis of a mixture of test pesticides, which after fractionation were subjected to a post-column acetylcholinesterase (AChE) assay. Fractions showing AChE inhibition could be unambiguously correlated with peaks from the parallel-recorded FID chromatogram. Copyright © 2016 Elsevier B.V. All rights reserved.
A parallel computing engine for a class of time critical processes.
Nabhan, T M; Zomaya, A Y
1997-01-01
This paper focuses on the efficient parallel implementation of systems of a numerically intensive nature over loosely coupled multiprocessor architectures. These analytical models are of significant importance to many real-time systems that have to meet severe time constraints. A parallel computing engine (PCE) has been developed in this work for the efficient simplification and near-optimal scheduling of numerical models over the different cooperating processors of the parallel computer. First, the analytical system is efficiently coded in its general form. The model is then simplified by using any available information (e.g., constant parameters). A task graph representing the interconnections among the different components (or equations) is generated. The graph can then be compressed to control the computation/communication requirements. The task scheduler employs a graph-based iterative scheme, based on the simulated annealing algorithm, to map the vertices of the task graph onto a Multiple-Instruction-stream Multiple-Data-stream (MIMD) type of architecture. The algorithm uses a nonanalytical cost function that properly considers the computation capability of the processors, the network topology, the communication time, and congestion possibilities. Moreover, the proposed technique is simple, flexible, and computationally viable. The efficiency of the algorithm is demonstrated by two case studies with good results.
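A minimal sketch of the scheduler's core mechanism, simulated annealing over task-to-processor assignments with a cost mixing compute load and communication, is given below; the task graph, weights and cooling schedule are toy values, not the PCE's nonanalytical cost function.

    import numpy as np

    rng = np.random.default_rng(7)
    n_tasks, n_procs = 40, 4
    work = rng.uniform(1.0, 5.0, n_tasks)              # computation per task
    comm = (rng.random((n_tasks, n_tasks)) < 0.1) \
           * rng.uniform(0.5, 2.0, (n_tasks, n_tasks))
    comm = np.triu(comm, 1)                            # task-graph edge weights

    def cost(assign):
        loads = np.bincount(assign, weights=work, minlength=n_procs)
        cut = comm[(assign[:, None] != assign[None, :]) & (comm > 0)].sum()
        return loads.max() + 0.5 * cut                 # makespan + comm penalty

    assign = rng.integers(0, n_procs, n_tasks)
    c, T = cost(assign), 5.0
    for step in range(20000):
        cand = assign.copy()
        cand[rng.integers(n_tasks)] = rng.integers(n_procs)
        cc = cost(cand)
        if cc < c or rng.random() < np.exp(-(cc - c) / T):
            assign, c = cand, cc                       # accept move
        T *= 0.9997                                    # geometric cooling
    print(c)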
NASA Astrophysics Data System (ADS)
Bin-Mohsin, Bandar; Ahmed, Naveed; Adnan; Khan, Umar; Tauseef Mohyud-Din, Syed
2017-04-01
This article deals with the bioconvection flow in a parallel-plate channel. The plates are parallel, the flowing fluid is saturated with nanoparticles, and water is considered as the base fluid because microorganisms can survive only in water. A highly nonlinear and coupled system of partial differential equations presenting the model of bioconvection flow between parallel plates is reduced to a nonlinear and coupled system (the nondimensional bioconvection flow model) of ordinary differential equations with the help of feasible nondimensional variables. In order to find a convergent solution of the system, a semi-analytical technique called the variation of parameters method (VPM) is utilized. A numerical solution is also computed, employing the fourth-order Runge-Kutta scheme for this purpose. A comparison between these solutions has been made over the domain of interest and found to be in excellent agreement. Also, the influence of various parameters on the nondimensional velocity, temperature, concentration and density of the motile microorganisms has been discussed for both suction and injection cases. An almost inconsequential influence of the thermophoretic and Brownian motion parameters on the temperature field is observed. An interesting variation is observed in the density of the motile microorganisms due to the varying bioconvection parameter in the suction and injection cases. At the end, we make some concluding remarks in the light of this article.
MAGNETIC BRAIDING AND PARALLEL ELECTRIC FIELDS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilmot-Smith, A. L.; Hornig, G.; Pontin, D. I.
2009-05-10
The braiding of the solar coronal magnetic field via photospheric motions - with subsequent relaxation and magnetic reconnection - is one of the most widely debated ideas of solar physics. We readdress the theory in light of developments in three-dimensional magnetic reconnection theory. It is known that the integrated parallel electric field along field lines is the key quantity determining the rate of reconnection, in contrast with the two-dimensional case where the electric field itself is the important quantity. We demonstrate that this difference becomes crucial for sufficiently complex magnetic field structures. A numerical method is used to relax a braided magnetic field toward an ideal force-free equilibrium; the field is found to remain smooth throughout the relaxation, with only large-scale current structures. However, a highly filamentary integrated parallel current structure with extremely short length-scales is found in the field, with the associated gradients intensifying during the relaxation process. An analytical model is developed to show that, in a coronal situation, the length scales associated with the integrated parallel current structures will rapidly decrease with increasing complexity, or degree of braiding, of the magnetic field. Analysis shows the decrease in these length scales will, for any finite resistivity, eventually become inconsistent with the stability of the coronal field. Thus the inevitable consequence of the magnetic braiding process is a loss of equilibrium of the magnetic field, probably via magnetic reconnection events.
A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images
Wang, Yangping; Wang, Song
2016-01-01
The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role and is widely applied in medical image processing due to its good flexibility and robustness. However, it requires a tremendous amount of computing time to obtain more accurate registration results, especially for a large amount of medical image data. To address the issue, a parallel nonrigid registration algorithm based on B-splines is proposed in this paper. First, the Logarithm Squared Difference (LSD) is considered as the similarity metric in the B-spline registration algorithm to improve registration precision. After that, we create a parallel computing strategy and lookup tables (LUTs) to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of three time-consuming steps, namely B-spline interpolation, LSD computation, and the analytic gradient computation of LSD, is efficiently reduced; here the B-spline registration algorithm employs the Nonlinear Conjugate Gradient (NCG) optimization method. Experimental results on registration quality and execution efficiency for a large amount of medical images show that our algorithm achieves better registration accuracy in terms of the differences between the best deformation fields and ground truth, and a speedup of 17 times over the single-threaded CPU implementation due to the powerful parallel computing ability of the Graphics Processing Unit (GPU). PMID:28053653
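To illustrate the lookup-table device in isolation (a 1D sketch with an arbitrary table size, not the paper's GPU code): cubic B-spline weights depend only on the fractional position inside a control-point cell, so they can be tabulated once and reused for every voxel.

    import numpy as np

    def cubic_bspline_weights(u):
        # The four cubic B-spline basis functions at fractional offset u in [0,1).
        return np.array([(1 - u) ** 3 / 6.0,
                         (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
                         (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
                         u ** 3 / 6.0])

    LUT_SIZE = 1024
    lut = np.stack([cubic_bspline_weights(i / LUT_SIZE) for i in range(LUT_SIZE)])

    def interp_1d(coeffs, x):
        # Evaluate the spline at x using precomputed weights (1D for brevity;
        # the registration case applies this separably along each image axis).
        i, u = int(np.floor(x)), x - np.floor(x)
        w = lut[int(u * LUT_SIZE)]
        return w @ coeffs[i - 1:i + 3]

    coeffs = np.sin(np.linspace(0, 3, 32))
    print(interp_1d(coeffs, 10.37))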
A real time microcomputer implementation of sensor failure detection for turbofan engines
NASA Technical Reports Server (NTRS)
Delaat, John C.; Merrill, Walter C.
1989-01-01
An algorithm was developed which detects, isolates, and accommodates sensor failures using analytical redundancy. The performance of this algorithm was demonstrated on a full-scale F100 turbofan engine. The algorithm was implemented in real time on a microprocessor-based controls computer which includes parallel processing and high-order language programming. Parallel processing was used to achieve the required computational power for the real-time implementation. High-order language programming was used in order to reduce the programming and maintenance costs of the algorithm implementation software. The sensor failure algorithm was combined with an existing multivariable control algorithm to give a complete control implementation with sensor analytical redundancy. The real-time microprocessor implementation of the algorithm, which resulted in the successful completion of the engine demonstration, is described.
NASA Astrophysics Data System (ADS)
Ouyang, Lizhi
A systematic improvement and extension of the orthogonalized linear combinations of atomic orbitals method was carried out using a combined computational and theoretical approach. For high performance parallel computing, a Beowulf-class personal computer cluster was constructed. It also served as a parallel program development platform that helped us to port the programs of the method to the national supercomputer facilities. The program received a language upgrade from Fortran 77 to Fortran 90 and a dynamic memory allocation feature. A preliminary parallel High Performance Fortran version of the program has been developed as well, though scalability improvements are needed for it to be of more benefit. In order to circumvent the difficulties of the analytical force calculation in the method, we developed a geometry optimization scheme using a finite difference approximation based on the total energy calculation. The implementation of this scheme was facilitated by the powerful General Utility Lattice Program, which offers many desired features such as multiple optimization schemes and usage of space group symmetry. So far, many ceramic oxides have been tested with the geometry optimization program. Their optimized geometries were in excellent agreement with the experimental data. For nine ceramic oxide crystals, the optimized cell parameters differ from the experimental ones by less than 0.5%. Moreover, the geometry optimization was recently used to predict a new phase of TiNx. The method has also been used to investigate a complex vitamin B12 derivative, the OHCbl crystal. In order to overcome the prohibitive disk I/O demand, an on-demand version of the method was developed. Based on the electronic structure calculation of the OHCbl crystal, a partial density of states analysis and a bond order analysis were carried out. The calculated bonding of the corrin ring of the OHCbl model was consistent with the large open-ring pi bond. One interesting finding of the calculation was that the Co-OH bond was weak. This, together with the ongoing projects studying different vitamin B12 derivatives, might help us to answer questions about the Co-C cleavage of the B12 coenzyme, which is involved in many important B12 enzymatic reactions.
A massively parallel computational approach to coupled thermoelastic/porous gas flow problems
NASA Technical Reports Server (NTRS)
Shia, David; Mcmanus, Hugh L.
1995-01-01
A new computational scheme for coupled thermoelastic/porous gas flow problems is presented. Heat transfer, gas flow, and dynamic thermoelastic governing equations are expressed in fully explicit form, and solved on a massively parallel computer. The transpiration cooling problem is used as an example problem. The numerical solutions have been verified by comparison to available analytical solutions. Transient temperature, pressure, and stress distributions have been obtained. Small spatial oscillations in pressure and stress have been observed, which would be impractical to predict with previously available schemes. Comparisons between serial and massively parallel versions of the scheme have also been made. The results indicate that for small scale problems the serial and parallel versions use practically the same amount of CPU time. However, as the problem size increases the parallel version becomes more efficient than the serial version.
Implementation of parallel moment equations in NIMROD
NASA Astrophysics Data System (ADS)
Lee, Hankyu Q.; Held, Eric D.; Ji, Jeong-Young
2017-10-01
As collisionality is low (the Knudsen number is large) in many plasma applications, kinetic effects become important, particularly in parallel dynamics for magnetized plasmas. Fluid models can capture some kinetic effects when integral parallel closures are adopted. The adiabatic and linear approximations are used in solving general moment equations to obtain the integral closures. In this work, we present an effort to incorporate non-adiabatic (time-dependent) and nonlinear effects into parallel closures. Instead of analytically solving the approximate moment system, we implement exact parallel moment equations in the NIMROD fluid code. The moment code is expected to provide a natural convergence scheme by increasing the number of moments. Work in collaboration with the PSI Center and supported by the U.S. DOE under Grant Nos. DE-SC0014033, DE-SC0016256, and DE-FG02-04ER54746.
A big data approach for climate change indicators processing in the CLIP-C project
NASA Astrophysics Data System (ADS)
D'Anca, Alessandro; Conte, Laura; Palazzo, Cosimo; Fiore, Sandro; Aloisio, Giovanni
2016-04-01
Defining and implementing processing chains with multiple (e.g. tens or hundreds of) data analytics operators can be a real challenge in many practical scientific use cases such as climate change indicators. This is usually done via scripts (e.g. bash) on the client side and requires climate scientists to take care of, implement and replicate workflow-like control logic aspects (which may be error-prone) in their scripts, along with the expected application-level part. Moreover, the large amount of data and the strong I/O demand pose additional performance challenges. In this regard, production-level tools for climate data analysis are mostly sequential, and there is a lack of big data analytics solutions implementing fine-grain data parallelism or adopting stronger parallel I/O strategies, data locality, workflow optimization, etc. High-level solutions leveraging workflow-enabled big data analytics frameworks for eScience could help scientists to define and implement the workflows related to their experiments by exploiting a more declarative, efficient and powerful approach. This talk will start by introducing the main needs and challenges regarding big data analytics workflow management for eScience and will then provide some insights into the implementation of some real use cases related to climate change indicators on large datasets produced in the context of the CLIP-C project - an EU FP7 project aiming at providing access to climate information of direct relevance to a wide variety of users, from scientists to policy makers and private sector decision makers. All the proposed use cases have been implemented exploiting the Ophidia big data analytics framework. The software stack includes an internal workflow management system, which coordinates, orchestrates, and optimises the execution of multiple scientific data analytics and visualization tasks. Real-time workflow execution monitoring is also supported through a graphical user interface. In order to address the challenges of the use cases, the implemented data analytics workflows include parallel data analysis, metadata management, virtual file system tasks, map generation, rolling of datasets, and import/export of datasets in NetCDF format. The use cases have been implemented on an 8-node HPC cluster (16 cores/node) of the Athena system available at the CMCC Supercomputing Centre. Benchmark results will also be presented during the talk.
Integrating the Apache Big Data Stack with HPC for Big Data
NASA Astrophysics Data System (ADS)
Fox, G. C.; Qiu, J.; Jha, S.
2014-12-01
There is perhaps a broad consensus as to the important issues in practical parallel computing as applied to large-scale simulations; this is reflected in supercomputer architectures, algorithms, libraries, languages, compilers and best practice for application development. However, the same is not so true for data-intensive computing, even though commercial clouds devote much more resources to data analytics than supercomputers devote to simulations. We look at a sample of over 50 big data applications to identify characteristics of data-intensive applications and to deduce needed runtimes and architectures. We suggest a big data version of the famous Berkeley dwarfs and NAS parallel benchmarks and use these to identify a few key classes of hardware/software architectures. Our analysis builds on combining HPC with ABDS, the Apache big data software stack that is widely used in modern cloud computing. Initial results on clouds and HPC systems are encouraging. We propose the development of SPIDAL - Scalable Parallel Interoperable Data Analytics Library - built on system and data abstractions suggested by the HPC-ABDS architecture. We discuss how it can be used in several application areas including Polar Science.
Mezzullo, Marco; Fazzini, Alessia; Gambineri, Alessandra; Di Dalmazi, Guido; Mazza, Roberta; Pelusi, Carla; Vicennati, Valentina; Pasquali, Renato; Pagotto, Uberto; Fanelli, Flaminia
2017-08-28
Salivary androgen testing represents a valuable source of biological information. However, the proper measurement of such low levels is challenging for direct immunoassays, which lack adequate accuracy. In the last few years, many conflicting findings reporting low correlation with the serum counterparts have hampered the clinical application of salivary androgen testing. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) makes it possible to overcome previous analytical limits, providing new insights in endocrinology practice. Salivary testosterone (T), androstenedione (A), dehydroepiandrosterone (DHEA) and 17OHprogesterone (17OHP) were extracted from 500 µL of saliva, separated in a 9.5-min LC gradient and detected by positive electrospray ionization - multiple reaction monitoring. The diurnal variation of salivary and serum androgens was described by a four-point paired collection protocol (8 am, 12 am, 4 pm and 8 pm) in 19 healthy subjects. The assay allowed the quantitation of T, A, DHEA and 17OHP down to 3.40, 6.81, 271.0 and 23.7 pmol/L, respectively, with accuracy between 83.0 and 106.1% for all analytes. A parallel diurnal rhythm in saliva and serum was observed for all androgens, with values decreasing from the morning to the evening time points. Salivary androgen levels revealed a high linear correlation with their serum counterparts in both sexes (T: R>0.85; A: R>0.90; DHEA: R>0.73 and 17OHP: R>0.89; p<0.0001 for all). Our LC-MS/MS method allowed a sensitive evaluation of salivary androgen levels and represents an optimal technique to explore the relevance of a comprehensive androgen profile as measured in saliva for the study of androgen secretion modulation and activity in physiologic and pathologic states.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lou, Jialin; Xia, Yidong; Luo, Lixiang
2016-09-01
In this study, we use a combination of modeling techniques to describe the relationship between fracture radius that might be accomplished in a hypothetical enhanced geothermal system (EGS) and drilling distance required to create and access those fractures. We use a combination of commonly applied analytical solutions for heat transport in parallel fractures and 3D finite-element method models of more realistic heat extraction geometries. For a conceptual model involving multiple parallel fractures developed perpendicular to an inclined or horizontal borehole, calculations demonstrate that EGS will likely require very large fractures, of greater than 300 m radius, to keep interfracture drilling distances to ~10 km or less. As drilling distances are generally inversely proportional to the square of fracture radius, drilling costs quickly escalate as the fracture radius decreases. It is important to know, however, whether fracture spacing will be dictated by thermal or mechanical considerations, as the relationship between drilling distance and number of fractures is quite different in each case. Information about the likelihood of hydraulically creating very large fractures comes primarily from petroleum recovery industry data describing hydraulic fractures in shale. Those data suggest that fractures with radii on the order of several hundred meters may, indeed, be possible. The results of this study demonstrate that relatively simple calculations can be used to estimate primary design constraints on a system, particularly regarding the relationship between generated fracture radius and the total length of drilling needed in the fracture creation zone. Comparison of the numerical simulations of more realistic geometries than addressed in the analytical solutions suggest that simple proportionalities can readily be derived to relate a particular flow field.
Womack, James C; Anton, Lucian; Dziedzic, Jacek; Hasnip, Phil J; Probert, Matt I J; Skylaris, Chris-Kriton
2018-03-13
The solution of the Poisson equation is a crucial step in electronic structure calculations, yielding the electrostatic potential, a key component of the quantum mechanical Hamiltonian. In recent decades, theoretical advances and increases in computer performance have made it possible to simulate the electronic structure of extended systems in complex environments. This requires the solution of more complicated variants of the Poisson equation, featuring nonhomogeneous dielectric permittivities, ionic concentrations with nonlinear dependencies, and diverse boundary conditions. The analytic solutions generally used to solve the Poisson equation in vacuum (or with homogeneous permittivity) are not applicable in these circumstances, and numerical methods must be used. In this work, we present DL_MG, a flexible, scalable, and accurate solver library, developed specifically to tackle the challenges of solving the Poisson equation in modern large-scale electronic structure calculations on parallel computers. Our solver is based on the multigrid approach and uses an iterative high-order defect correction method to improve the accuracy of solutions. Using two chemically relevant model systems, we tested the accuracy and computational performance of DL_MG when solving the generalized Poisson and Poisson-Boltzmann equations, demonstrating excellent agreement with analytic solutions and efficient scaling to ~10^9 unknowns and 100s of CPU cores. We also applied DL_MG in actual large-scale electronic structure calculations, using the ONETEP linear-scaling electronic structure package to study a 2615 atom protein-ligand complex with routinely available computational resources. In these calculations, the overall execution time with DL_MG was not significantly greater than the time required for calculations using a conventional FFT-based solver.
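A much-reduced sketch of high-order defect correction on a 1D Poisson problem follows; a direct solve of the second-order operator stands in for the multigrid cycle DL_MG uses, and the stencils, sizes and boundary treatment are illustrative only.

    import numpy as np

    n, h = 127, 1.0 / 128
    x = np.linspace(h, 1 - h, n)
    rho = np.sin(np.pi * x)                    # source; exact u = sin(pi x)/pi^2

    A2 = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2   # 2nd-order Laplacian, Dirichlet

    def apply_A4(u):
        # 4th-order 5-point stencil, zero ghost values for homogeneous BCs
        # (a simplification at the two boundary-adjacent rows).
        up = np.pad(u, 2)
        return (-up[:-4] + 16 * up[1:-3] - 30 * up[2:-2]
                + 16 * up[3:-1] - up[4:]) / (12 * h**2)

    b = -rho
    u = np.linalg.solve(A2, b)                 # cheap low-order first guess
    for _ in range(10):
        # Defect-correction sweep: iterates converge to the solution of the
        # high-order discretization while only the low-order operator is solved.
        u += np.linalg.solve(A2, b - apply_A4(u))

    print(np.abs(u - np.sin(np.pi * x) / np.pi**2).max())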
GWAS with longitudinal phenotypes: performance of approximate procedures
Sikorska, Karolina; Montazeri, Nahid Mostafavi; Uitterlinden, André; Rivadeneira, Fernando; Eilers, Paul HC; Lesaffre, Emmanuel
2015-01-01
Analysis of genome-wide association studies with longitudinal data using standard procedures, such as linear mixed model (LMM) fitting, leads to discouragingly long computation times. There is a need to speed up the computations significantly. In our previous work (Sikorska et al: Fast linear mixed model computations for genome-wide association studies with longitudinal data. Stat Med 2012; 32.1: 165–180), we proposed the conditional two-step (CTS) approach as a fast method providing an approximation to the P-value for the longitudinal single-nucleotide polymorphism (SNP) effect. In the first step a reduced conditional LMM is fit, omitting all the SNP terms. In the second step, the estimated random slopes are regressed on SNPs. The CTS has been applied to the bone mineral density data from the Rotterdam Study and proved to work very well even in unbalanced situations. In another article (Sikorska et al: GWAS on your notebook: fast semi-parallel linear and logistic regression for genome-wide association studies. BMC Bioinformatics 2013; 14: 166), we suggested semi-parallel computations, greatly speeding up fitting many linear regressions. Combining CTS with fast linear regression reduces the computation time from several weeks to a few minutes on a single computer. Here, we explore further the properties of the CTS both analytically and by simulations. We investigate the performance of our proposal in comparison with a related but different approach, the two-step procedure. It is analytically shown that for the balanced case, under mild assumptions, the P-value provided by the CTS is the same as from the LMM. For unbalanced data and in realistic situations, simulations show that the CTS method does not inflate the type I error rate and implies only a minimal loss of power. PMID:25712081
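A hedged sketch of the semi-parallel second step, regressing estimated random slopes on all SNPs at once with matrix algebra instead of a per-SNP loop, is shown below on toy data (the causal SNP index and effect size are invented).

    import numpy as np

    rng = np.random.default_rng(11)
    n, m = 2000, 5000
    G = rng.integers(0, 3, size=(n, m)).astype(float)   # genotypes coded 0/1/2
    slopes = 0.3 * G[:, 123] + rng.standard_normal(n)   # estimated random slopes

    Gc = G - G.mean(axis=0)                 # center each SNP
    yc = slopes - slopes.mean()
    sxx = (Gc ** 2).sum(axis=0)
    beta = Gc.T @ yc / sxx                  # all per-SNP effects at once
    resid_var = ((yc ** 2).sum() - beta ** 2 * sxx) / (n - 2)
    se = np.sqrt(resid_var / sxx)
    print(int(np.argmax(np.abs(beta / se))))  # should recover SNP 123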
Collisionless stellar hydrodynamics as an efficient alternative to N-body methods
NASA Astrophysics Data System (ADS)
Mitchell, Nigel L.; Vorobyov, Eduard I.; Hensler, Gerhard
2013-01-01
The dominant constituents of the Universe's matter are believed to be collisionless in nature and thus their modelling in any self-consistent simulation is extremely important. For simulations that deal only with dark matter or stellar systems, the conventional N-body technique is fast, memory efficient and relatively simple to implement. However when extending simulations to include the effects of gas physics, mesh codes are at a distinct disadvantage compared to Smooth Particle Hydrodynamics (SPH) codes. Whereas implementing the N-body approach into SPH codes is fairly trivial, the particle-mesh technique used in mesh codes to couple collisionless stars and dark matter to the gas on the mesh has a series of significant scientific and technical limitations. These include spurious entropy generation resulting from discreteness effects, poor load balancing and increased communication overhead which spoil the excellent scaling in massively parallel grid codes. In this paper we propose the use of the collisionless Boltzmann moment equations as a means to model the collisionless material as a fluid on the mesh, implementing it into the massively parallel FLASH Adaptive Mesh Refinement (AMR) code. This approach, which we term 'collisionless stellar hydrodynamics', enables us to do away with the particle-mesh approach, and since the parallelization scheme is identical to that used for the hydrodynamics, it preserves the excellent scaling of the FLASH code already demonstrated on peta-flop machines. We find that the classic hydrodynamic equations and the Boltzmann moment equations can be reconciled under specific conditions, allowing us to generate analytic solutions for collisionless systems using conventional test problems. We confirm the validity of our approach using a suite of demanding test problems, including the use of a modified Sod shock test. By deriving the relevant eigenvalues and eigenvectors of the Boltzmann moment equations, we are able to use high order accurate characteristic tracing methods with Riemann solvers to generate numerical solutions which show excellent agreement with our analytic solutions. We conclude by demonstrating the ability of our code to model complex phenomena by simulating the evolution of a two-armed spiral galaxy whose properties agree with those predicted by the swing amplification theory.
A genetic algorithm-based job scheduling model for big data analytics.
Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei
Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes substantial energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
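A toy sketch of the genetic-algorithm ingredient (not the paper's model or its Hadoop estimation module): chromosomes encode job execution orders, and fitness is the simulated completion time when jobs are greedily dispatched to identical cluster slots. Runtimes stand in for the estimation module's predictions.

    import numpy as np

    rng = np.random.default_rng(5)
    n_jobs, n_slots, pop_size = 30, 4, 40
    runtime = rng.uniform(1.0, 10.0, n_jobs)    # predicted runtime of each job

    def makespan(order):
        slots = np.zeros(n_slots)
        for j in order:                         # dispatch to least-loaded slot
            slots[slots.argmin()] += runtime[j]
        return slots.max()

    def order_crossover(p1, p2):
        a, b = sorted(rng.choice(n_jobs, 2, replace=False))
        child = np.full(n_jobs, -1)
        child[a:b] = p1[a:b]                    # keep a slice of parent 1
        rest = [j for j in p2 if j not in set(p1[a:b])]
        child[:a] = rest[:a]; child[b:] = rest[a:]
        return child

    pop = [rng.permutation(n_jobs) for _ in range(pop_size)]
    for gen in range(100):
        pop.sort(key=makespan)
        elite = pop[: pop_size // 2]            # truncation selection
        children = []
        for _ in range(pop_size - len(elite)):
            i, j = rng.choice(len(elite), 2, replace=False)
            c = order_crossover(elite[i], elite[j])
            if rng.random() < 0.2:              # swap mutation
                a, b = rng.choice(n_jobs, 2, replace=False)
                c[a], c[b] = c[b], c[a]
            children.append(c)
        pop = elite + children
    print(makespan(min(pop, key=makespan)))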
A monolithic homotopy continuation algorithm with application to computational fluid dynamics
NASA Astrophysics Data System (ADS)
Brown, David A.; Zingg, David W.
2016-09-01
A new class of homotopy continuation methods is developed suitable for globalizing quasi-Newton methods for large sparse nonlinear systems of equations. The new continuation methods, described as monolithic homotopy continuation, differ from the classical predictor-corrector algorithm in that the predictor and corrector phases are replaced with a single phase which includes both a predictor and corrector component. Conditional convergence and stability are proved analytically. Using a Laplacian-like operator to construct the homotopy, the new algorithm is shown to be more efficient than the predictor-corrector homotopy continuation algorithm as well as an implementation of the widely-used pseudo-transient continuation algorithm for some inviscid and turbulent, subsonic and transonic external aerodynamic flows over the ONERA M6 wing and the NACA 0012 airfoil using a parallel implicit Newton-Krylov finite-difference flow solver.
The Red Book and clinical practice.
Bygott, Catherine
2012-09-01
Jung's work is fundamentally an experience, not an idea. From this perspective, I attempt to bridge conference, consulting room and living psyche by considering the influence of the 'Red Book' on clinical practice through the subtle and imaginal. Jung's journey as a man broadens out to have relevance for women. His story is individual but its archetypal foundation finds parallel expression in analytic practice today. © 2012, The Society of Analytical Psychology.
Multiplex single-molecule interaction profiling of DNA-barcoded proteins.
Gu, Liangcai; Li, Chao; Aach, John; Hill, David E; Vidal, Marc; Church, George M
2014-11-27
In contrast with advances in massively parallel DNA sequencing, high-throughput protein analyses are often limited by ensemble measurements, individual analyte purification and hence compromised quality and cost-effectiveness. Single-molecule protein detection using optical methods is limited by the number of spectrally non-overlapping chromophores. Here we introduce a single-molecular-interaction sequencing (SMI-seq) technology for parallel protein interaction profiling leveraging single-molecule advantages. DNA barcodes are attached to proteins collectively via ribosome display or individually via enzymatic conjugation. Barcoded proteins are assayed en masse in aqueous solution and subsequently immobilized in a polyacrylamide thin film to construct a random single-molecule array, where barcoding DNAs are amplified into in situ polymerase colonies (polonies) and analysed by DNA sequencing. This method allows precise quantification of various proteins with a theoretical maximum array density of over one million polonies per square millimetre. Furthermore, protein interactions can be measured on the basis of the statistics of colocalized polonies arising from barcoding DNAs of interacting proteins. Two demanding applications, G-protein coupled receptor and antibody-binding profiling, are demonstrated. SMI-seq enables 'library versus library' screening in a one-pot assay, simultaneously interrogating molecular binding affinity and specificity.
Multiplex single-molecule interaction profiling of DNA barcoded proteins
Gu, Liangcai; Li, Chao; Aach, John; Hill, David E.; Vidal, Marc; Church, George M.
2014-01-01
In contrast with advances in massively parallel DNA sequencing [1], high-throughput protein analyses [2-4] are often limited by ensemble measurements, individual analyte purification and hence compromised quality and cost-effectiveness. Single-molecule (SM) protein detection achieved using optical methods [5] is limited by the number of spectrally nonoverlapping chromophores. Here, we introduce a single molecular interaction-sequencing (SMI-Seq) technology for parallel protein interaction profiling leveraging SM advantages. DNA barcodes are attached to proteins collectively via ribosome display [6] or individually via enzymatic conjugation. Barcoded proteins are assayed en masse in aqueous solution and subsequently immobilized in a polyacrylamide (PAA) thin film to construct a random SM array, where barcoding DNAs are amplified into in situ polymerase colonies (polonies) [7] and analyzed by DNA sequencing. This method allows precise quantification of various proteins with a theoretical maximum array density of over one million polonies per square millimeter. Furthermore, protein interactions can be measured based on the statistics of colocalized polonies arising from barcoding DNAs of interacting proteins. Two demanding applications, G-protein coupled receptor (GPCR) and antibody binding profiling, were demonstrated. SMI-Seq enables “library vs. library” screening in a one-pot assay, simultaneously interrogating molecular binding affinity and specificity. PMID:25252978
Parallel pumping of a ferromagnetic nanostripe: Confinement quantization and off-resonant driving
NASA Astrophysics Data System (ADS)
Yarbrough, P. M.; Livesey, K. L.
2018-01-01
The parametric excitation of spin waves in a rectangular, ferromagnetic nanowire in the parallel pump configuration and with an applied field along the long axis of the wire is studied theoretically, using a semi-classical and semi-analytic Hamiltonian approach. We find that as a function of static applied field strength, there are jumps in the pump power needed to excite thermal spin waves. At these jumps, there is the possibility to non-resonantly excite spin waves near kz = 0. Spin waves with negative or positive group velocity and with different standing wave structures across the wire width can be excited by tuning the applied field. By using a magnetostatic Green's function that depends on both the nanowire's width and thickness—rather than just its aspect ratio—we also find that the threshold field strength varies considerably for nanowires with the same aspect ratio but of different sizes. Comparisons between different methods of calculations are made and the advantages and disadvantages of each are discussed.
Algorithms for parallel flow solvers on message passing architectures
NASA Technical Reports Server (NTRS)
Vanderwijngaart, Rob F.
1995-01-01
The purpose of this project has been to identify and test suitable technologies for implementation of fluid flow solvers -- possibly coupled with structures and heat equation solvers -- on MIMD parallel computers. In the course of this investigation much attention has been paid to efficient domain decomposition strategies for ADI-type algorithms. Multi-partitioning derives its efficiency from the assignment of several blocks of grid points to each processor in the parallel computer. A coarse-grain parallelism is obtained, and a near-perfect load balance results. In uni-partitioning every processor receives responsibility for exactly one block of grid points instead of several. This necessitates fine-grain pipelined program execution in order to obtain a reasonable load balance. Although fine-grain parallelism is less desirable on many systems, especially high-latency networks of workstations, uni-partition methods are still in wide use in production codes for flow problems. Consequently, it remains important to achieve good efficiency with this technique that has essentially been superseded by multi-partitioning for parallel ADI-type algorithms. Another reason for the concentration on improving the performance of pipeline methods is their applicability in other types of flow solver kernels with stronger implied data dependence. Analytical expressions can be derived for the size of the dynamic load imbalance incurred in traditional pipelines. From these it can be determined what is the optimal first-processor retardation that leads to the shortest total completion time for the pipeline process. Theoretical predictions of pipeline performance with and without optimization match experimental observations on the iPSC/860 very well. Analysis of pipeline performance also highlights the effect of uncareful grid partitioning in flow solvers that employ pipeline algorithms. If grid blocks at boundaries are not at least as large in the wall-normal direction as those immediately adjacent to them, then the first processor in the pipeline will receive a computational load that is less than that of subsequent processors, magnifying the pipeline slowdown effect. Extra compensation is needed for grid boundary effects, even if all grid blocks are equally sized.
Theory and procedures for finding a correct kinetic model for the bacteriorhodopsin photocycle.
Hendler, R W; Shrager, R; Bose, S
2001-04-26
In this paper, we present the implementation and results of new methodology based on linear algebra. The theory behind these methods is covered in detail in the Supporting Information, available electronically (Shrager and Hendler). In brief, the methods presented search through all possible forward sequential submodels in order to find candidates that can be used to construct a complete model for the BR photocycle. The methodology is limited to forward sequential models; if no such models are compatible with the experimental data, none will be found. The procedures apply objective tests and filters to eliminate possibilities that cannot be correct, thus cutting the total number of candidate sequences to be considered. In the current application, which uses six exponentials, the total number of sequences was cut from 1950 to 49. The remaining sequences were further screened using known experimental criteria. The approach led to a solution consisting of a pair of sequences, one with five exponentials, BR* → L(f) → M(f) → N → O → BR, and the other with three exponentials, BR* → L(s) → M(s) → BR. The deduced complete kinetic model for the BR photocycle is thus either a single photocycle branched at the L intermediate or a pair of two parallel photocycles. Reasons for preferring the parallel photocycles are presented. Synthetic data constructed on the basis of the parallel photocycles were indistinguishable from the experimental data in a number of analytical tests that were applied.
New hybrid voxelized/analytical primitive in Monte Carlo simulations for medical applications
NASA Astrophysics Data System (ADS)
Bert, Julien; Lemaréchal, Yannick; Visvikis, Dimitris
2016-05-01
Monte Carlo simulations (MCS) applied in particle physics play a key role in medical imaging and particle therapy. In such simulations, particles are transported through voxelized phantoms derived predominantly from patient CT images. However, such voxelized object representation limits the incorporation of fine elements, such as artificial implants from CAD modeling or anatomical and functional details extracted from other imaging modalities. In this work we propose a new hYbrid Voxelized/ANalytical primitive (YVAN) that combines both voxelized and analytical object descriptions within the same MCS, without the need to simultaneously run two parallel simulations, which is the current gold-standard methodology. Given that YVAN is simply a new primitive object, it does not require any modifications of the underlying MC navigation code. The new proposed primitive was first assessed through a simple MCS. Results from the YVAN primitive were compared against an MCS using a pure analytical geometry and the layered mass geometry concept. A perfect agreement was found between these simulations, leading to the conclusion that the new hybrid primitive is able to accurately and efficiently handle phantoms defined by a mixture of voxelized and analytical objects. In addition, two application-based evaluation studies in coronary angiography and intra-operative radiotherapy showed that the use of YVAN was 6.5% and 12.2% faster than the layered mass geometry method, respectively, without any associated loss of accuracy. However, the simplification advantages and differences in computational time improvements obtained with YVAN depend on the relative proportion of the analytical and voxelized structures used in the simulation, as well as the size and number of triangles used in the description of the analytical object meshes.
Petruzziello, Filomena; Grand-Guillaume Perrenoud, Alexandre; Thorimbert, Anita; Fogwill, Michael; Rezzi, Serge
2017-07-18
Analytical solutions enabling the quantification of circulating levels of liposoluble micronutrients such as vitamins and carotenoids are currently limited to either a single analyte or a reduced panel of analytes. The requirement to use multiple approaches hampers the investigation of biological variability on a large number of samples in a time- and cost-efficient manner. With the goal to develop high-throughput and robust quantitative methods for the profiling of micronutrients in human plasma, we introduce a novel, validated workflow for the determination of 14 fat-soluble vitamins and carotenoids in a single run. Automated supported liquid extraction was optimized and implemented to process 48 samples in parallel in 1 h, and the analytes were measured using ultrahigh-performance supercritical fluid chromatography coupled to tandem mass spectrometry in less than 8 min. Improved mass spectrometry interface hardware was built to minimize the post-decompression volume and to allow better control of the chromatographic effluent density on its route toward and into the ion source. In addition, a specific make-up solvent condition was developed to ensure the solubility of both analytes and matrix constituents after mobile phase decompression. The optimized interface resulted in improved spray plume stability and conserved matrix compound solubility, leading to enhanced hyphenation robustness while ensuring suitable analytical repeatability and improved detection sensitivity. The overall developed methodology gives recoveries within 85-115%, as well as within- and between-day coefficients of variation of 2 and 14%, respectively.
A Paper-Based Electrochromic Array for Visualized Electrochemical Sensing.
Zhang, Fengling; Cai, Tianyi; Ma, Liang; Zhan, Liyuan; Liu, Hong
2017-01-31
We report a battery-powered, paper-based electrochromic array for visualized electrochemical sensing. The paper-based sensing system consists of six parallel electrochemical cells, which are powered by an aluminum-air battery. Each electrochemical cell uses a Prussian Blue spot electrodeposited on an indium-doped tin oxide thin film as the electrochromic indicator, and the cells are preloaded with increasing amounts of analyte. The sample activates the battery for the sensing. Both the preloaded analyte and the analyte in the sample initiate the color change of Prussian Blue to Prussian White. With a reaction time of 60 s, the number of electrochemical cells with complete color changes correlates with the concentration of analyte in the sample. As a proof of concept, lactic acid was detected semi-quantitatively with the naked eye.
Well test mathematical model for fracture networks in tight oil reservoirs
NASA Astrophysics Data System (ADS)
Diwu, Pengxiang; Liu, Tongjing; Jiang, Baoyi; Wang, Rui; Yang, Peidie; Yang, Jiping; Wang, Zhaoming
2018-02-01
Well tests, especially build-up tests, have been applied widely in the development of tight oil reservoirs, since they are the only available low-cost way to directly quantify flow ability and formation heterogeneity parameters. However, because of the fracture network near the wellbore, generated by artificial fracturing linking up natural fractures, traditional infinite- and finite-conductivity fracture models usually show significant deviations in field application. In this work, considering the random distribution of natural fractures, a physical model of the fracture network is proposed, and it exhibits a composite-model feature at large scale. Consequently, a nonhomogeneous composite mathematical model is established with a threshold pressure gradient. To solve this model semi-analytically, we propose a solution approach combining the Laplace transform and Bessel functions of imaginary argument, and this method is verified by comparison with an existing analytical solution. The matching of typical type curves generated from the semi-analytical solution indicates that the proposed physical and mathematical model can describe the type-curve characteristics of typical tight oil reservoirs, which show upwarping at late times rather than parallel lines with slopes of 1/2 or 1/4. This means the composite model could be used in the pressure interpretation of artificially fractured wells in tight oil reservoirs.
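The abstract does not reproduce the semi-analytical solution, but the Laplace-space route it describes typically ends with a numerical inversion back to the time domain. A common choice in well testing is the Gaver-Stehfest algorithm, sketched below on a known transform pair; this is a generic illustration, not the authors' solution:

```python
from math import factorial, log, exp

def stehfest_weights(n):
    """Gaver-Stehfest weights V_i for an even number of terms n."""
    v = []
    for i in range(1, n + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, n // 2) + 1):
            s += (k ** (n // 2) * factorial(2 * k)
                  / (factorial(n // 2 - k) * factorial(k) * factorial(k - 1)
                     * factorial(i - k) * factorial(2 * k - i)))
        v.append((-1) ** (n // 2 + i) * s)
    return v

def invert(F, t, n=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    ln2 = log(2.0)
    return ln2 / t * sum(v * F(i * ln2 / t)
                         for i, v in enumerate(stehfest_weights(n), start=1))

# Check against the known pair F(s) = 1/(s + 1)  <->  f(t) = exp(-t)
print(invert(lambda s: 1.0 / (s + 1.0), 1.0), exp(-1.0))
```

In practice, F(s) would be the model's Laplace-space pressure solution (built from the modified Bessel functions) evaluated at the Stehfest nodes.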
Valverde-Som, Lucia; Ruiz-Samblás, Cristina; Rodríguez-García, Francisco P; Cuadros-Rodríguez, Luis
2018-02-09
The organoleptic quality of virgin olive oil depends on positive and negative sensory attributes. These attributes are related to volatile organic compounds and phenolic compounds that represent the aroma and taste (flavour) of the virgin olive oil. The flavour is the characteristic that can be measured by a taster panel. However, as for any analytical measuring device, the tasters, individually, and the panel, as a whole, should be harmonized and validated, and proper olive oil standards are needed. In the present study, multivariate approaches are put into practice, in addition to the rules to build a multivariate control chart, from chromatographic volatile fingerprinting and chemometrics. Fingerprinting techniques provide analytical information without identifying and quantifying the analytes. This methodology is used to monitor the stability of sensory reference materials. Similarity indices have been calculated to build multivariate control charts for two certified reference olive oils, which were used as examples to monitor their stability. This methodology with chromatographic data could be applied in parallel with the 'panel test' sensory method to reduce the work of sensory analysis. © 2018 Society of Chemical Industry.
NASA Technical Reports Server (NTRS)
Bantle, J. W.
1985-01-01
Aerodynamic interference effects were studied for two slender, streamlined bodies of revolution at Mach 2.7. A wind tunnel investigation produced force and moment data and measurements of pressure distributions on the bodies. With the bodies held parallel to each other and to the freestream flow, their relative lateral and longitudinal spacing was varied. Theoretical predictions were used in the analysis of the results. The interference effects between the two bodies yielded less total drag than a single body of equal total volume and the same length.
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.; Park, Young-Keun
2004-01-01
A series of three dimensional simulations has been performed to investigate analytically the effect of insulating foam impacts on ceramic tile and reinforced carbon-carbon components of the Space Shuttle thermal protection system. The simulations employed a hybrid particle-finite element method and a parallel code developed for use in spacecraft design applications. The conclusions suggested by the numerical study are in general consistent with experiment. The results emphasize the need for additional material testing work on the dynamic mechanical response of thermal protection system materials, and additional impact experiments for use in validating computational models of impact effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Ang; Song, Shuaiwen; Brugel, Eric
As modern parallel machines continue to track Moore's Law, they become increasingly complex. Effectively tuning application performance for these machines therefore becomes a daunting task. Moreover, identifying performance bottlenecks at the application and architecture level, as well as evaluating various optimization strategies, becomes extremely difficult when numerous correlated factors are entangled. To tackle these challenges, we present a visual analytical model named "X". It is intuitive and sufficiently flexible to track all the typical features of a parallel machine.
Fast mass spectrometry-based enantiomeric excess determination of proteinogenic amino acids.
Fleischer, Heidi; Thurow, Kerstin
2013-03-01
A rapid determination of the enantiomeric excess of proteinogenic amino acids is of great importance in various fields of chemical and biologic research and industry. Owing to their different biologic effects, enantiomers are interesting research subjects in drug development for the design of new and more efficient pharmaceuticals. Usually, the enantiomeric composition of amino acids is determined by conventional analytical methods such as liquid or gas chromatography or capillary electrophoresis. These analytical techniques do not fulfill the requirements of high-throughput screening due to their relatively long analysis times. The method presented allows a fast analysis of chiral amino acids without prior time-consuming chromatographic separation. The analytical measurements are based on parallel kinetic resolution with pseudoenantiomeric mass-tagged auxiliaries and were carried out by mass spectrometry with electrospray ionization. All 19 chiral proteinogenic amino acids were tested, and Pro, Ser, Trp, His, and Glu were selected as model substrates for verification measurements. The enantiomeric excesses of amino acids with non-polar and aliphatic side chains, as well as Trp and Phe (aromatic side chains), were determined with maximum deviations from the expected value of less than or equal to 10 %ee. Ser, Cys, His, Glu, and Asp were determined with deviations of less than or equal to 14 %ee, and the enantiomeric excess of Tyr was calculated with 17 %ee deviation. The total screening process is fully automated, from sample pretreatment to data processing. The method presented enables fast measurement times of about 1.38 min per sample and is applicable in the scope of high-throughput screening.
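In parallel kinetic resolution with mass-tagged pseudoenantiomeric auxiliaries, the two diastereomeric products appear at distinct m/z values, and the enantiomeric excess follows from their relative ion intensities. A minimal sketch of that final calculation (illustrative only; it assumes equal ionization efficiency for the two products and omits any selectivity corrections the actual method may apply):

```python
def enantiomeric_excess(intensity_r, intensity_s):
    """ee (%) from the MS intensities of the two mass-tagged products,
    assuming equal ionization efficiency for both (an idealization)."""
    return 100.0 * (intensity_r - intensity_s) / (intensity_r + intensity_s)

# Example: hypothetical peak areas for the two pseudoenantiomer-derived ions
print(enantiomeric_excess(8.2e5, 1.8e5))  # -> 64.0 %ee
```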
Distributed and parallel approach for handling and processing huge datasets
NASA Astrophysics Data System (ADS)
Konopko, Joanna
2015-12-01
Big Data refers to dynamic, large, and disparate volumes of data that come from many different sources (tools, machines, sensors, mobile devices), uncorrelated with each other. It requires new, innovative, and scalable technology to collect, host, and analytically process the vast amount of data. A proper architecture for a system that processes huge datasets is needed. In this paper, a comparison of distributed and parallel system architectures is presented using the example of the MapReduce (MR) Hadoop platform and a parallel database platform (DBMS). This paper also analyzes the problem of extracting valuable information from petabytes of data. Both paradigms, MapReduce and parallel DBMS, are described and compared. A hybrid architecture approach is also proposed, which could be used to solve the analyzed problem of storing and processing Big Data.
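The MapReduce contract that the paper compares against a parallel DBMS can be stated compactly: a map function emits key-value pairs, the framework groups them by key, and a reduce function folds each group. A minimal single-process sketch of that contract follows (Hadoop distributes exactly this pattern across a cluster; the word-count example is the usual illustration, not taken from the paper):

```python
from collections import defaultdict

def map_fn(record):
    # Emit (key, value) pairs; here, word counts from a line of text.
    for word in record.split():
        yield word.lower(), 1

def reduce_fn(key, values):
    return key, sum(values)

def mapreduce(records):
    groups = defaultdict(list)
    for record in records:               # map phase (parallel on Hadoop)
        for key, value in map_fn(record):
            groups[key].append(value)    # shuffle: group values by key
    return dict(reduce_fn(k, vs) for k, vs in groups.items())  # reduce phase

print(mapreduce(["big data big volume", "volume of data"]))
```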
Flow of nanofluid past a Riga plate
NASA Astrophysics Data System (ADS)
Ahmad, Adeel; Asghar, Saleem; Afzal, Sumaira
2016-03-01
This paper studies the mixed convection boundary layer flow of a nanofluid past a vertical Riga plate in the presence of strong suction. The mathematical model incorporates the Brownian motion and thermophoresis effects due to nanofluid and the Grinberg-term for the wall parallel Lorentz force due to Riga plate. The analytical solution of the problem is presented using the perturbation method for small Brownian and thermophoresis diffusion parameters. The numerical solution is also presented to ensure the reliability of the asymptotic method. The comparison of the two solutions shows an excellent agreement. The correlation expressions for skin friction, Nusselt number and Sherwood number are developed by performing linear regression on the obtained numerical data. The effects of nanofluid and the Lorentz force due to Riga plate, on the skin friction are discussed.
NASA Astrophysics Data System (ADS)
Loyau, V.; Aubert, A.; LoBue, M.; Mazaleyrat, F.
2017-03-01
In this paper, we investigate the demagnetizing effect in ferrite/PZT/ferrite magnetoelectric (ME) trilayer composites consisting of commercial PZT discs bonded by epoxy layers to Ni-Co-Zn ferrite discs made by a reactive Spark Plasma Sintering (SPS) technique. ME voltage coefficients (transversal mode) were measured on ferrite/PZT/ferrite trilayer ME samples with different thicknesses or phase volume ratio in order to highlight the influence of the magnetic field penetration governed by these geometrical parameters. Experimental ME coefficients and voltages were compared to analytical calculations using a quasi-static model. Theoretical demagnetizing factors of two magnetic discs that interact together in parallel magnetic structures were derived from an analytical calculation based on a superposition method. These factors were introduced in ME voltage calculations which take account of the demagnetizing effect. To fit the experimental results, a mechanical coupling factor was also introduced in the theoretical formula. This reflects the differential strain that exists in the ferrite and PZT layers due to shear effects near the edge of the ME samples and within the bonding epoxy layers. From this study, an optimization in magnitude of the ME voltage is obtained. Lastly, an analytical calculation of demagnetizing effect was conducted for layered ME composites containing higher numbers of alternated layers (n ≥ 5). The advantage of such a structure is then discussed.
NMR and MS Methods for Metabolomics.
Amberg, Alexander; Riefke, Björn; Schlotterbeck, Götz; Ross, Alfred; Senn, Hans; Dieterle, Frank; Keck, Matthias
2017-01-01
Metabolomics, also often referred as "metabolic profiling," is the systematic profiling of metabolites in biofluids or tissues of organisms and their temporal changes. In the last decade, metabolomics has become more and more popular in drug development, molecular medicine, and other biotechnology fields, since it profiles directly the phenotype and changes thereof in contrast to other "-omics" technologies. The increasing popularity of metabolomics has been possible only due to the enormous development in the technology and bioinformatics fields. In particular, the analytical technologies supporting metabolomics, i.e., NMR, UPLC-MS, and GC-MS, have evolved into sensitive and highly reproducible platforms allowing the determination of hundreds of metabolites in parallel. This chapter describes the best practices of metabolomics as seen today. All important steps of metabolic profiling in drug development and molecular medicine are described in great detail, starting from sample preparation to determining the measurement details of all analytical platforms, and finally to discussing the corresponding specific steps of data analysis.
NMR and MS methods for metabonomics.
Dieterle, Frank; Riefke, Björn; Schlotterbeck, Götz; Ross, Alfred; Senn, Hans; Amberg, Alexander
2011-01-01
Metabonomics, also often referred to as "metabolomics" or "metabolic profiling," is the systematic profiling of metabolites in bio-fluids or tissues of organisms and their temporal changes. In the last decade, metabonomics has become increasingly popular in drug development, molecular medicine, and other biotechnology fields, since it profiles directly the phenotype and changes thereof in contrast to other "-omics" technologies. The increasing popularity of metabonomics has been possible only due to the enormous development in the technology and bioinformatics fields. In particular, the analytical technologies supporting metabonomics, i.e., NMR, LC-MS, UPLC-MS, and GC-MS have evolved into sensitive and highly reproducible platforms allowing the determination of hundreds of metabolites in parallel. This chapter describes the best practices of metabonomics as seen today. All important steps of metabolic profiling in drug development and molecular medicine are described in great detail, starting from sample preparation, to determining the measurement details of all analytical platforms, and finally, to discussing the corresponding specific steps of data analysis.
Polymer mobilization and drug release during tablet swelling. A 1H NMR and NMR microimaging study.
Dahlberg, Carina; Fureby, Anna; Schuleit, Michael; Dvinskikh, Sergey V; Furó, István
2007-09-26
The objective of this study was to investigate the swelling characteristics of a hydroxypropyl methylcellulose (HPMC) matrix incorporating the hydrophilic drug antipyrine. We have used this matrix to introduce a novel analytical method, which allows us to obtain, within one experimental setup, information about the molecular processes of the polymer carrier and their impact on drug release. Nuclear magnetic resonance (NMR) imaging revealed in situ the swelling behavior of tablets when exposed to water. By using deuterated water, the spatial distribution and molecular dynamics of HPMC and their kinetics during swelling could be observed selectively. In parallel, NMR spectroscopy provided the concentration of the drug released into the aqueous phase. We find that both swelling and release are diffusion controlled. The ability to monitor these two processes within the same experimental setup enables mapping of their interconnection, which points to the importance and potential of this analytical technique for further application to other drug delivery forms.
Interactive visual exploration and analysis of origin-destination data
NASA Astrophysics Data System (ADS)
Ding, Linfang; Meng, Liqiu; Yang, Jian; Krisp, Jukka M.
2018-05-01
In this paper, we propose a visual analytics approach for the exploration of spatiotemporal interaction patterns of massive origin-destination data. Firstly, we visually query the movement database for data at certain time windows. Secondly, we conduct interactive clustering to allow the users to select input variables/features (e.g., origins, destinations, distance, and duration) and to adjust clustering parameters (e.g. distance threshold). The agglomerative hierarchical clustering method is applied for the multivariate clustering of the origin-destination data. Thirdly, we design a parallel coordinates plot for visualizing the precomputed clusters and for further exploration of interesting clusters. Finally, we propose a gradient line rendering technique to show the spatial and directional distribution of origin-destination clusters on a map view. We implement the visual analytics approach in a web-based interactive environment and apply it to real-world floating car data from Shanghai. The experiment results show the origin/destination hotspots and their spatial interaction patterns. They also demonstrate the effectiveness of our proposed approach.
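As a concrete reading of the clustering step, the sketch below groups synthetic origin-destination records with agglomerative hierarchical clustering cut at a distance threshold, the same two ingredients (feature selection and an adjustable threshold) that the interactive tool exposes. It uses SciPy and is not the authors' implementation:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Synthetic OD records: origin x/y and destination x/y (e.g., in km),
# drawn around two distinct OD flows.
od = np.vstack([rng.normal([2, 2, 8, 8], 0.3, (50, 4)),
                rng.normal([7, 1, 3, 9], 0.3, (50, 4))])

# Agglomerative clustering; the cut threshold plays the role of the
# user-adjustable distance parameter in the interactive tool.
tree = linkage(od, method="average")
labels = fcluster(tree, t=2.0, criterion="distance")
print(np.bincount(labels)[1:])  # cluster sizes, e.g., [50 50]
```

Each resulting cluster would then become one polyline bundle in the parallel coordinates plot and one gradient line on the map view.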
Oud, Bart; Maris, Antonius J A; Daran, Jean-Marc; Pronk, Jack T
2012-01-01
Successful reverse engineering of mutants that have been obtained by nontargeted strain improvement has long presented a major challenge in yeast biotechnology. This paper reviews the use of genome-wide approaches for analysis of Saccharomyces cerevisiae strains originating from evolutionary engineering or random mutagenesis. On the basis of an evaluation of the strengths and weaknesses of different methods, we conclude that for the initial identification of relevant genetic changes, whole genome sequencing is superior to other analytical techniques, such as transcriptome, metabolome, proteome, or array-based genome analysis. Key advantages of this technique over gene expression analysis include the independency of genome sequences on experimental context and the possibility to directly and precisely reproduce the identified changes in naive strains. The predictive value of genome-wide analysis of strains with industrially relevant characteristics can be further improved by classical genetics or simultaneous analysis of strains derived from parallel, independent strain improvement lineages. PMID:22152095
Piccirilli, Gisela N; Escandar, Graciela M
2006-09-01
This paper demonstrates for the first time the power of a chemometric second-order algorithm for predicting, in a simple way and using spectrofluorimetric data, the concentration of analytes in the presence of both the inner-filter effect and unsuspected species. The simultaneous determination of the systemic fungicides carbendazim and thiabendazole was achieved and employed for a discussion of the scope of the applied second-order chemometric tools: parallel factor analysis (PARAFAC) and partial least-squares with residual bilinearization (PLS/RBL). The chemometric study was performed using fluorescence excitation-emission matrices obtained after extraction of the analytes onto a C18-membrane surface. The ability of PLS/RBL to recognize and overcome the significant changes produced by thiabendazole in both the excitation and emission spectra of carbendazim is demonstrated. The high performance of the selected PLS/RBL method was established by the determination of both pesticides in artificial and real samples.
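Trilinear EEM stacks of the kind analyzed here can be decomposed with open-source tools. Below is a minimal sketch using the tensorly library on a synthetic two-component tensor; this illustrates the PARAFAC step only, not the software or data used in the paper:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(1)
# Synthetic trilinear data (samples x excitation x emission) built from
# two "analyte" profiles plus a little noise.
conc = rng.uniform(0, 1, (10, 2))   # concentrations per sample
exc = rng.uniform(0, 1, (40, 2))    # excitation profiles
emi = rng.uniform(0, 1, (60, 2))    # emission profiles
X = tl.cp_to_tensor((None, [conc, exc, emi])) \
    + 0.01 * rng.standard_normal((10, 40, 60))

# PARAFAC recovers the profiles up to scaling and permutation
weights, factors = parafac(tl.tensor(X), rank=2, n_iter_max=200)
print([f.shape for f in factors])   # [(10, 2), (40, 2), (60, 2)]
```

The sample-mode factor plays the role of the relative concentrations, which is what second-order calibration ultimately converts into predicted analyte levels.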
Nonlinear Gyro-Landau-Fluid Equations
NASA Astrophysics Data System (ADS)
Raskolnikov, I.; Mattor, Nathan; Parker, Scott E.
1996-11-01
We present fluid equations which describe the effects of both linear and nonlinear Landau damping (wave-particle-wave effects). These are derived using a recently developed analytical method similar to renormalization group theory (S. E. Parker and D. Carati, Phys. Rev. Lett. 75, 441 (1995)). In this technique, the phase space structure inherent in Landau damping is treated analytically by building a "renormalized collisionality" onto a bare collisionality (which may be taken as vanishingly small). Here we apply this technique to the nonlinear ion gyrokinetic equation in slab geometry, obtaining nonlinear fluid equations for density, parallel momentum, and heat. Wave-particle resonances are described by two functions appearing in the heat equation: a renormalized "collisionality" and a renormalized nonlinear coupling coefficient. It will be shown that these new equations may correct a deficiency in existing gyrofluid equations (G. W. Hammett and F. W. Perkins, Phys. Rev. Lett. 64, 3019 (1990)), which can severely underestimate the strength of nonlinear interaction in regimes where linear resonance is strong (N. Mattor, Phys. Fluids B 4, 3952 (1992)).
Oud, Bart; van Maris, Antonius J A; Daran, Jean-Marc; Pronk, Jack T
2012-03-01
Successful reverse engineering of mutants that have been obtained by nontargeted strain improvement has long presented a major challenge in yeast biotechnology. This paper reviews the use of genome-wide approaches for analysis of Saccharomyces cerevisiae strains originating from evolutionary engineering or random mutagenesis. On the basis of an evaluation of the strengths and weaknesses of different methods, we conclude that for the initial identification of relevant genetic changes, whole genome sequencing is superior to other analytical techniques, such as transcriptome, metabolome, proteome, or array-based genome analysis. Key advantages of this technique over gene expression analysis include the independency of genome sequences on experimental context and the possibility to directly and precisely reproduce the identified changes in naive strains. The predictive value of genome-wide analysis of strains with industrially relevant characteristics can be further improved by classical genetics or simultaneous analysis of strains derived from parallel, independent strain improvement lineages. © 2011 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.
A novel control algorithm for interaction between surface waves and a permeable floating structure
NASA Astrophysics Data System (ADS)
Tsai, Pei-Wei; Alsaedi, A.; Hayat, T.; Chen, Cheng-Wu
2016-04-01
An analytical solution is developed to describe the wave-induced flow field and the surge motion of a permeable platform structure with fuzzy controllers in an oceanic environment. In the design procedure of the controller, a parallel distributed compensation (PDC) scheme is utilized to construct a global fuzzy logic controller by blending all local state feedback controllers. A stability analysis is carried out for a real structure system using the Lyapunov method. The corresponding boundary value problems are then incorporated into scattering and radiation problems. They are analytically solved, based on separation of variables, to obtain series solutions in terms of the harmonic incident wave motion and surge motion. The dependence of the wave-induced flow field and its resonant frequency on wave characteristics and structure properties, including platform width, thickness, and mass, is thus drawn with a parametric approach. From these results, mathematical models are applied to the wave-induced displacement of the surge motion. A nonlinearly inverted pendulum system is employed to demonstrate that the controller tuned by the swarm intelligence method can not only stabilize the nonlinear system but is also robust against external disturbance.
Experimental and CFD evidence of multiple solutions in a naturally ventilated building.
Heiselberg, P; Li, Y; Andersen, A; Bjerre, M; Chen, Z
2004-02-01
This paper considers the existence of multiple solutions to natural ventilation of a simple one-zone building, driven by combined thermal and opposing wind forces. The present analysis is an extension of an earlier analytical study of natural ventilation in a fully mixed building, and includes the effect of thermal stratification. Both computational and experimental investigations were carried out in parallel with an analytical investigation. When flow is dominated by thermal buoyancy, it was found experimentally that there is thermal stratification. When the flow is wind-dominated, the room is fully mixed. Results from all three methods have shown that the hysteresis phenomena exist. Under certain conditions, two different stable steady-state solutions are found to exist by all three methods for the same set of parameters. As shown by both the computational fluid dynamics (CFD) and experimental results, one of the solutions can shift to another when there is a sufficient perturbation. These results have probably provided the strongest evidence so far for the conclusion that multiple states exist in natural ventilation of simple buildings. Different initial conditions in the CFD simulations led to different solutions, suggesting that caution must be taken when adopting the commonly used 'zero initialization'.
Parallelized modelling and solution scheme for hierarchically scaled simulations
NASA Technical Reports Server (NTRS)
Padovan, Joe
1995-01-01
This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications, and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV), or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to large reductions in memory, communications, and computational effort in a parallel computing environment, substantial reductions are generated in the sequential mode of application. Such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques for expanding the problem size that the current generation of computers is capable of solving are presented and discussed. Among these techniques are several interpolative reduction methods. It was found that, by combining several of these techniques, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features/benefits are discussed in this paper. Along with Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications. These demonstrate the potential of the HPT strategy.
A parallel orbital-updating based plane-wave basis method for electronic structure calculations
NASA Astrophysics Data System (ADS)
Pan, Yan; Dai, Xiaoying; de Gironcoli, Stefano; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui
2017-11-01
Motivated by the recently proposed parallel orbital-updating approach in the real-space method [1], we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large-scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large-scale calculations on modern supercomputers.
Investigating a method of producing "red and dead" galaxies
NASA Astrophysics Data System (ADS)
Skory, Stephen
2010-08-01
In optical wavelengths, galaxies are observed to be either red or blue. The overall color of a galaxy is due to the distribution of the ages of its stellar population. Galaxies with currently active star formation appear blue, while those with no recent star formation at all (greater than about a Gyr) have only old, red stars. This strong bimodality has led to the idea of star formation quenching and various proposed physical mechanisms. In this dissertation, I attempt to reproduce with Enzo the results of Naab et al. (2007), in which red and dead galaxies are formed using gravitational quenching, rather than with one of the more typical methods of quenching. My initial attempts are unsuccessful, and I explore the reasons why I think they failed. Then, using simpler methods better suited to Enzo + AMR, I am successful in producing a galaxy that appears to be similar in color and formation history to those in Naab et al. However, quenching is achieved using unphysically high star formation efficiencies, which is a different mechanism than Naab et al. suggest. Preliminary results of a much higher resolution, follow-on simulation show some possible contradiction with the results of Naab et al. Cold gas is streaming into the galaxy to fuel starbursts, while at a similar epoch the galaxies in Naab et al. have largely already ceased forming stars. On the other hand, the results of the high resolution simulation are qualitatively similar to other works in the literature that show a somewhat different gravitational quenching mechanism than Naab et al. I also discuss my work using halo finders to analyze simulated cosmological data, and my work improving the Enzo/AMR analysis tool "yt". This includes two parallelizations of the halo finder HOP (Eisenstein and Hut, 1998), which allow analysis of very large cosmological datasets on parallel machines. The first version is "yt-HOP," which works well for datasets between about 256^3 and 512^3 particles, but has memory bottlenecks as the datasets get larger. These bottlenecks inspired the second version, "Parallel HOP," which is a fully parallelized method and implementation of HOP that has worked on datasets with more than 2048^3 particles on hundreds of processing cores. Both methods are described in detail, as are the various effects of performance-related runtime options. Additionally, both halo finders are subjected to a full suite of performance benchmarks varying both dataset sizes and computational resources used. I conclude with descriptions of four new tools I added to yt. A Parallel Structure Function Generator allows analysis of two-point functions, such as correlation functions, using memory- and workload-parallelism. A Parallel Merger Tree Generator leverages the parallel halo finders in yt, such as Parallel HOP, to build the merger tree of halos in a cosmological simulation, and outputs the result to a SQLite database for simple and powerful data extraction. A Star Particle Analysis toolkit takes a group of star particles and can output the rate of formation as a function of time, and/or a synthetic Spectral Energy Distribution (S.E.D.) using the Bruzual and Charlot (2003) data tables. Finally, a Halo Mass Function toolkit takes as input a list of halo masses and can output the halo mass function for those halos, as well as an analytical fit using several previously published fits.
OceanXtremes: Scalable Anomaly Detection in Oceanographic Time-Series
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Armstrong, E. M.; Chin, T. M.; Gill, K. M.; Greguska, F. R., III; Huang, T.; Jacob, J. C.; Quach, N.
2016-12-01
The oceanographic community must meet the challenge to rapidly identify features and anomalies in complex and voluminous observations to further science and improve decision support. Given this data-intensive reality, we are developing an anomaly detection system, called OceanXtremes, powered by an intelligent, elastic Cloud-based analytic service backend that enables execution of domain-specific, multi-scale anomaly and feature detection algorithms across the entire archive of 15- to 30-year ocean science datasets. Our parallel analytics engine is extending the NEXUS system and exploits multiple open-source technologies: Apache Cassandra as a distributed spatial "tile" cache, Apache Spark for in-memory parallel computation, and Apache Solr for spatial search and storing pre-computed tile statistics and other metadata. OceanXtremes provides these key capabilities: Parallel generation (Spark on a compute cluster) of 15- to 30-year ocean climatologies (e.g. sea surface temperature or SST) in hours or overnight, using simple pixel averages or customizable Gaussian-weighted "smoothing" over latitude, longitude, and time; Parallel pre-computation, tiling, and caching of anomaly fields (daily variables minus a chosen climatology) with pre-computed tile statistics; Parallel detection (over the time-series of tiles) of anomalies or phenomena by regional area-averages exceeding a specified threshold (e.g. high SST in El Nino or SST "blob" regions), or more complex, custom data mining algorithms; Shared discovery and exploration of ocean phenomena and anomalies (facet search using Solr), along with unexpected correlations between key measured variables; Scalable execution for all capabilities on a hybrid Cloud, using our on-premise OpenStack Cloud cluster or at Amazon. The key idea is that the parallel data-mining operations will be run "near" the ocean data archives (a local "network" hop) so that we can efficiently access the thousands of files making up a three-decade time-series. The presentation will cover the architecture of OceanXtremes, parallelization of the climatology computation and anomaly detection algorithms using Spark, example results for SST and other time-series, and parallel performance metrics.
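A minimal PySpark rendering of the core pipeline (per-pixel climatology by day-of-year, anomaly as the departure from it, thresholding) is sketched below; the column names and file paths are hypothetical, and the production NEXUS/Spark code operates on cached tiles rather than individual rows:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sst-anomalies").getOrCreate()
sst = spark.read.parquet("sst_timeseries.parquet")  # hypothetical path
# Expected columns: lat, lon, date, sst

# Climatology: multi-year mean per pixel and day-of-year
clim = (sst.withColumn("doy", F.dayofyear("date"))
           .groupBy("lat", "lon", "doy")
           .agg(F.avg("sst").alias("climatology")))

# Anomaly field and a simple threshold-based detection
anomalies = (sst.withColumn("doy", F.dayofyear("date"))
                .join(clim, ["lat", "lon", "doy"])
                .withColumn("anomaly", F.col("sst") - F.col("climatology"))
                .filter(F.abs(F.col("anomaly")) > 2.0))   # e.g., 2 degC

anomalies.write.parquet("sst_anomalies.parquet")
```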
NASA Astrophysics Data System (ADS)
Bogdanov, Valery L.; Boyce-Jacino, Michael
1999-05-01
Confined arrays of biochemical probes deposited on a solid support surface (analytical microarrays or 'chips') provide an opportunity to analyze multiple reactions simultaneously. Microarrays are increasingly used in genetics, medicine, and environmental scanning as research and analytical instruments. The power of microarray technology comes from its parallelism, which grows with array miniaturization, minimization of reagent volume per reaction site, and reaction multiplexing. An optical detector of microarray signals should combine high sensitivity with spatial and spectral resolution. Additionally, low cost and a high processing rate are needed to transfer microarray technology into biomedical practice. We designed an imager that provides confocal, complete-spectrum detection of an entire fluorescently labeled microarray in parallel. The imager uses a microlens array, a non-slit spectral decomposer, and a highly sensitive detector (cooled CCD). Two imaging channels provide simultaneous detection of localization, integrated intensity, and spectral intensity for each reaction site in the microarray. Dimensional matching between the microarray and the imager's optics eliminates all moving parts in the instrumentation, enabling highly informative, fast, and low-cost microarray detection. We report the theory of confocal hyperspectral imaging with a microlens array and experimental data for the implementation of the developed imager to detect a fluorescently labeled microarray with a density of approximately 10^3 sites per cm^2.
Lashgari, Maryam; Lee, Hian Kee
2014-11-21
In the current study, a simple, fast and efficient combination of protein precipitation and micro-solid phase extraction (μ-SPE) followed by liquid chromatography-triple quadrupole tandem mass spectrometry (LC-MS/MS) was developed for the determination of perfluorinated carboxylic acids (PFCAs) in fish fillet. Ten PFCAs with different hydrocarbon chain lengths (C5-C14) were analysed simultaneously using this method. Protein precipitation by acetonitrile and μ-SPE by surfactant-incorporated ordered mesoporous silica were applied for the extraction and concentration of the PFCAs as well as for the removal of interferences. Determination of the PFCAs was carried out by LC-MS/MS in negative electrospray ionization mode. MS/MS parameters were optimized for multiple reaction monitoring of the analytes. (13)C-mass-labelled PFOA was used as a stable-isotope internal standard for calibration. The detection limits of the method ranged from 0.97 ng/g to 2.7 ng/g, with relative standard deviations between 5.4 and 13.5%. The recoveries, evaluated for each analyte, ranged from 77% to 120%. A t-test at the 95% confidence level showed that, for all the analytes, the relative recoveries did not depend on their concentrations in the explored concentration range. The effect of the matrix on MS signals (suppression or enhancement) was also evaluated. Contamination at low levels was detected for some analytes in the fish samples. The protective role of the polypropylene membrane used in μ-SPE in the elimination of matrix effects was evaluated by parallel experiments with classical dispersive solid phase extraction. The results clearly showed that the polypropylene membrane was significantly effective in reducing matrix effects. Copyright © 2014 Elsevier B.V. All rights reserved.
Zill, Oliver A.; Sebisanovic, Dragan; Lopez, Rene; Blau, Sibel; Collisson, Eric A.; Divers, Stephen G.; Hoon, Dave S. B.; Kopetz, E. Scott; Lee, Jeeyun; Nikolinakos, Petros G.; Baca, Arthur M.; Kermani, Bahram G.; Eltoukhy, Helmy; Talasaz, AmirAli
2015-01-01
Next-generation sequencing of cell-free circulating solid tumor DNA addresses two challenges in contemporary cancer care. First, this method of massively parallel and deep sequencing enables assessment of a comprehensive panel of genomic targets from a single sample, and second, it obviates the need for repeat invasive tissue biopsies. Digital Sequencing™ is a novel method for high-quality sequencing of circulating tumor DNA simultaneously across a comprehensive panel of over 50 cancer-related genes with a simple blood test. Here we report the analytic and clinical validation of the gene panel. Analytic sensitivity down to 0.1% mutant allele fraction is demonstrated via serial dilution studies of known samples. Near-perfect analytic specificity (> 99.9999%) enables complete coverage of many genes without the false positives typically seen with traditional sequencing assays at mutant allele frequencies or fractions below 5%. We compared digital sequencing of plasma-derived cell-free DNA to tissue-based sequencing on 165 consecutive matched samples from five outside centers in patients with stage III-IV solid tumor cancers. Clinical sensitivity of plasma-derived NGS was 85.0%, comparable to 80.7% sensitivity for tissue. The assay success rate on 1,000 consecutive samples in clinical practice was 99.8%. Digital sequencing of plasma-derived DNA is indicated in advanced cancer patients to prevent repeated invasive biopsies when the initial biopsy is inadequate, unobtainable for genomic testing, or uninformative, or when the patient's cancer has progressed despite treatment. Its clinical utility is derived from reduction in the costs, complications and delays associated with invasive tissue biopsies for genomic testing. PMID:26474073
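The claimed 0.1% analytic sensitivity can be put in perspective with a simple binomial model of read sampling: at a given sequencing depth, the chance of drawing at least k mutant reads from a 0.1% allele fraction bounds what any caller can detect. A back-of-the-envelope sketch (generic statistics, not the Digital Sequencing error model; the depth and read threshold are hypothetical):

```python
from scipy.stats import binom

depth = 15000        # reads covering the position
maf = 0.001          # 0.1% mutant allele fraction
min_reads = 5        # hypothetical caller threshold

# Probability the mutant allele is sampled at least min_reads times
p_detect = binom.sf(min_reads - 1, depth, maf)
print(f"P(detect) = {p_detect:.4f}")   # ~0.999 at this depth
```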
Nahorniak, Michelle L; Booksh, Karl S
2006-12-01
A field portable, single exposure excitation-emission matrix (EEM) fluorometer has been constructed and used in conjunction with parallel factor analysis (PARAFAC) to determine the sub-part-per-billion (ppb) concentrations of several aqueous polycyclic aromatic hydrocarbons (PAHs), such as benzo(k)fluoranthene and benzo(a)pyrene, in various matrices including aqueous motor oil extract and asphalt leachate. Multiway methods like PARAFAC are essential to resolve the analyte signature from the ubiquitous background in environmental samples. With multiway data and PARAFAC analysis, it is shown that reliable concentration determinations can be achieved with minimal standards in spite of the large convoluting fluorescence background signal. Thus, rapid fieldable EEM analyses may prove to be a good screening method for tracking pollutants and for prioritizing sampling and analysis by more complete but time-consuming and labor-intensive EPA methods.
Introducing GAMER: A fast and accurate method for ray-tracing galaxies using procedural noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groeneboom, N. E.; Dahle, H., E-mail: nicolaag@astro.uio.no
2014-03-10
We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.
Introducing GAMER: A Fast and Accurate Method for Ray-tracing Galaxies Using Procedural Noise
NASA Astrophysics Data System (ADS)
Groeneboom, N. E.; Dahle, H.
2014-03-01
We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.
Line and point defects in nonlinear anisotropic solids
NASA Astrophysics Data System (ADS)
Golgoon, Ashkan; Yavari, Arash
2018-06-01
In this paper, we present some analytical solutions for the stress fields of nonlinear anisotropic solids with distributed line and point defects. In particular, we determine the stress fields of (i) a parallel cylindrically symmetric distribution of screw dislocations in infinite orthotropic and monoclinic media, (ii) a cylindrically symmetric distribution of parallel wedge disclinations in an infinite orthotropic medium, (iii) a distribution of edge dislocations in an orthotropic medium, and (iv) a spherically symmetric distribution of point defects in a transversely isotropic spherical ball.
A multioutput LLC-type parallel resonant converter
NASA Astrophysics Data System (ADS)
Liu, Rui; Lee, C. Q.; Upadhyay, Anand K.
1992-07-01
When an LLC-type parallel resonant converter (LLC-PRC) operates above resonant frequency, the switching transistors can be turned off at zero voltage. Further study reveals that the LLC-PRC possesses the advantage of lower converter voltage gain as compared with the conventional PRC. Based on analytic results, a complete set of design curves is obtained, from which a systematic design procedure is developed. Experimental results from a 150 W 150 kHz multioutput LLC-type PRC power supply are presented.
Heuristic and analytic processes in reasoning: an event-related potential study of belief bias.
Banks, Adrian P; Hope, Christopher
2014-03-01
Human reasoning involves both heuristic and analytic processes. This study of belief bias in relational reasoning investigated whether the two processes occur serially or in parallel. Participants evaluated the validity of problems in which the conclusions were either logically valid or invalid and either believable or unbelievable. Problems in which the conclusions presented a conflict between the logically valid response and the believable response elicited a more positive P3 than problems in which there was no conflict. This shows that P3 is influenced by the interaction of belief and logic rather than either of these factors on its own. These findings indicate that belief and logic influence reasoning at the same time, supporting models in which belief-based and logical evaluations occur in parallel but not theories in which belief-based heuristic evaluations precede logical analysis.
An Advanced Framework for Improving Situational Awareness in Electric Power Grid Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Huang, Zhenyu; Zhou, Ning
With the deployment of new smart grid technologies and the penetration of renewable energy in power systems, significant uncertainty and variability are being introduced into power grid operation. Traditionally, the Energy Management System (EMS) operates the power grid in a deterministic mode and thus will not be sufficient for the future control center in a stochastic environment with faster dynamics. One of the main challenges is to improve situational awareness. This paper reviews the current status of power grid operation and presents a vision of improving wide-area situational awareness for a future control center. An advanced framework, consisting of parallel state estimation, state prediction, parallel contingency selection, parallel contingency analysis, and advanced visual analytics, is proposed to provide the capabilities needed for better decision support by utilizing high performance computing (HPC) techniques and advanced visual analytic techniques. Research results are presented to support the proposed vision and framework.
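Among the framework's components, parallel contingency analysis is the most naturally data-parallel, since each outage case can be solved independently. A schematic sketch of that fan-out using Python's multiprocessing (the solver is a placeholder; a production EMS would dispatch real power-flow computations on an HPC cluster):

```python
from multiprocessing import Pool

def run_contingency(outage):
    """Placeholder: solve a power flow with one element removed and
    report any limit violations for that case."""
    # ... run a power-flow solver with `outage` applied ...
    return outage, f"violations for {outage}: none"

if __name__ == "__main__":
    contingencies = [f"line-{i}" for i in range(1000)]  # N-1 outage cases
    with Pool(processes=8) as pool:
        results = pool.map(run_contingency, contingencies)
    print(len(results), "contingency cases analyzed")
```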
On the stability of nongyrotropic ion populations - A first (analytic and simulation) assessment
NASA Technical Reports Server (NTRS)
Brinca, A. L.; Borda De Agua, L.; Winske, D.
1993-01-01
The wave and dispersion equations for perturbations propagating parallel to an ambient magnetic field in magnetoplasmas with nongyrotropic ion populations show, in general, the occurrence of coupling between the parallel (left- and right-hand circularly polarized electromagnetic and longitudinal electrostatic) eigenmodes of the associated gyrotropic medium. These interactions provide a means of linearly driving one mode with the free-energy sources of other modes in homogeneous media. Different types of nongyrotropy bring about distinct classes of coupling. The stability of a hydrogen magnetoplasma with anisotropic, nongyrotropic protons that couple only the electromagnetic modes to each other is investigated analytically (via solution of the derived dispersion equation) and numerically (via simulation with a hybrid code). Nongyrotropy enhances growth and enlarges the unstable spectral range relative to the corresponding gyrotropic situation. The relevance of the properties of nongyrotropic populations to space plasma environments is also discussed.
An Analytical Time-Domain Expression for the Net Ripple Produced by Parallel Interleaved Converters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian B.; Krein, Philip T.
We apply modular arithmetic and Fourier series to analyze the superposition of N interleaved triangular waveforms with identical amplitudes and duty ratios. Here, interleaving refers to the condition when a collection of periodic waveforms with identical periods are each uniformly phase-shifted across one period. The main result is a time-domain expression which provides an exact representation of the summed and interleaved triangular waveforms, where the peak amplitude and parameters of the time-periodic component are all specified in closed form. The analysis is general and can be used to study various applications in multi-converter systems. This model is unique not only in that it reveals a simple and intuitive expression for the net ripple, but also in that its derivation via modular arithmetic and Fourier series is distinct from prior approaches. The analytical framework is experimentally validated with a system of three parallel converters under time-varying operating conditions.
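The superposition itself is easy to reproduce numerically, which makes a useful cross-check on a closed-form expression: sum N identical triangular waveforms, each shifted by 1/N of a period, and measure the peak-to-peak amplitude of the result. A sketch using NumPy/SciPy (the paper's analytical expression is not reproduced here):

```python
import numpy as np
from scipy.signal import sawtooth

t = np.linspace(0, 1, 10000, endpoint=False)   # one switching period

def net_ripple(n, duty=0.5):
    """Peak-to-peak ripple of n interleaved unit triangle waves."""
    total = sum(sawtooth(2 * np.pi * (t - k / n), width=duty)
                for k in range(n))
    return total.max() - total.min()

for n in (1, 2, 3, 4):
    print(n, round(net_ripple(n), 3))
# Ripple shrinks and its frequency rises as N grows; for duty = 0.5,
# two-phase interleaving cancels exactly (a well-known ripple null).
```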
Analytical Assessment of Simultaneous Parallel Approach Feasibility from Total System Error
NASA Technical Reports Server (NTRS)
Madden, Michael M.
2014-01-01
In a simultaneous paired approach to closely-spaced parallel runways, a pair of aircraft flies in close proximity on parallel approach paths. The aircraft pair must maintain a longitudinal separation within a range that avoids wake encounters and, if one of the aircraft blunders, avoids collision. Wake avoidance defines the rear gate of the longitudinal separation. The lead aircraft generates a wake vortex that, with the aid of crosswinds, can travel laterally onto the path of the trail aircraft. As runway separation decreases, the wake has less distance to traverse to reach the path of the trail aircraft. The total system error of each aircraft further reduces this distance. The total system error is often modeled as a probability distribution function. Therefore, Monte-Carlo simulations are a favored tool for assessing a "safe" rear-gate. However, safety for paired approaches typically requires that a catastrophic wake encounter be a rare one-in-a-billion event during normal operation. Using a Monte-Carlo simulation to assert this event rarity with confidence requires a massive number of runs. Such large runs do not lend themselves to rapid turn-around during the early stages of investigation when the goal is to eliminate the infeasible regions of the solution space and to perform trades among the independent variables in the operational concept. One can employ statistical analysis using simplified models more efficiently to narrow the solution space and identify promising trades for more in-depth investigation using Monte-Carlo simulations. These simple, analytical models not only have to address the uncertainty of the total system error but also the uncertainty in navigation sources used to alert an abort of the procedure. This paper presents a method for integrating total system error, procedure abort rates, avionics failures, and surveillance errors into a statistical analysis that identifies the likely feasible runway separations for simultaneous paired approaches.
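As an illustration of the kind of simplified statistical model the paper advocates, suppose each aircraft's lateral total system error is Gaussian. The probability that the gap between the laterally transported wake and the trail aircraft closes to zero then reduces to a single normal-CDF evaluation instead of a billion-run Monte Carlo. A toy sketch with made-up numbers (this is not the paper's model; all parameters are hypothetical):

```python
from math import sqrt
from scipy.stats import norm

runway_sep = 230.0   # m, runway centerline spacing (hypothetical)
wake_drift = 150.0   # m, lateral wake transport for an assumed crosswind
sigma_lead = 20.0    # m, lateral TSE standard deviation, lead aircraft
sigma_trail = 20.0   # m, lateral TSE standard deviation, trail aircraft

# The lateral gap between wake position and trail aircraft is Gaussian
# with mean (runway_sep - wake_drift) and the two TSE variances combined.
mean_gap = runway_sep - wake_drift
sigma_gap = sqrt(sigma_lead**2 + sigma_trail**2)

p_encounter = norm.cdf(0.0, loc=mean_gap, scale=sigma_gap)
print(f"P(wake encounter) = {p_encounter:.2e}")
```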
NASA Astrophysics Data System (ADS)
Shi, X.
2015-12-01
As NSF has indicated, "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and the geosciences. With the exponential growth of geodata, scalable and high-performance computing for big data analytics has become an urgent challenge, because many research activities are constrained by software and tools that cannot complete the computation process. Heterogeneous geodata integration and analytics obviously magnify the complexity and the operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) architecture and the Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions that employ massive parallelism and hardware resources to achieve scalability and high performance for data-intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying solutions in massively parallel computing environments to achieve scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise of achieving scalability and high performance by exploiting task- and data-level parallelism that is not supported by conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data, as demonstrated by our prior work, while the potential of such advanced infrastructure remains unexplored in this domain. In this presentation, our prior and on-going initiatives are summarized to exemplify how we exploit multicore CPUs, GPUs, and MICs, and clusters of CPUs, GPUs, and MICs, to accelerate geocomputation in different applications.
MERRA/AS: The MERRA Analytic Services Project Interim Report
NASA Technical Reports Server (NTRS)
Schnase, John; Duffy, Dan; Tamkin, Glenn; Nadeau, Denis; Thompson, Hoot; Grieg, Cristina; Luczak, Ed; McInerney, Mark
2013-01-01
MERRA/AS is a cyberinfrastructure resource that will combine iRODS-based Climate Data Server (CDS) capabilities with Cloudera MapReduce to serve MERRA analytic products, store the MERRA reanalysis data collection in an HDFS to enable parallel, high-performance, storage-side data reductions, manage storage-side driver, mapper, and reducer code sets and realized objects for users, and provide a library of commonly used spatiotemporal operations that can be composed to enable higher-order analyses.
Power combining in an array of microwave power rectifiers
NASA Technical Reports Server (NTRS)
Gutmann, R. J.; Borrego, J. M.
1979-01-01
This work analyzes the resultant efficiency degradation when identical rectifiers operate at different RF power levels as caused by the power beam taper. Both a closed-form analytical circuit model and a detailed computer-simulation model are used to obtain the output dc load line of the rectifier. The efficiency degradation is nearly identical with series and parallel combining, and the closed-form analytical model provides results which are similar to the detailed computer-simulation model.
Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A
2017-04-01
In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for the parallelization and acceleration of computationally and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine is a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
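The observation that acceleration approaches the GPU count only when computation dominates data movement is the signature behavior of a simple analytical throughput model. The sketch below shows a generic model of that form; the paper's actual model and constants are not given in the abstract, so every number here is a placeholder:

```python
def predicted_time(work_gflop, gpus, gflops_per_gpu=4000.0,
                   transfer_gb=1.0, network_gbps=1.0):
    """Estimated wall time: compute scales with the GPU count, but the
    data transfer over the server interconnect does not."""
    compute = work_gflop / (gpus * gflops_per_gpu)
    transfer = transfer_gb * 8 / network_gbps   # seconds over a 1 Gbps link
    return compute + transfer

t1 = predicted_time(2.0e5, gpus=1)
for n in (2, 4, 8, 14):
    print(n, round(t1 / predicted_time(2.0e5, gpus=n), 2), "x speedup")
# Speedup saturates as the fixed transfer term dominates, mirroring the
# paper's observation about memory- versus compute-bound tasks.
```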
A Parallel Rendering Algorithm for MIMD Architectures
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.; Orloff, Tobias
1991-01-01
Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.
Dillon, Roslyn; Croner, Lisa J; Bucci, John; Kairs, Stefanie N; You, Jia; Beasley, Sharon; Blimline, Mark; Carino, Rochele B; Chan, Vicky C; Cuevas, Danissa; Diggs, Jeff; Jennings, Megan; Levy, Jacob; Mina, Ginger; Yee, Alvin; Wilcox, Bruce
2018-05-30
Early detection of colorectal cancer (CRC) is key to reducing associated mortality. Despite the importance of early detection, approximately 40% of individuals in the United States between the ages of 50 and 75 have never been screened for CRC. The low compliance with colonoscopy and fecal-based screening may be addressed with a non-invasive alternative such as a blood-based test. We describe here the analytical validation of a multiplexed blood-based assay that measures the plasma concentrations of 15 proteins to assess advanced adenoma (AA) and CRC risk in symptomatic patients. The test was developed on an electrochemiluminescent immunoassay platform employing four multi-marker panels, to be implemented in the clinic as a laboratory developed test (LDT). Under the Clinical Laboratory Improvement Amendments (CLIA) and College of American Pathologists (CAP) regulations, a United States-based clinical laboratory utilizing an LDT must establish performance characteristics relating to analytical validity prior to releasing patient test results. This report describes a series of studies demonstrating the precision, accuracy, analytical sensitivity, and analytical specificity for each of the 15 assays, as required by CLIA/CAP. In addition, the report describes studies characterizing each of the assays' dynamic range, parallelism, tolerance to common interfering substances, spike recovery, and stability to sample freeze-thaw cycles. Upon completion of the analytical characterization, a clinical accuracy study was performed to evaluate concordance of AA and CRC classifier model calls using the analytical method intended for use in the clinic. Of 434 symptomatic patient samples tested, the percent agreement with original CRC and AA calls was 87% and 92%, respectively. All studies followed CLSI guidelines and met the regulatory requirements for implementation of a new LDT. The results provide the analytical evidence to support the implementation of the novel multi-marker test as a clinical test for evaluating CRC and AA risk in symptomatic individuals. Copyright © 2018 Elsevier B.V. All rights reserved.
Wang, Zhonghe; Yu, Jing; Yao, Jiaxi; Wu, Linlin; Xiao, Hang; Wang, Jun; Gao, Rong
2018-02-10
A method for the identification and quantification of bisphenol A and 12 bisphenol analogues in river water and sediment samples combining liquid-liquid extraction, precolumn derivatization, and ultra high-performance liquid chromatography coupled with tandem mass spectrometry was developed and validated. Analytes were extracted from the river water sample using a liquid-liquid extraction method. Dansyl chloride was selected as a derivatization reagent. Derivatization reaction conditions affecting production of the dansyl derivatives were tested and optimized. All the derivatized target compounds were well separated and eluted in 10 min. Dansyl chloride labeled compounds were analyzed using a high-resolution mass spectrometer with electrospray ionization in the positive mode, and the results were confirmed and quantified in the parallel reaction monitoring mode. The method validation results showed a satisfactory level of sensitivity. Linearity was assessed using matrix-matched standard calibration, and good correlation coefficients were obtained. The limits of quantification for the analytes ranged from 0.005 to 0.02 ng/mL in river water and from 0.15 to 0.80 ng/g in sediment. Good reproducibility of the method in terms of intra- and interday precision was achieved, yielding relative standard deviations of less than 10.1 and 11.6%, respectively. Finally, this method was successfully applied to the analysis of real samples. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Buren, Mandula; Jian, Yongjun; Zhao, Yingchun; Chang, Long
2018-05-01
In this paper we analytically investigate the electroviscous effect and electrokinetic energy conversion in the time periodic pressure-driven flow of an incompressible viscous Newtonian liquid through a parallel-plate nanochannel with surface charge-dependent slip. Analytical and semi-analytical solutions for the electric potential, velocity, and streaming electric field are obtained and are utilized to compute the electrokinetic energy conversion efficiency. The results show that the velocity amplitude and energy conversion efficiency are reduced when the effect of surface charge on slip length is considered. The surface charge effect increases with zeta potential and ionic concentration. In addition, the energy conversion efficiency is large when the ratio of channel half-height to the electric double layer thickness is small. The boundary slip results in a large increase in energy conversion. Higher values of the frequency of pressure pulsation lead to higher values of the energy conversion efficiency. We also obtain the energy conversion efficiency in constant pressure-driven flow and find that the energy conversion efficiency in periodic pressure-driven flow becomes larger than that in constant pressure-driven flow when the frequency is large enough.
Mladic, Marija; Zietek, Barbara M; Iyer, Janaki Krishnamoorthy; Hermarij, Philip; Niessen, Wilfried M A; Somsen, Govert W; Kini, R Manjunatha; Kool, Jeroen
2016-02-01
Snake venoms comprise complex mixtures of peptides and proteins causing modulation of diverse physiological functions upon envenomation of the prey organism. The components of snake venoms are studied as research tools and as potential drug candidates. However, bioactivity determination with subsequent identification and purification of the bioactive compounds is a demanding and often laborious effort involving different analytical and pharmacological techniques. This study describes the development and optimization of an integrated analytical approach for activity profiling and identification of venom constituents targeting the cardiovascular system, the thrombin and factor Xa enzymes in particular. The approach developed encompasses reversed-phase liquid chromatography (RPLC) analysis of a crude snake venom with parallel mass spectrometry (MS) and bioactivity analysis. The analytical and pharmacological parts of this approach are linked by at-line nanofractionation. This implies that the bioactivity is assessed after high-resolution nanofractionation (6 s/well) onto high-density 384-well microtiter plates and subsequent freeze drying of the plates. The nanofractionation and bioassay conditions were optimized for maintaining LC resolution and achieving good bioassay sensitivity. The developed integrated analytical approach was successfully applied for the fast screening of snake venoms for compounds affecting thrombin and factor Xa activity. Parallel accurate MS measurements provided correlation of the observed bioactivity to peptide/protein masses. This resulted in the identification of a few interesting peptides with activity towards the drug target factor Xa from a screening campaign involving venoms of 39 snake species. Besides this, many positive protease activity peaks were observed in most venoms analysed. These protease fingerprint chromatograms were found to be similar for evolutionarily closely related species and as such might serve as generic snake protease bioactivity fingerprints in biological studies on venoms. Copyright © 2015 Elsevier Ltd. All rights reserved.
Rossotti, Martín; Tabares, Sofía; Alfaya, Lucía; Leizagoyen, Carmen; Moron, Gabriel; González-Sapienza, Gualberto
2015-01-01
BACKGROUND Owing to their minimal size, high production yield, versatility, and robustness, the recombinant variable domains (nanobodies) of camelid single-chain antibodies are valued affinity reagents for research, diagnostic, and therapeutic applications. While their preparation against purified antigens is straightforward, the generation of nanobodies to difficult targets such as multi-pass or complex membrane cell receptors remains challenging. Here we devised a platform for high throughput identification of nanobodies to cell receptors based on the use of a biotin handle. METHODS Using a biotin-acceptor peptide tag, the in vivo biotinylation of nanobodies in 96-well culture blocks was optimized, allowing their parallel analysis by flow cytometry and ELISA, and their direct use for pull-down/MS target identification. RESULTS The potential of this strategy was demonstrated by the selection and characterization of panels of nanobodies to the Mac-1 (CD11b/CD18), MHC II, and mouse Ly-5 leukocyte common antigen (CD45) receptors, from a VHH library obtained from a llama immunized with mouse bone marrow derived dendritic cells. By switching the addition of biotin on and off, the method also allowed the epitope binning of the selected nanobodies directly on cells. CONCLUSIONS This strategy streamlines the selection of potent nanobodies to complex antigens, and the selected nanobodies constitute ready-to-use biotinylated reagents. GENERAL SIGNIFICANCE This method will accelerate the discovery of nanobodies to cell membrane receptors, which comprise the largest group of drug and analytical targets. PMID:25819371
Parallelization of elliptic solver for solving 1D Boussinesq model
NASA Astrophysics Data System (ADS)
Tarwidi, D.; Adytia, D.
2018-03-01
In this paper, a parallel implementation of an elliptic solver for the 1D Boussinesq model is presented. The numerical solution of the Boussinesq model is obtained by applying a staggered-grid scheme to the continuity, momentum, and elliptic equations of the model. The tridiagonal system emerging from the numerical scheme for the elliptic equation is solved by the cyclic reduction algorithm. The parallel implementation of cyclic reduction is executed on multicore processors with shared-memory architecture using OpenMP. To measure the performance of the parallel program, the number of grid points is varied from 2^8 to 2^14. Two numerical test cases, the propagation of a solitary wave and of a standing wave, are used to evaluate the parallel program, and the numerical results are verified against the analytical solutions for solitary and standing waves. The best speedups for the solitary and standing wave test cases are about 2.07 with 2^14 grid points and 1.86 with 2^13 grid points, respectively, both obtained with 8 threads. The best efficiencies of the parallel program are 76.2% and 73.5% for the solitary and standing wave test cases, respectively.
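For readers unfamiliar with cyclic reduction, here is a minimal serial sketch in Python (NumPy) under the common assumption n = 2^k - 1; the inner loop at each reduction level touches independent equations, which is exactly the loop an OpenMP implementation like the one described above would parallelize. All names are illustrative.

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by cyclic reduction.
    a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side. Assumes n = 2**k - 1."""
    a, b, c, d = (np.asarray(v, float).copy() for v in (a, b, c, d))
    a[0] = 0.0; c[-1] = 0.0
    n = len(b)
    k = int(np.log2(n + 1))
    assert 2**k - 1 == n, "this sketch assumes n = 2**k - 1"
    s = 1
    for _ in range(k - 1):
        # Eliminate the odd-indexed unknowns at this level; every i in
        # this loop is independent -> this is the parallel (OpenMP) loop.
        for i in range(2 * s - 1, n - s, 2 * s):
            al, be = a[i] / b[i - s], c[i] / b[i + s]
            b[i] -= al * c[i - s] + be * a[i + s]
            d[i] -= al * d[i - s] + be * d[i + s]
            a[i], c[i] = -al * a[i - s], -be * c[i + s]
        s *= 2
    x = np.zeros(n)
    x[n // 2] = d[n // 2] / b[n // 2]        # single remaining unknown
    s = (n + 1) // 4
    while s >= 1:
        # Back-substitute one level; again all i are independent.
        for i in range(s - 1, n, 2 * s):
            xl = x[i - s] if i >= s else 0.0
            xr = x[i + s] if i + s < n else 0.0
            x[i] = (d[i] - a[i] * xl - c[i] * xr) / b[i]
        s //= 2
    return x

# Quick check against a dense solve on a diagonally dominant system.
n = 2**7 - 1
rng = np.random.default_rng(1)
a, c, d = rng.random(n), rng.random(n), rng.random(n)
b = 4.0 + rng.random(n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(cyclic_reduction(a, b, c, d), np.linalg.solve(A, d)))
```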
NASA Technical Reports Server (NTRS)
Sargent, Jeff Scott
1988-01-01
A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to controlling error in parallel cell-placement algorithms: Heuristic Cell-Coloring and Adaptive (Parallel Move) Sequence Control. Heuristic Cell-Coloring identifies sets of noninteracting cells that can be moved repeatedly, and in parallel, with no buildup of error in the placement cost. Adaptive Sequence Control allows multiple parallel cell moves to take place between global cell-position updates. This feedback mechanism is based on an error bound derived analytically from the traditional annealing move-acceptance profile. Placement results are presented for real industry circuits, and the performance of an implementation on the Intel iPSC/2 Hypercube is summarized. The runtime of this algorithm is 5 to 16 times faster than a previous program developed for the Hypercube, while producing placements of equivalent quality. An integrated place-and-route program for the Intel iPSC/2 Hypercube is currently being developed.
NASA Astrophysics Data System (ADS)
Macomber, B.; Woollands, R. M.; Probe, A.; Younes, A.; Bai, X.; Junkins, J.
2013-09-01
Modified Chebyshev Picard Iteration (MCPI) is an iterative numerical method for approximating solutions of linear or non-linear Ordinary Differential Equations (ODEs) to obtain time histories of system state trajectories. Unlike step-by-step differential equation solvers such as the Runge-Kutta family of numerical integrators, MCPI approximates long arcs of the state trajectory with an iterative path approximation approach, and is ideally suited to parallel computation. Orthogonal Chebyshev polynomials are used as basis functions during each path iteration; the integrations of the Picard iteration are then done analytically. Due to the orthogonality of the Chebyshev basis functions, the least-squares approximations are computed without matrix inversion; the coefficients are computed robustly from discrete inner products. As a consequence of the discrete sampling and weighting adopted for the inner product definition, Runge-phenomenon errors are minimized near the ends of the approximation intervals. The MCPI algorithm utilizes a vector-matrix framework for computational efficiency. Additionally, all Chebyshev coefficients and integrand function evaluations are independent, meaning they can be computed simultaneously in parallel for further decreased computational cost. Over an order of magnitude speedup over traditional methods is achieved in serial processing, and an additional order of magnitude is achievable in parallel architectures. This paper presents a new MCPI library, a modular toolset designed to allow MCPI to be easily applied to a wide variety of ODE systems. Library users will not have to concern themselves with the underlying mathematics behind the MCPI method. Inputs are the boundary conditions of the dynamical system, the integrand function governing system behavior, and the desired time interval of integration, and the output is a time history of the system states over the interval of interest. Examples from the field of astrodynamics are presented to compare the output of the MCPI library with current state-of-practice numerical integration methods. It is shown that MCPI is capable of out-performing the state-of-practice in terms of computational cost and accuracy.
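As an illustration of the core idea, not of the library's actual interface, here is a minimal scalar Chebyshev-Picard iteration in Python; the node count, iteration count, and all names are assumptions made for the sketch:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def mcpi_sketch(f, x0, t0, tf, n=64, iters=40):
    """Approximate x' = f(t, x), x(t0) = x0 on [t0, tf] by Picard
    iteration in a Chebyshev basis (scalar state for clarity)."""
    tau = np.cos(np.pi * np.arange(n + 1) / n)  # Chebyshev nodes in [-1, 1]
    t = 0.5 * ((tf - t0) * tau + (tf + t0))     # mapped to [t0, tf]
    w = 0.5 * (tf - t0)                         # dt/dtau
    x = np.full(n + 1, float(x0))               # initial trajectory guess
    for _ in range(iters):
        # The f evaluations at the nodes are independent: this is the
        # step that parallelizes.
        g = w * f(t, x)
        coef = C.chebfit(tau, g, n)             # discrete least-squares fit
        icoef = C.chebint(coef)                 # term-by-term (analytic) integral
        # Picard update: x(tau) = x0 + integral of g from -1 to tau.
        x = x0 + C.chebval(tau, icoef) - C.chebval(-1.0, icoef)
    return t, x

# Example: x' = -x, x(0) = 1; the result should track exp(-t) closely.
t, x = mcpi_sketch(lambda t, x: -x, 1.0, 0.0, 2.0)
print(np.max(np.abs(x - np.exp(-t))))
```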
SU-E-T-37: A GPU-Based Pencil Beam Algorithm for Dose Calculations in Proton Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalantzis, G; Leventouri, T; Tachibana, H
Purpose: Recent developments in radiation therapy have been focused on applications of charged particles, especially protons. Over the years several dose calculation methods have been proposed in proton therapy. A common characteristic of all these methods is their extensive computational burden. In the current study we present, for the first time to the best of our knowledge, a GPU-based PBA for proton dose calculations in Matlab. Methods: In the current study we employed an analytical expression for the proton depth-dose distribution. The central-axis term is taken from the broad-beam central-axis depth dose in water modified by an inverse square correction, while the distribution of the off-axis term was considered Gaussian. The serial code was implemented in MATLAB and was launched on a desktop with a quad core Intel Xeon X5550 at 2.67GHz with 8 GB of RAM. For the parallelization on the GPU, the parallel computing toolbox was employed and the code was launched on a GTX 770 with Kepler architecture. The performance comparison was established on the speedup factors. Results: The performance of the GPU code was evaluated for three different energies: low (50 MeV), medium (100 MeV) and high (150 MeV). Four square fields were selected for each energy, and the dose calculations were performed with both the serial and parallel codes for a homogeneous water phantom with size 300×300×300 mm³. The resolution of the PBs was set to 1.0 mm. The maximum speedup of ∼127 was achieved for the highest energy and the largest field size. Conclusion: A GPU-based PB algorithm for proton dose calculations in Matlab was presented. A maximum speedup of ∼127 was achieved. Future directions of the current work include the extension of our method to dose calculation in heterogeneous phantoms.
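The pencil-beam factorization the abstract describes (a central-axis depth-dose term times a lateral Gaussian, with an inverse-square correction) can be sketched in a few lines. The code below is Python rather than the authors' Matlab, and the depth-dose curve and all parameters are toy assumptions, not clinical data; the elementwise outer product over the dose grid is the part that maps naturally onto a GPU.

```python
import numpy as np

def pb_dose(nx=128, ny=128, nz=128, dv=1.0, sigma=5.0, ssd=1000.0):
    """Toy analytical pencil beam on an nx*ny*nz grid of dv-mm voxels:
    dose(x, y, z) = lateral Gaussian(x, y) * central-axis depth dose(z)."""
    z = (np.arange(nz) + 0.5) * dv                   # depth [mm]
    x = (np.arange(nx) - nx / 2 + 0.5) * dv
    y = (np.arange(ny) - ny / 2 + 0.5) * dv
    zp = 100.0                                       # toy Bragg-like peak depth
    dd = np.exp(-(z - zp) ** 2 / (2 * 15.0 ** 2)) + 0.3 * (z < zp)
    dd *= (ssd / (ssd + z)) ** 2                     # inverse-square correction
    X, Y = np.meshgrid(x, y, indexing="ij")
    lat = np.exp(-(X**2 + Y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return lat[:, :, None] * dd[None, None, :]       # (nx, ny, nz) dose grid

dose = pb_dose()
print(dose.shape, dose.max())
```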
NASA Astrophysics Data System (ADS)
Mackowski, Daniel; Ramezanpour, Bahareh
2018-07-01
A formulation is developed for numerically solving the frequency domain Maxwell's equations in plane parallel layers of inhomogeneous media. As was done in a recent work [1], the plane parallel layer is modeled as an infinite square lattice of W × W × H unit cells, with W being a sample width of the layer and H the layer thickness. As opposed to the 3D volume integral/discrete dipole formulation, the derivation begins with a Fourier expansion of the electric field amplitude in the lateral plane, and leads to a coupled system of 1D ordinary differential equations in the depth direction of the layer. A 1D dyadic Green's function is derived for this system and used to construct a set of coupled 1D integral equations for the field expansion coefficients. The resulting mathematical formulation is considerably simpler and more compact than that derived, for the same system, using the discrete dipole approximation applied to the periodic plane lattice. Furthermore, the fundamental property variable appearing in the formulation is the Fourier transformed complex permittivity distribution in the unit cell, and the method obviates any need to define or calculate a dipole polarizability. Although designed primarily for random media calculations, the method is also capable of predicting the single scattering properties of individual particles; comparisons are presented to demonstrate that the method can accurately reproduce, at scattering angles not too close to 90°, the polarimetric scattering properties of single and multiple spheres. The derivation of the dyadic Green's function allows for an analytical preconditioning of the equations, and it is shown that this can result in significantly accelerated solution times when applied to densely-packed systems of particles. Calculation results demonstrate that the method, when applied to inhomogeneous media, can predict coherent backscattering and polarization opposition effects.
Intelligent failure-tolerant control
NASA Technical Reports Server (NTRS)
Stengel, Robert F.
1991-01-01
An overview of failure-tolerant control is presented, beginning with robust control, progressing through parallel and analytical redundancy, and ending with rule-based systems and artificial neural networks. By design or implementation, failure-tolerant control systems are 'intelligent' systems. All failure-tolerant systems require some degrees of robustness to protect against catastrophic failure; failure tolerance often can be improved by adaptivity in decision-making and control, as well as by redundancy in measurement and actuation. Reliability, maintainability, and survivability can be enhanced by failure tolerance, although each objective poses different goals for control system design. Artificial intelligence concepts are helpful for integrating and codifying failure-tolerant control systems, not as alternatives but as adjuncts to conventional design methods.
Dahmen, Tim; Kohr, Holger; de Jonge, Niels; Slusallek, Philipp
2015-06-01
Combined tilt- and focal series scanning transmission electron microscopy is a recently developed method to obtain nanoscale three-dimensional (3D) information of thin specimens. In this study, we formulate the forward projection in this acquisition scheme as a linear operator and prove that it is a generalization of the Ray transform for parallel illumination. We analytically derive the corresponding backprojection operator as the adjoint of the forward projection. We further demonstrate that the matched backprojection operator drastically improves the convergence rate of iterative 3D reconstruction compared to the case where a backprojection based on heuristic weighting is used. In addition, we show that the 3D reconstruction is of better quality.
Principles for problem aggregation and assignment in medium scale multiprocessors
NASA Technical Reports Server (NTRS)
Nicol, David M.; Saltz, Joel H.
1987-01-01
One of the most important issues in parallel processing is the mapping of workload to processors. This paper considers a large class of problems having a high degree of potential fine-grained parallelism, and execution requirements that are either not predictable, or are too costly to predict. The main issues in mapping such a problem onto medium-scale multiprocessors are those of aggregation and assignment. We study a method of parameterized aggregation that makes few assumptions about the workload. The mapping of aggregate units of work onto processors is uniform, and exploits locality of workload intensity to balance the unknown workload. In general, a finer aggregate granularity leads to a better balance at the price of increased communication/synchronization costs; the aggregation parameters can be adjusted to find a reasonable granularity. The effectiveness of this scheme is demonstrated on three model problems: an adaptive one-dimensional fluid dynamics problem with message passing, a sparse triangular linear system solver on both a shared-memory and a message-passing machine, and a two-dimensional time-driven battlefield simulation employing message passing. Using the model problems, the tradeoffs between balanced workload and communication/synchronization costs are studied. Finally, an analytical model is used to explain why the method balances workload and minimizes the variance in system behavior.
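A toy version of parameterized aggregation, written in Python with invented names, illustrates the granularity tradeoff the paper studies: blocks of consecutive fine-grained units are dealt cyclically to processors, and shrinking the grain spreads an unknown hot spot more evenly at the cost of more block boundaries (communication).

```python
import numpy as np

def aggregate_and_assign(weights, n_procs, grain):
    """Group units into blocks of `grain` consecutive units and deal
    the blocks to processors cyclically (a uniform mapping)."""
    n = len(weights)
    blocks = [range(i, min(i + grain, n)) for i in range(0, n, grain)]
    loads = np.zeros(n_procs)
    for b, block in enumerate(blocks):
        loads[b % n_procs] += sum(weights[i] for i in block)
    return loads

# Workload with an unpredictable hot spot in the last 10% of units.
w = np.r_[np.ones(900), 50 * np.ones(100)]
for grain in (100, 10, 1):
    loads = aggregate_and_assign(w, n_procs=8, grain=grain)
    print(grain, round(loads.max() / loads.mean(), 2))  # imbalance factor
```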
Knepper, Andreas; Heiser, Michael; Glauche, Florian; Neubauer, Peter
2014-12-01
The enormous number of possible variations in a bioprocess challenges process development to fix a commercial process within constraints of cost and time. Although some cultivation systems and some devices for unit operations combine the latest technology in miniaturization, parallelization, and sensing, the degree of automation in upstream and downstream bioprocess development is still limited to single steps. We aim to face this challenge with an interdisciplinary approach to significantly shorten development times and costs. As a first step, we scaled down analytical assays to the microliter scale and created automated procedures for starting the cultivation and monitoring the optical density (OD), pH, concentrations of glucose and acetate in the culture medium, and product formation in fed-batch cultures in the 96-well format. Then, the separate measurements of pH, OD, and concentrations of acetate and glucose were combined into one method. This method enables automated process monitoring at dedicated intervals (e.g., also during the night). By this approach, we managed to increase the information content of cultivations in 96-microwell plates, thus turning them into a suitable tool for high-throughput bioprocess development. Here, we present the flowcharts as well as cultivation data of our automation approach. © 2014 Society for Laboratory Automation and Screening.
Data-Driven Significance Estimation for Precise Spike Correlation
Grün, Sonja
2009-01-01
The mechanisms underlying neuronal coding and, in particular, the role of temporal spike coordination are hotly debated. However, this debate is often confounded by an implicit discussion about the use of appropriate analysis methods. To avoid incorrect interpretation of data, the analysis of simultaneous spike trains for precise spike correlation needs to be properly adjusted to the features of the experimental spike trains. In particular, nonstationarity of the firing of individual neurons in time or across trials, a spike train structure deviating from Poisson, or a co-occurrence of such features in parallel spike trains are potent generators of false positives. Problems can be avoided by including these features in the null hypothesis of the significance test. In this context, the use of surrogate data becomes increasingly important, because the complexity of the data typically prevents analytical solutions. This review provides an overview of the potential obstacles in the correlation analysis of parallel spike data and possible routes to overcome them. The discussion is illustrated at every stage of the argument by referring to a specific analysis tool (the Unitary Events method). The conclusions, however, are of a general nature and hold for other analysis techniques. Thorough testing and calibration of analysis tools and the impact of potentially erroneous preprocessing stages are emphasized. PMID:19129298
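A minimal example of the surrogate-data logic discussed in this review, using spike dithering as the surrogate (one of several options; the Unitary Events method itself uses a more carefully constructed null model). All numbers and names here are illustrative.

```python
import numpy as np
rng = np.random.default_rng(0)

def coincidences(a, b, dt=0.005):
    """Count spikes in train `a` with a partner in `b` within +/- dt s."""
    return sum(np.any(np.abs(b - t) <= dt) for t in a)

def surrogate_pvalue(a, b, n_surr=1000, jitter=0.02, dt=0.005):
    """Dither each spike of b by +/- jitter to destroy fine temporal
    coordination while roughly preserving slower rate structure, then
    compare the observed coincidence count against this null."""
    obs = coincidences(a, b, dt)
    null = [coincidences(a, b + rng.uniform(-jitter, jitter, b.size), dt)
            for _ in range(n_surr)]
    return (1 + sum(c >= obs for c in null)) / (1 + n_surr)

# Two 10-s spike trains with 20 injected near-coincident spikes.
a = np.sort(rng.uniform(0, 10, 80))
b = np.sort(np.r_[a[:20] + rng.normal(0, 0.001, 20), rng.uniform(0, 10, 60)])
print(surrogate_pvalue(a, b))   # small p-value: synchrony is detected
```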
Ef: Software for Nonrelativistic Beam Simulation by Particle-in-Cell Algorithm
NASA Astrophysics Data System (ADS)
Boytsov, A. Yu.; Bulychev, A. A.
2018-04-01
Understanding of particle dynamics is crucial in the construction of electron guns, ion sources, and other types of nonrelativistic beam devices. Apart from external guiding and focusing systems, a prominent role in the evolution of such low-energy beams is played by particle-particle interaction. Numerical simulations taking these effects into account are typically accomplished by the well-known particle-in-cell method. In practice, for convenient work a simulation program should not only implement this method, but also support parallelization, provide integration with CAD systems, and allow access to details of the simulation algorithm. To address these requirements, development of a new open-source code, Ef, has been started. Its current features and main functionality are presented. Comparison with several analytical models demonstrates good agreement between the numerical results and the theory. Further development plans are discussed.
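For orientation, the particle-in-cell cycle the abstract refers to consists of four steps per time step: deposit particle charge on a grid, solve the field equation, gather fields back to the particle positions, and push the particles. The sketch below is a generic 1D electrostatic toy in Python (periodic domain, normalized units), not the Ef code or its API:

```python
import numpy as np

def pic_step(x, v, qm, L, ng, dt):
    """One 1D electrostatic particle-in-cell step on a periodic domain:
    cloud-in-cell deposit, FFT Poisson solve, gather, leapfrog push."""
    dx = L / ng
    g = x / dx
    i = np.floor(g).astype(int) % ng
    w = g - np.floor(g)
    rho = np.zeros(ng)                      # 1) charge deposition (CIC)
    np.add.at(rho, i, 1 - w)
    np.add.at(rho, (i + 1) % ng, w)
    rho = rho / dx
    rho -= rho.mean()                       # neutralizing background
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)                 # 2) field solve: ik E_k = rho_k
    E_k = np.zeros_like(rho_k)
    E_k[1:] = rho_k[1:] / (1j * k[1:])
    E = np.fft.ifft(E_k).real
    Ep = (1 - w) * E[i] + w * E[(i + 1) % ng]   # 3) gather to particles
    v = v + qm * Ep * dt                        # 4) leapfrog push
    x = (x + v * dt) % L
    return x, v
```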
Cobalt adatoms on graphene: Effects of anisotropies on the correlated electronic structure
NASA Astrophysics Data System (ADS)
Mozara, R.; Valentyuk, M.; Krivenko, I.; Şaşıoǧlu, E.; Kolorenč, J.; Lichtenstein, A. I.
2018-02-01
Impurities on surfaces experience a geometric symmetry breaking induced not only by the on-site crystal-field splitting and the orbital-dependent hybridization, but also by different screening of the Coulomb interaction in different directions. We present a many-body study of the Anderson impurity model representing a Co adatom on graphene, taking into account all anisotropies of the effective Coulomb interaction, which we obtained by the constrained random-phase approximation. The most pronounced differences are naturally displayed by the many-body self-energy projected onto the single-particle states. For the solution of the Anderson impurity model and analytical continuation of the Matsubara data, we employed new implementations of the continuous-time hybridization expansion quantum Monte Carlo and the stochastic optimization method, and we verified the results in parallel with the exact diagonalization method.
Nanostructured 2D cellular materials in silicon by sidewall transfer lithography NEMS
NASA Astrophysics Data System (ADS)
Syms, Richard R. A.; Liu, Dixi; Ahmad, Munir M.
2017-07-01
Sidewall transfer lithography (STL) is demonstrated as a method for parallel fabrication of 2D nanostructured cellular solids in single-crystal silicon. The linear mechanical properties of four lattices (perfect and defected diamond; singly and doubly periodic honeycomb) with low effective Young’s moduli and effective Poisson’s ratio ranging from positive to negative are modelled using analytic theory and the matrix stiffness method with an emphasis on boundary effects. The lattices are fabricated with a minimum feature size of 100 nm and an aspect ratio of 40:1 using single- and double-level STL and deep reactive ion etching of bonded silicon-on-insulator. Nanoelectromechanical systems (NEMS) containing cellular materials are used to demonstrate stretching, bending and brittle fracture. Predicted edge effects are observed, theoretical values of Poisson’s ratio are verified and failure patterns are described.
Comparison of Virtual Oscillator and Droop Control: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian B; Rodriguez, Miguel; Dhople, Sairaj
Virtual oscillator control (VOC) and droop control (DC) are two techniques that can be used to ensure synchronization and power sharing of parallel inverters in islanded operation. VOC relies on the implementation of non-linear Van der Pol oscillator equations in the control system of the inverter, acting upon the time-domain instantaneous inverter current and terminal voltage. On the other hand, DC explicitly computes the active and reactive power produced by the inverter and relies on limited-bandwidth low-pass filters. Even though both methods can be engineered to produce the same steady-state characteristics, their dynamic performances are significantly different. This paper presents analytical and experimental results that aim to compare both methods. It is shown that VOC is inherently faster and enables minimizing the circulating currents. The results are verified using three 120V, 1kW inverters.
Reflection of solar radiation by a cylindrical cloud
NASA Technical Reports Server (NTRS)
Smith, G. L.
1989-01-01
Potential applications of an analytic method for computing the solar radiation reflected by a cylindrical cloud are discussed, including studies of radiative transfer within finite clouds and evaluations of these effects on other clouds and on remote sensing problems involving finite clouds. The pattern of reflected sunlight from a cylindrical cloud as seen at a large distance has been considered and described by the bidirectional function method for finite cloud analysis, as previously studied theoretically for plane-parallel atmospheres by McKee and Cox (1974); Schmetz and Raschke (1981); and Stuhlmann et al. (1985). However, the lack of three-dimensional radiative transfer solutions for anisotropic scattering media has hampered theoretical investigations of bidirectional functions for finite clouds. The present approach permits expression of the directional variation of the radiation field as a spherical harmonic series to any desired degree and order.
Long-term detection of methyltestosterone (ab-) use by a yeast transactivation system.
Wolf, Sylvi; Diel, Patrick; Parr, Maria Kristina; Rataj, Felicitas; Schänzer, Willhelm; Vollmer, Günter; Zierau, Oliver
2011-04-01
The routinely used analytical method for detecting the abuse of anabolic steroids only allows the detection of molecules with known analytical properties. In our supplementary approach to structure-independent detection, substances are identified by their biological activity. In the present study, urines excreted after oral methyltestosterone (MT) administration were analyzed by a yeast androgen screen (YAS). The aim was to trace the excretion of MT or its metabolites in human urine samples and to compare the results with those from the established analytical method. MT and its two major metabolites were tested as pure compounds in the YAS. In a second step, the ability of the YAS to detect MT and its metabolites in urine samples was analyzed. For this purpose, a human volunteer ingested a single dose of 5 mg of methyltestosterone. Urine samples were collected after different time intervals (0-307 h) and were analyzed in the YAS and in parallel by GC/MS. Whereas the YAS was able to trace MT in urine samples for at least 14 days, the detection limits of the GC/MS method allowed follow-up until day six. In conclusion, our results demonstrate that the yeast reporter gene system could detect the activity of anabolic steroids like methyltestosterone with high sensitivity, even in urine. Furthermore, the YAS was able to detect MT abuse for a longer period of time than classical GC/MS. Evidently, the system responds to long-lasting metabolites that are as yet unidentified. Therefore, the YAS can be a powerful (pre-)screening tool with the potential to be used to identify persistent or late-screening metabolites of anabolic steroids, which could be used to enhance the sensitivity of GC/MS detection techniques.
NASA Technical Reports Server (NTRS)
Dongarra, Jack (Editor); Messina, Paul (Editor); Sorensen, Danny C. (Editor); Voigt, Robert G. (Editor)
1990-01-01
Attention is given to such topics as an evaluation of block algorithm variants in LAPACK, a large-grain parallel sparse system solver, a multiprocessor method for the solution of the generalized eigenvalue problem on an interval, and a parallel QR algorithm for iterative subspace methods on the CM2. A discussion of numerical methods includes the topics of asynchronous numerical solutions of PDEs on parallel computers, parallel homotopy curve tracking on a hypercube, and solving Navier-Stokes equations on the Cedar Multi-Cluster system. A section on differential equations includes a discussion of a six-color procedure for the parallel solution of elliptic systems using the finite quadtree structure, data-parallel algorithms for the finite element method, and domain decomposition methods in aerodynamics. Topics dealing with massively parallel computing include hypercube vs. 2-dimensional meshes and massively parallel computation of conservation laws. Performance and tools are also discussed.
Bergquist, J; Vona, M J; Stiller, C O; O'Connor, W T; Falkenberg, T; Ekman, R
1996-03-01
The use of capillary electrophoresis with laser-induced fluorescence detection (CE-LIF) for the analysis of microdialysate samples from the periaqueductal grey matter (PAG) of freely moving rats is described. By employing 3-(4-carboxybenzoyl)-2-quinoline-carboxaldehyde (CBQCA) as a derivatization agent, we simultaneously monitored the concentrations of 8 amino acids (arginine, glutamine, valine, gamma-amino-n-butyric acid (GABA), alanine, glycine, glutamate, and aspartate), with nanomolar and subnanomolar detection limits. Two of the amino acids (GABA and glutamate) were analysed in parallel by conventional high-performance liquid chromatography (HPLC) in order to directly compare the two analytical methods. Other CE methods for analysis of microdialysate have been previously described, and this improved method offers greater sensitivity, ease of use, and the possibility to monitor several amino acids simultaneously. By using this technique together with an optimised form of microdialysis technique, the tiny sample consumption and the improved detection limits permit the detection of fast and transient transmitter changes.
An Analysis of Machine- and Human-Analytics in Classification.
Tam, Gary K L; Kothari, Vivek; Chen, Min
2017-01-01
In this work, we present a study that traces the technical and cognitive processes in two visual analytics applications to a common theoretic model of soft knowledge that may be added into a visual analytics process for constructing a decision-tree model. Both case studies involved the development of classification models based on the "bag of features" approach. Both compared a visual analytics approach using parallel coordinates with a machine-learning approach using information theory. Both found that the visual analytics approach had some advantages over the machine learning approach, especially when sparse datasets were used as the ground truth. We examine various possible factors that may have contributed to such advantages, and collect empirical evidence for supporting the observation and reasoning of these factors. We propose an information-theoretic model as a common theoretic basis to explain the phenomena exhibited in these two case studies. Together we provide interconnected empirical and theoretical evidence to support the usefulness of visual analytics.
A new method for multi-bit and qudit transfer based on commensurate waveguide arrays
NASA Astrophysics Data System (ADS)
Petrovic, J.; Veerman, J. J. P.
2018-05-01
Faithful state transfer is an important requirement in the construction of classical and quantum computers. While high-speed transfer is realized by optical-fibre interconnects, its implementation in integrated optical circuits is affected by cross-talk. The cross-talk between densely packed optical waveguides limits the transfer fidelity and distorts the signal in each channel, thus severely impeding the parallel transfer of states such as classical registers, multiple qubits, and qudits. Here, we leverage suitably engineered cross-talk between waveguides to achieve parallel transfer on an optical chip. Waveguide coupling coefficients are designed to yield commensurate eigenvalues of the array and hence periodic revivals of the input state. While, in general, polynomially complex, the inverse eigenvalue problem permits analytic solutions for a small number of waveguides. We present exact solutions for arrays of up to nine waveguides and use them to design realistic buses for multi-(qu)bit and qudit transfer. Advantages and limitations of the proposed solution are discussed in the context of available fabrication techniques.
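The design principle, coupling coefficients chosen so that the array's eigenvalues are commensurate, is easy to check numerically. The sketch below uses the classic sqrt(j(N-j)) coupling profile known from perfect-state-transfer chains as a stand-in for the paper's exact solutions; with it the eigenvalues are equally spaced, the input state revives at t = π, and it arrives mirrored at t = π/2 (Python, illustrative units).

```python
import numpy as np
from scipy.linalg import expm

N = 9
j = np.arange(1, N)
c = np.sqrt(j * (N - j))                  # commensurate coupling profile
C = np.diag(c, 1) + np.diag(c, -1)        # tridiagonal coupled-mode matrix

print(np.round(np.linalg.eigvalsh(C), 6)) # equally spaced eigenvalues

psi0 = np.zeros(N); psi0[0] = 1.0         # light injected in waveguide 0
for t in (np.pi / 2, np.pi):
    p = np.abs(expm(-1j * t * C) @ psi0) ** 2
    print(np.round(p, 3))   # t=pi/2: mirrored to guide N-1; t=pi: revival
```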
Turbo Trellis Coded Modulation With Iterative Decoding for Mobile Satellite Communications
NASA Technical Reports Server (NTRS)
Divsalar, D.; Pollara, F.
1997-01-01
In this paper, analytical bounds on the performance of parallel concatenation of two codes, known as turbo codes, and serial concatenation of two codes over fading channels are obtained. Based on this analysis, design criteria for the selection of component trellis codes for MPSK modulation, and a suitable bit-by-bit iterative decoding structure, are proposed. Examples are given for a throughput of 2 bits/sec/Hz with 8PSK modulation. The parallel concatenation example uses two rate 4/5 8-state convolutional codes with two interleavers. The convolutional codes' outputs are then mapped to two 8PSK modulations. The serial concatenated code example uses an 8-state outer code with rate 4/5 and a 4-state inner trellis code with 5 inputs and 2 x 8PSK outputs per trellis branch. Based on the above-mentioned design criteria for fading channels, a method to obtain the structure of the trellis code with maximum diversity is proposed. Simulation results are given for AWGN and an independent Rayleigh fading channel with perfect Channel State Information (CSI).
Aperture-based antihydrogen gravity experiment: Parallel plate geometry
NASA Astrophysics Data System (ADS)
Rocha, J. R.; Hedlof, R. M.; Ordonez, C. A.
2013-10-01
An analytical model and a Monte Carlo simulation are presented of an experiment that could be used to determine the direction of the acceleration of antihydrogen due to gravity. The experiment would rely on methods developed by existing antihydrogen research collaborations. The configuration consists of two circular, parallel plates that have an axis of symmetry directed away from the center of the earth. The plates are separated by a small vertical distance, and include one or more pairs of circular barriers that protrude from the upper and lower plates, thereby forming an aperture between the plates. Antihydrogen annihilations that occur just beyond each barrier, within a "shadow" region, are asymmetric on the upper plate relative to the lower plate. The probability for such annihilations is determined for a point, line and spheroidal source of antihydrogen. The production of 100,000 antiatoms is predicted to be necessary for the aperture-based experiment to indicate the direction of free fall acceleration of antimatter, provided that antihydrogen is produced within a sufficiently small antiproton plasma at a temperature of 4 K.
NASA Astrophysics Data System (ADS)
Bai, Xue-Mei; Liu, Tie; Liu, De-Long; Wei, Yong-Ju
2018-02-01
A chemometrics-assisted excitation-emission matrix (EEM) fluorescence method was proposed for the simultaneous determination of α-asarone and β-asarone in Acorus tatarinowii. Using the strategy of combining EEM data with chemometrics methods, the simultaneous determination of α-asarone and β-asarone in this complex Traditional Chinese Medicine system was achieved successfully, even in the presence of unexpected interferents. The physical or chemical separation step was avoided due to the use of "mathematical separation". Six second-order calibration methods were used, including parallel factor analysis (PARAFAC), alternating trilinear decomposition (ATLD), alternating penalty trilinear decomposition (APTLD), self-weighted alternating trilinear decomposition (SWATLD), and unfolded partial least-squares (U-PLS) and multidimensional partial least-squares (N-PLS) with residual bilinearization (RBL). In addition, an HPLC method was developed to further validate the presented strategy. Consequently, for the validation samples, the analytical results obtained by the six second-order calibration methods were all reasonably accurate. For the Acorus tatarinowii samples, however, the results indicated a slightly better predictive ability of the N-PLS/RBL procedure over the other methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chuang, J.C.; Kuhlman, M.R.; Hannan, S.W.
1987-11-01
The objective of this project was to evaluate a potential collection medium, XAD-4 resin, for collecting nicotine and polynuclear aromatic hydrocarbons (PAH) and to determine whether one collection system and one analytical method would allow quantification of both compound classes in air. The extraction efficiency study was designed to determine an extraction method that quantitatively removes nicotine and PAH from XAD-4 resin. The results showed that a two-step Soxhlet extraction consisting of dichloromethane followed by ethyl acetate gave the best recoveries for both nicotine and PAH. In the sampling efficiency study, XAD-2 and XAD-4 resin were compared, in parallel, for collection of PAH and nicotine. Quartz fiber filters were placed upstream of both adsorbents to collect particles. Prior to sampling, both XAD-2 and XAD-4 traps were spiked with known amounts (2 microgram) of perdeuterated PAH and D3-nicotine. The experiments were performed under cigarette smoking and nonsmoking conditions. The spiked PAH were retained well in both adsorbents after exposure to more than 300 cu. m. of indoor air. The spiked XAD-4 resin gave higher recoveries for D3-nicotine than did the spiked XAD-2 resin. The collection efficiency for PAH is very similar for both adsorbents, but higher levels of nicotine were collected on XAD-4 resin.
Rothenhöfer, Martin; Scherübl, Rosmarie; Bernhardt, Günther; Heilmann, Jörg; Buschauer, Armin
2012-07-27
Purified oligomers of hyalobiuronic acid are indispensable tools to elucidate the physiological and pathophysiological role of hyaluronan degradation by various hyaluronidase isoenzymes. Therefore, we established and validated a novel sensitive, convenient, rapid, and cost-effective high performance thin layer chromatography (HPTLC) method for the qualitative and quantitative analysis of small saturated hyaluronan oligosaccharides consisting of 2-4 hyalobiuronic acid moieties. The use of amino-modified silica as the stationary phase allows a simple reagent-free in situ derivatization by heating, resulting in a very low limit of detection (7-19 pmol per band, depending on the analyzed saturated oligosaccharide). With this derivatization procedure, densitometric quantification of the analytes could be performed by HPTLC for the first time. The validated method showed a quantification limit of 37-71 pmol per band and was proven superior to conventional detection of hyaluronan oligosaccharides. The analytes were identified by hyphenation of normal-phase planar chromatography to mass spectrometry (TLC-MS) using electrospray ionization. As an alternative to sequential techniques such as high performance liquid chromatography (HPLC) and capillary electrophoresis (CE), the validated HPTLC quantification method can easily be automated and is applicable to the analysis of multiple samples in parallel. Copyright © 2012 Elsevier B.V. All rights reserved.
Wang, Li; Zhang, Zhujun; Huang, Lianggao
2008-03-01
A new molecularly imprinted polymer (MIP)-chemiluminescence (CL) imaging detection approach towards chiral recognition of dansyl-phenylalanine (Phe) is presented. The polymer microspheres were synthesized using precipitation polymerization with dansyl-L-Phe as template. Polymer microspheres were immobilized in microtiter plates (96 wells) using poly(vinyl alcohol) (PVA) as glue. The analyte was selectively adsorbed on the MIP microspheres. After washing, the bound fraction was quantified based on peroxyoxalate chemiluminescence (PO-CL) analysis. In the presence of dansyl-Phe, bis(2,4,6-trichlorophenyl)oxalate (TCPO) reacted with hydrogen peroxide (H2O2) to emit chemiluminescence. The signal was detected and quantified with a highly sensitive cooled charge-coupled device (CCD). Influencing factors were investigated and optimized in detail. Control experiments using capillary electrophoresis showed that there was no significant difference between the proposed method and the control method at a confidence level of 95%. The method can perform 96 independent measurements simultaneously in 30 min and the limits of detection (LODs) for dansyl-L-Phe and dansyl-D-Phe were 0.025 micromol L(-1) and 0.075 micromol L(-1) (3sigma), respectively. The relative standard deviation (RSD) for 11 parallel measurements of dansyl-L-Phe (0.78 micromol L(-1)) was 8%. The results show that MIP-based CL imaging can become a useful analytical technology for quick chiral recognition.
Noyes, Aaron; Huffman, Ben; Godavarti, Ranga; Titchener-Hooker, Nigel; Coffman, Jonathan; Sunasara, Khurram; Mukhopadhyay, Tarit
2015-08-01
The biotech industry is under increasing pressure to decrease both time to market and development costs. Simultaneously, regulators are expecting increased process understanding. High throughput process development (HTPD) employs small volumes, parallel processing, and high throughput analytics to reduce development costs and speed the development of novel therapeutics. As such, HTPD is increasingly viewed as integral to improving developmental productivity and deepening process understanding. Particle conditioning steps such as precipitation and flocculation may be used to aid the recovery and purification of biological products. In this first of two articles, we describe an ultra scale-down (USD) system for high throughput particle conditioning (HTPC) composed of off-the-shelf components. The apparatus comprises a temperature-controlled microplate with magnetically driven stirrers, integrated with a Tecan liquid handling robot. With this system, 96 individual reaction conditions can be evaluated in parallel, including downstream centrifugal clarification. A comprehensive suite of high throughput analytics enables measurement of product titer, product quality, impurity clearance, clarification efficiency, and particle characterization. HTPC at the 1 mL scale was evaluated with fermentation broth containing a vaccine polysaccharide. The response profile was compared with the pilot-scale performance of a non-geometrically similar, 3 L reactor. An engineering characterization of the reactors and the scale-up context examines theoretical considerations for comparing this USD system with larger-scale stirred reactors. In the second paper, we will explore application of this system to industrially relevant vaccines and test different scale-up heuristics. © 2015 Wiley Periodicals, Inc.
Parallel Implementation of a High Order Implicit Collocation Method for the Heat Equation
NASA Technical Reports Server (NTRS)
Kouatchou, Jules; Halem, Milton (Technical Monitor)
2000-01-01
We combine a high order compact finite difference approximation and collocation techniques to numerically solve the two-dimensional heat equation. The resulting method is implicit and can be parallelized with a strategy that allows parallelization across both time and space. We compare the parallel implementation of the new method with a classical implicit method, namely the Crank-Nicolson method, where the parallelization is done across space only. Numerical experiments are carried out on the SGI Origin 2000.
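For reference, the space-only-parallel comparator mentioned above is the standard Crank-Nicolson scheme; below is a minimal 1D sketch in Python (the paper treats the 2D equation, and a time-parallel collocation method would instead couple several time levels into one larger implicit solve). All names and parameters are illustrative.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def crank_nicolson_heat(u0, alpha, dx, dt, steps):
    """Crank-Nicolson for u_t = alpha * u_xx, zero Dirichlet boundaries.
    Each step solves (I - r*T) u_new = (I + r*T) u_old with
    r = alpha*dt/(2*dx**2) and T the second-difference stencil."""
    n = len(u0)
    r = alpha * dt / (2 * dx * dx)
    off = np.full(n - 1, r)
    A = diags([-off, np.full(n, 1 + 2 * r), -off], [-1, 0, 1], format="csc")
    B = diags([off, np.full(n, 1 - 2 * r), off], [-1, 0, 1])
    u = u0.astype(float)
    for _ in range(steps):
        u = spsolve(A, B @ u)   # the implicit solve parallelized in space
    return u

# Check against the exact solution u = exp(-pi^2 t) sin(pi x) at t = 0.1.
x = np.linspace(0, 1, 101)[1:-1]        # interior grid points
u = crank_nicolson_heat(np.sin(np.pi * x), 1.0, x[1] - x[0], 1e-3, 100)
print(np.max(np.abs(u - np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x))))
```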
Towards a parallel collisionless shock in LAPD
NASA Astrophysics Data System (ADS)
Weidl, M. S.; Heuer, P.; Schaeffer, D.; Dorst, R.; Winske, D.; Constantin, C.; Niemann, C.
2017-09-01
Using a high-energy laser to produce a super-Alfvénic carbon-ion beam in a strongly magnetized helium plasma, we expect to be able to observe the formation of a collisionless parallel shock inside the Large Plasma Device. We compare early magnetic-field measurements of the resonant right-hand instability with analytical predictions and find excellent agreement. Hybrid simulations show that the carbon ions couple to the background plasma and compress it, although so far the background ions are mainly accelerated perpendicular to the mean-field direction.
Unstructured grids on SIMD torus machines
NASA Technical Reports Server (NTRS)
Bjorstad, Petter E.; Schreiber, Robert
1994-01-01
Unstructured grids lead to unstructured communication on distributed memory parallel computers, a problem that has been considered difficult. Here, we consider adaptive, offline communication routing for a SIMD processor grid. Our approach is empirical. We use large data sets drawn from supercomputing applications instead of an analytic model of communication load. The chief contribution of this paper is an experimental demonstration of the effectiveness of certain routing heuristics. Our routing algorithm is adaptive, nonminimal, and is generally designed to exploit locality. We have a parallel implementation of the router, and we report on its performance.
Parallelized Stochastic Cutoff Method for Long-Range Interacting Systems
NASA Astrophysics Data System (ADS)
Endo, Eishin; Toga, Yuta; Sasaki, Munetaka
2015-07-01
We present a method of parallelizing the stochastic cutoff (SCO) method, which is a Monte-Carlo method for long-range interacting systems. After interactions are eliminated by the SCO method, we subdivide a lattice into noninteracting interpenetrating sublattices. This subdivision enables us to parallelize the Monte-Carlo calculation in the SCO method. Such a subdivision is found by numerically solving the vertex coloring of a graph created by the SCO method. We use an algorithm proposed by Kuhn and Wattenhofer to solve the vertex coloring by parallel computation. This method was applied to a two-dimensional magnetic dipolar system on an L × L square lattice to examine its parallelization efficiency. The result showed that, in the case of L = 2304, the speed of computation increased about 10^2 times by parallel computation with 288 processors.
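The key enabling step is the vertex coloring: sites that share no surviving (post-cutoff) interaction receive the same color, and each color class can be updated simultaneously. The sketch below substitutes a simple serial greedy coloring for the distributed Kuhn-Wattenhofer algorithm the paper uses; the graph is a toy stand-in for one produced by the SCO method.

```python
def greedy_coloring(adj):
    """Color vertices so that no edge joins two same-colored vertices;
    each color class is then a sublattice that can be swept in parallel."""
    colors = {}
    for v in sorted(adj, key=lambda v: -len(adj[v])):  # high degree first
        used = {colors[u] for u in adj[v] if u in colors}
        colors[v] = next(c for c in range(len(adj)) if c not in used)
    return colors

# Toy interaction graph (e.g., bonds surviving the stochastic cutoff).
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(greedy_coloring(adj))        # a 4-cycle needs only two colors
```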
Gas diffusion as a new fluidic unit operation for centrifugal microfluidic platforms.
Ymbern, Oriol; Sández, Natàlia; Calvo-López, Antonio; Puyol, Mar; Alonso-Chamarro, Julian
2014-03-07
A centrifugal microfluidic platform prototype with an integrated membrane for gas diffusion is presented for the first time. The centrifugal platform allows multiple parallel analyses on a single disk and integrates at least ten independent microfluidic subunits, which allow both calibration and sample determination. It is constructed from a polymeric substrate material and is designed to perform colorimetric determinations using a simple miniaturized optical detection system. The determination of three different analytes, sulfur dioxide, nitrite, and carbon dioxide, is carried out as a proof of concept of a versatile microfluidic system for analytes whose determination involves a gas diffusion separation step in the analytical procedure.
Watanabe, Kyoko; Varesio, Emmanuel; Hopfgartner, Gérard
2014-08-15
An assay was developed and validated for the quantification of eight protease inhibitors (indinavir (IDV), ritonavir (RTV), lopinavir (LPV), saquinavir (SQV), amprenavir (APV), nelfinavir (NFV), atazanavir (AZV) and darunavir (DRV)) in dried plasma spots using parallel ultra-high performance liquid chromatography and mass spectrometry detection in the multiple reaction monitoring mode. For each analyte an isotopically labeled internal standard was used, and for the assay, based on liquid-solid extraction, the area response ratio (analyte/IS) was found to be linear: from 0.025 μg/ml to 20 μg/ml for IDV, SQV, DRV, AZV, and LPV; from 0.025 μg/ml to 10 μg/ml for NFV and APV; and from 0.025 μg/ml to 5 μg/ml for RTV, using 15 μl of plasma spotted on filter paper placed in a sample tube. The total analysis time was 4 min, and inter-assay accuracies and precisions were in the range of 87.7-109% and 2.5-11.8%, respectively. On dried plasma spots, all analytes were found to be stable for at least 7 days. The practicability of the assay for blood was also demonstrated. The sample drying process could be reduced to 5 min using a commercial microwave system without any analyte degradation. Together with quantification, confirmatory analysis was performed on representative clinical samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Direct Images, Fields of Hilbert Spaces, and Geometric Quantization
NASA Astrophysics Data System (ADS)
Lempert, László; Szőke, Róbert
2014-04-01
Geometric quantization often produces not one Hilbert space to represent the quantum states of a classical system but a whole family H_s of Hilbert spaces, and the question arises if the spaces H_s are canonically isomorphic. Axelrod et al. (J. Diff. Geo. 33:787-902, 1991) and Hitchin (Commun. Math. Phys. 131:347-380, 1990) suggest viewing H_s as fibers of a Hilbert bundle H, introduce a connection on H, and use parallel transport to identify different fibers. Here we explore to what extent this can be done. First we introduce the notion of smooth and analytic fields of Hilbert spaces, and prove that if an analytic field over a simply connected base is flat, then it corresponds to a Hermitian Hilbert bundle with a flat connection and path independent parallel transport. Second we address a general direct image problem in complex geometry: pushing forward a Hermitian holomorphic vector bundle along a non-proper map. We give criteria for the direct image to be a smooth field of Hilbert spaces. Third we consider quantizing an analytic Riemannian manifold M by endowing TM with the family of adapted Kähler structures from Lempert and Szőke (Bull. Lond. Math. Soc. 44:367-374, 2012). This leads to a direct image problem. When M is homogeneous, we prove the direct image is an analytic field of Hilbert spaces. For certain such M, but not all, the direct image is even flat, which means that in those cases quantization is unique.
Learning Quantitative Sequence-Function Relationships from Massively Parallel Experiments
NASA Astrophysics Data System (ADS)
Atwal, Gurinder S.; Kinney, Justin B.
2016-03-01
A fundamental aspect of biological information processing is the ubiquity of sequence-function relationships—functions that map the sequence of DNA, RNA, or protein to a biochemically relevant activity. Most sequence-function relationships in biology are quantitative, but only recently have experimental techniques for effectively measuring these relationships been developed. The advent of such "massively parallel" experiments presents an exciting opportunity for the concepts and methods of statistical physics to inform the study of biological systems. After reviewing these recent experimental advances, we focus on the problem of how to infer parametric models of sequence-function relationships from the data produced by these experiments. Specifically, we retrace and extend recent theoretical work showing that inference based on mutual information, not the standard likelihood-based approach, is often necessary for accurately learning the parameters of these models. Closely connected with this result is the emergence of "diffeomorphic modes"—directions in parameter space that are far less constrained by data than likelihood-based inference would suggest. Analogous to Goldstone modes in physics, diffeomorphic modes arise from an arbitrarily broken symmetry of the inference problem. An analytically tractable model of a massively parallel experiment is then described, providing an explicit demonstration of these fundamental aspects of statistical inference. This paper concludes with an outlook on the theoretical and computational challenges currently facing studies of quantitative sequence-function relationships.
Quasineutral plasma expansion into infinite vacuum as a model for parallel ELM transport
NASA Astrophysics Data System (ADS)
Moulton, D.; Ghendrih, Ph; Fundamenski, W.; Manfredi, G.; Tskhakaya, D.
2013-08-01
An analytic solution for the expansion of a plasma into vacuum is assessed for its relevance to the parallel transport of edge localized mode (ELM) filaments along field lines. This solution solves the 1D1V Vlasov-Poisson equations for the adiabatic (instantaneous source), collisionless expansion of a Gaussian plasma bunch into an infinite space in the quasineutral limit. The quasineutral assumption is found to hold as long as λD0/σ0 ≲ 0.01 (where λD0 is the initial Debye length at peak density and σ0 is the parallel length of the Gaussian filament), a condition that is physically realistic. The inclusion of a boundary at x = L and consequent formation of a target sheath is found to have a negligible effect when L/σ0 ≳ 5, a condition that is physically plausible. Under the same condition, the target flux densities predicted by the analytic solution are well approximated by the ‘free-streaming’ equations used in previous experimental studies, strengthening the notion that these simple equations are physically reasonable. Importantly, the analytic solution predicts a zero heat flux density so that a fluid approach to the problem can be used equally well, at least when the source is instantaneous. It is found that, even for JET-like pedestal parameters, collisions can affect the expansion dynamics via electron temperature isotropization, although this is probably a secondary effect. Finally, the effect of a finite duration, τsrc, for the plasma source is investigated. As is found for an instantaneous source, when L/σ0 ≳ 5 the presence of a target sheath has a negligible effect, at least up to the explored range of τsrc = L/cs (where cs is the sound speed at the initial temperature).
SWMM5 Application Programming Interface and PySWMM: A Python Interfacing Wrapper
In support of the OpenWaterAnalytics open source initiative, the PySWMM project encompasses the development of a Python interfacing wrapper to SWMM5 with parallel ongoing development of the USEPA Stormwater Management Model (SWMM5) application programming interface (API). ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghorbanalilu, M.; Physics Department, Azarbaijan Shahid Madani University, Tabriz; Sadegzadeh, S.
2014-05-15
The existence of the Weibel instability for streaming electron, counterstreaming electron-electron (e-e), and electron-positron (e-p) plasmas with intrinsic temperature anisotropy is investigated. The temperature anisotropy is included in the directions perpendicular and parallel to the streaming direction. It is shown that the beam mean speed changes the instability mode, for a streaming electron beam, from the classic Weibel to the Weibel-like mode. The analytical and numerical solutions confirmed that Weibel-like modes are excited for both counterstreaming e-e and e-p plasmas. The growth rates of the instabilities in e-e and e-p plasmas are compared. The growth rate is larger for e-p plasmas if the thermal anisotropy is small, and the opposite is true for large thermal anisotropies. The analytical and numerical solutions are in good agreement only in the small parallel temperature and wave number limits, when the instability growth rate increases linearly with normalized wave number kc/ω_p.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bassi, Gabriele; Blednykh, Alexei; Smalyuk, Victor
A novel algorithm for self-consistent simulations of long-range wakefield effects has been developed and applied to the study of both longitudinal and transverse coupled-bunch instabilities at NSLS-II. The algorithm is implemented in the new parallel tracking code space (self-consistent parallel algorithm for collective effects) discussed in the paper. The code is applicable for accurate beam dynamics simulations in cases where both bunch-to-bunch and intrabunch motions need to be taken into account, such as chromatic head-tail effects on the coupled-bunch instability of a beam with a nonuniform filling pattern, or multibunch and single-bunch effects of a passive higher-harmonic cavity. The numerical simulations have been compared with analytical studies. For a beam with an arbitrary filling pattern, intensity-dependent complex frequency shifts have been derived starting from a system of coupled Vlasov equations. The analytical formulas and numerical simulations confirm that the analysis is reduced to the formulation of an eigenvalue problem based on the known formulas of the complex frequency shifts for the uniform filling pattern case.
Decision-making under risk conditions is susceptible to interference by a secondary executive task.
Starcke, Katrin; Pawlikowski, Mirko; Wolf, Oliver T; Altstötter-Gleich, Christine; Brand, Matthias
2011-05-01
Recent research suggests two ways of making decisions: an intuitive one and an analytical one. The current study examines whether a secondary executive task interferes with advantageous decision-making in the Game of Dice Task (GDT), a decision-making task with explicit and stable rules that taps executive functioning. One group of participants performed the original GDT alone; two further groups simultaneously performed the GDT together with either a 1-back or a 2-back working memory task as a secondary task. Results show that the group which performed the GDT and the secondary task with high executive load (2-back) decided less advantageously than the group which did not perform a secondary executive task. These findings give further evidence for the view that decision-making under risk conditions taps into the rational-analytical system, which acts in a serial rather than parallel way, as performance on the GDT is disturbed by a parallel task that also requires executive resources.
Myria: Scalable Analytics as a Service
NASA Astrophysics Data System (ADS)
Howe, B.; Halperin, D.; Whitaker, A.
2014-12-01
At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike in databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.
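As a concrete illustration of iteration as a first-class concept over relational data, the sketch below runs a semi-naive transitive closure: each round is a join of newly derived facts with the edge relation followed by a union, which is the inherently data-parallel pattern a MyriaL program would express declaratively (the code is plain Python, not MyriaL syntax).

    # Directed edges as a binary relation
    edges = {(1, 2), (2, 3), (3, 4)}

    # Semi-naive evaluation: only join facts derived in the previous round
    closure = set(edges)
    delta = set(edges)
    while delta:
        new_facts = {(a, c) for (a, b) in delta
                     for (b2, c) in edges if b == b2}   # join step
        delta = new_facts - closure                     # keep only new tuples
        closure |= delta                                # union step

    print(sorted(closure))  # all reachable pairs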
Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
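For reference, the Signal Detection Theory side of that comparison has a compact accuracy expression for localizing a target among M candidate locations; a sketch under the textbook equal-variance Gaussian assumptions (this is generic SDT, not necessarily the exact equations of the extended Guided Search model):

    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import quad

    def p_correct(d_prime, n_locations):
        """Probability that the target location yields the maximum response
        among n_locations, for detectability d' (equal-variance SDT)."""
        integrand = lambda x: norm.pdf(x - d_prime) * norm.cdf(x) ** (n_locations - 1)
        value, _ = quad(integrand, -np.inf, np.inf)
        return value

    print(p_correct(2.0, 8))  # localization accuracy for d' = 2, 8 locations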
Macro Scale Independently Homogenized Subcells for Modeling Braided Composites
NASA Technical Reports Server (NTRS)
Blinzler, Brina J.; Goldberg, Robert K.; Binienda, Wieslaw K.
2012-01-01
An analytical method has been developed to analyze the impact response of triaxially braided carbon fiber composites, including the penetration velocity and impact damage patterns. In the analytical model, the triaxial braid architecture is simulated by using four parallel shell elements, each of which is modeled as a laminated composite. Currently, each shell element is considered to be a smeared homogeneous material. The commercial transient dynamic finite element code LS-DYNA is used to conduct the simulations, and a continuum damage mechanics model internal to LS-DYNA is used as the material constitutive model. To determine the stiffness and strength properties required for the constitutive model, a top-down approach for determining the strength properties is merged with a bottom-up approach for determining the stiffness properties. The top-down portion uses global strengths obtained from macro-scale coupon level testing to characterize the material strengths for each subcell. The bottom-up portion uses micro-scale fiber and matrix stiffness properties to characterize the material stiffness for each subcell. Simulations of quasi-static coupon level tests for several representative composites are conducted along with impact simulations.
Van Oudenhove, Lukas; Cuypers, Stefaan E
2010-01-01
Parallel to psychiatry, "philosophy of mind" investigates the relationship between mind (mental domain) and body/brain (physical domain). Unlike older forms of philosophy of mind, contemporary analytical philosophy is not exclusively based on introspection and conceptual analysis, but also draws upon the empirical methods and findings of the sciences. This article outlines the conceptual framework of the "mind-body problem" as formulated in contemporary analytical philosophy and argues that this philosophical debate has potentially far-reaching implications for psychiatry as a clinical-scientific discipline, especially for its own autonomy and its relationship to neurology/neuroscience. This point is illustrated by a conceptual analysis of the five principles formulated in Kandel's 1998 article "A New Intellectual Framework for Psychiatry." Kandel's position in the philosophical mind-body debate is ambiguous, ranging from reductive physicalism (psychophysical identity theory) to non-reductive physicalism (in which the mental "supervenes" on the physical) to epiphenomenalist dualism or even emergent dualism. We illustrate how these diverging interpretations result in radically different views on the identity of psychiatry and its relationship with the rapidly expanding domain of neurology/neuroscience.
NASA Astrophysics Data System (ADS)
Favata, Antonino; Micheletti, Andrea; Ryu, Seunghwa; Pugno, Nicola M.
2016-10-01
An analytical benchmark and a simple consistent Mathematica program are proposed for graphene and carbon nanotubes, which may serve to test any molecular dynamics code implemented with REBO potentials. By exploiting the benchmark, we checked results produced by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) when adopting the second-generation Brenner potential, showed that this code in its current implementation produces results which are offset from those of the benchmark by a significant amount, and provide evidence of the reason.
Xyce Parallel Electronic Simulator : users' guide, version 2.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoekstra, Robert John; Waters, Lon J.; Rankin, Eric Lamont
2004-06-01
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator capable of simulating electrical circuits at a variety of abstraction levels. Primarily, Xyce has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving upon the current state-of-the-art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; device models which are specifically tailored to meet Sandia's needs, including many radiation-aware devices; a client-server or multi-tiered operating model wherein the numerical kernel can operate independently of the graphical user interface (GUI); and object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message-passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms, including serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. One feature required by designers is the ability to add device models, many specific to the needs of Sandia, to the code. To this end, the device package in the Xyce Parallel Electronic Simulator is designed to support a variety of device model inputs. These input formats include standard analytical models, behavioral models, look-up tables, and mesh-level PDE device models. Combined with this flexible interface is an architectural design that greatly simplifies the addition of circuit models. One of the most important features of Xyce is in providing a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia now has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods) research and development can be performed. Ultimately, these capabilities are migrated to end users.
A scalable parallel black oil simulator on distributed memory parallel computers
NASA Astrophysics Data System (ADS)
Wang, Kun; Liu, Hui; Chen, Zhangxin
2015-11-01
This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
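A minimal sketch of the inexact Newton idea mentioned above: each Newton step solves the Jacobian system only to a loose, residual-dependent tolerance, which is what keeps large parallel solves cheap. The forcing-term rule and toy problem are illustrative assumptions, and the gmres rtol keyword assumes a recent SciPy.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def inexact_newton(F, J_matvec, x0, tol=1e-8, max_iter=50):
        """Solve F(x) = 0, solving J dx = -F only approximately per step."""
        x = x0.copy()
        for _ in range(max_iter):
            r = F(x)
            rnorm = np.linalg.norm(r)
            if rnorm < tol:
                break
            eta = min(0.5, np.sqrt(rnorm))   # looser solves far from the root
            J = LinearOperator((x.size, x.size), matvec=lambda v: J_matvec(x, v))
            dx, _ = gmres(J, -r, rtol=eta)
            x = x + dx
        return x

    # Toy nonlinear system: F_i(x) = x_i^3 + x_i - 1, Jacobian is diagonal
    F = lambda x: x**3 + x - 1.0
    J_matvec = lambda x, v: (3.0 * x**2 + 1.0) * v
    print(inexact_newton(F, J_matvec, np.zeros(4)))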
Comparison of soil pollution concentrations determined using AAS and portable XRF techniques.
Radu, Tanja; Diamond, Dermot
2009-11-15
Past mining activities in the area of Silvermines, Ireland, have resulted in heavily polluted soils. The possibility of spreading pollution to the surrounding areas through dust blow-offs poses a potential threat for the local communities. Conventional environmental soil and dust analysis techniques are very slow and laborious, and consequently there is a need for fast and accurate analytical methods which can provide real-time in situ pollution mapping. Laboratory-based aqua regia acid digestion of the soil samples collected in the area, followed by atomic absorption spectrophotometry (AAS) analysis, confirmed very high pollution, especially by Pb, As, Cu, and Zn. In parallel, samples were analyzed using portable X-ray fluorescence (XRF) NITON instruments, both radioisotope- and miniature-tube-powered, and their performance was compared. Overall, the portable XRF instrument gave excellent correlation with the laboratory-based reference AAS method.
Modeling cometary photopolarimetric characteristics with Sh-matrix method
NASA Astrophysics Data System (ADS)
Kolokolova, L.; Petrov, D.
2017-12-01
Cometary dust is dominated by particles of complex shape and structure, which are often considered as fractal aggregates. Rigorous modeling of light scattering by such particles, even using parallelized codes and NASA supercomputer resources, is very demanding of computer time and memory. We are presenting a new approach to modeling cometary dust that is based on the Sh-matrix technique (e.g., Petrov et al., JQSRT, 112, 2012). This method is based on the T-matrix technique (e.g., Mishchenko et al., JQSRT, 55, 1996) and was developed after it had been found that the shape-dependent factors could be separated from the size- and refractive-index-dependent factors and presented as a shape matrix, or Sh-matrix. Size and refractive index dependences are incorporated through analytical operations on the Sh-matrix to produce the elements of the T-matrix. The Sh-matrix method keeps all advantages of the T-matrix method, including analytical averaging over particle orientation. Moreover, the surface integrals describing the Sh-matrix elements themselves can be solved analytically for particles of any shape. This makes the Sh-matrix approach an effective technique to simulate light scattering by particles of complex shape and surface structure. In this paper, we present cometary dust as an ensemble of Gaussian random particles. The shape of these particles is described by a log-normal distribution of their radius length and direction (Muinonen, EMP, 72, 1996). Changing one of the parameters of this distribution, the correlation angle, from 0 to 90 deg., we can model a variety of particles from spheres to particles of random complex shape. We survey the angular and spectral dependencies of intensity and polarization resulting from light scattering by such particles, studying how they depend on particle shape, size, and composition (including porous particles to simulate aggregates) to find the best fit to the cometary observations.
The effect of anisotropic heat transport on magnetic islands in 3-D configurations
NASA Astrophysics Data System (ADS)
Schlutt, M. G.; Hegna, C. C.
2012-08-01
An analytic theory of nonlinear pressure-induced magnetic island formation using a boundary layer analysis is presented. This theory extends previous work by including the effects of finite parallel heat transport and is applicable to general three dimensional magnetic configurations. In this work, particular attention is paid to the role of finite parallel heat conduction in the context of pressure-induced island physics. It is found that localized currents that require self-consistent deformation of the pressure profile, such as resistive interchange and bootstrap currents, are attenuated by finite parallel heat conduction when the magnetic islands are sufficiently small. However, these anisotropic effects do not change saturated island widths caused by Pfirsch-Schlüter current effects. Implications for finite pressure-induced island healing are discussed.
Segmental Refinement: A Multigrid Technique for Data Locality
Adams, Mark F.; Brown, Jed; Knepley, Matt; ...
2016-08-04
In this paper, we investigate a domain decomposed multigrid technique, termed segmental refinement, for solving general nonlinear elliptic boundary value problems. We extend the method first proposed in 1994 by analytically and experimentally investigating its complexity. We confirm that communication of traditional parallel multigrid is eliminated on fine grids, with modest amounts of extra work and storage, while maintaining the asymptotic exactness of full multigrid. We observe an accuracy dependence on the segmental refinement subdomain size, which was not considered in the original analysis. Finally, we present a communication complexity analysis that quantifies the communication costs ameliorated by segmental refinement and report performance results with up to 64K cores on a Cray XC30.
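For orientation, the cycle that segmental refinement decomposes is the standard multigrid V-cycle; a compact 1D Poisson version with weighted-Jacobi smoothing, injection restriction, and linear interpolation (generic textbook multigrid, not the segmental-refinement variant itself):

    import numpy as np

    def v_cycle(u, f, h, n_smooth=3):
        """One V-cycle for -u'' = f on a uniform grid with Dirichlet BCs;
        grids have 2^k + 1 points so every coarsening stays aligned."""
        def smooth(u, f):
            for _ in range(n_smooth):   # weighted Jacobi sweeps
                u[1:-1] += 0.67 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
            return u

        u = smooth(u, f)
        if u.size <= 3:
            return u
        r = np.zeros_like(u)            # residual r = f - A u
        r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
        ec = v_cycle(np.zeros_like(r[::2]), r[::2].copy(), 2 * h, n_smooth)
        e = np.zeros_like(u)            # prolong the coarse correction
        e[::2] = ec
        e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
        return smooth(u + e, f)

    # e.g. one cycle on 129 points: u = v_cycle(np.zeros(129), np.ones(129), 1/128)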
NASA Technical Reports Server (NTRS)
Freeman, Delman C., Jr.; Reubush, David E.; McClinton, Charles R.; Rausch, Vincent L.; Crawford, J. Larry
1997-01-01
This paper provides an overview of NASA's Hyper-X Program; a focused hypersonic technology effort designed to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. This paper presents an overview of the flight test program, research objectives, approach, schedule and status. Substantial experimental database and concept validation have been completed. The program is currently concentrating on the first, Mach 7, vehicle development, verification and validation in preparation for wind-tunnel testing in 1998 and flight testing in 1999. Parallel to this effort the Mach 5 and 10 vehicle designs are being finalized. Detailed analytical and experimental evaluation of the Mach 7 vehicle at the flight conditions is nearing completion, and will provide a database for validation of design methods once flight test data are available.
NASA Astrophysics Data System (ADS)
Molokov, S. Y.; Allen, J. E.
Magnetohydrodynamic (MHD) flows of viscous incompressible fluid in strong magnetic fields parallel to a free surface of the fluid are investigated. The problem of flow in an open channel due to a moving side wall in a uniform magnetic field is considered and treated by the method of matched asymptotic expansions. The flow region is divided into various subregions, and the leading terms of the asymptotic expansions (as M, the Hartmann number, tends to infinity) of the solutions of the corresponding problems in each subregion are obtained. An exact analytic solution of the equations governing the free-surface layer, of thickness of order M^(-1/2), is obtained.
A Novel Way To Practice Slope.
ERIC Educational Resources Information Center
Kennedy, Jane B.
1997-01-01
Presents examples of using a tic-tac-toe format to practice finding the slope and identifying parallel and perpendicular lines from various equation formats. Reports the successful use of this format as a review in both precalculus and calculus classes before students work with applications of analytic geometry. (JRH)
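In the same spirit as that practice activity, a tiny checker for slope relationships (our own illustrative helper, not from the article):

    def classify(m1, m2, tol=1e-9):
        """Classify two lines by slope: parallel, perpendicular, or neither."""
        if abs(m1 - m2) < tol:
            return "parallel"
        if abs(m1 * m2 + 1.0) < tol:    # perpendicular slopes multiply to -1
            return "perpendicular"
        return "neither"

    print(classify(2.0, 2.0))    # parallel
    print(classify(2.0, -0.5))   # perpendicular: 2 * (-1/2) = -1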
A coupled mode formulation by reciprocity and a variational principle
NASA Technical Reports Server (NTRS)
Chuang, Shun-Lien
1987-01-01
A coupled mode formulation for parallel dielectric waveguides is presented via two methods: a reciprocity theorem and a variational principle. In the first method, a generalized reciprocity relation for two sets of field solutions satisfying Maxwell's equations and the boundary conditions in two different media, respectively, is derived. Based on the generalized reciprocity theorem, the coupled mode equations can then be formulated. The second method, using a variational principle, is also presented for a general waveguide system which can be lossy. The results of the variational principle can be shown to be identical to those from the reciprocity theorem. The exact relations governing the 'conventional' and the new coupling coefficients are derived. It is shown analytically that the present formulation satisfies the reciprocity theorem and power conservation exactly, while the conventional theory violates power conservation and the reciprocity theorem by as much as 55 percent and the Hardy-Streifer (1985, 1986) theory by 0.033 percent, for example.
NASA Astrophysics Data System (ADS)
Gerke, Kirill M.; Vasilyev, Roman V.; Khirevich, Siarhei; Collins, Daniel; Karsanina, Marina V.; Sizonenko, Timofey O.; Korost, Dmitry V.; Lamontagne, Sébastien; Mallants, Dirk
2018-05-01
Permeability is one of the fundamental properties of porous media and is required for large-scale Darcian fluid flow and mass transport models. Whilst permeability can be measured directly at a range of scales, there are increasing opportunities to evaluate permeability from pore-scale fluid flow simulations. We introduce the free software Finite-Difference Method Stokes Solver (FDMSS) that solves Stokes equation using a finite-difference method (FDM) directly on voxelized 3D pore geometries (i.e. without meshing). Based on explicit convergence studies, validation on sphere packings with analytically known permeabilities, and comparison against lattice-Boltzmann and other published FDM studies, we conclude that FDMSS provides a computationally efficient and accurate basis for single-phase pore-scale flow simulations. By implementing an efficient parallelization and code optimization scheme, permeability inferences can now be made from 3D images of up to 10^9 voxels using modern desktop computers. Case studies demonstrate the broad applicability of the FDMSS software for both natural and artificial porous media.
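The permeability inference that follows such a pore-scale simulation is a direct application of Darcy's law, k = Q·μ·L / (A·ΔP); a sketch with hypothetical numbers (the FDMSS post-processing itself may differ in detail):

    def darcy_permeability(flow_rate, viscosity, length, area, dp):
        """k = Q * mu * L / (A * dP), SI units, result in m^2."""
        return flow_rate * viscosity * length / (area * dp)

    # Hypothetical 300-voxel cube with 1 um voxels, water, unit pressure drop
    voxel = 1e-6                                   # m
    side = 300 * voxel
    k = darcy_permeability(flow_rate=1e-14, viscosity=1e-3,
                           length=side, area=side**2, dp=1.0)
    print(k)                                       # permeability in m^2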
Abdulmawjood, Amir; Grabowski, Nils; Fohler, Svenja; Kittler, Sophie; Nagengast, Helga; Klein, Guenter
2014-01-01
Animal species identification is one of the primary duties of official food control. Since ostrich meat is difficult to differentiate macroscopically from beef, new analytical methods are needed. To enforce labeling regulations for the authentication of ostrich meat, it might be of importance to develop and evaluate a rapid and reliable assay. In the present study, a loop-mediated isothermal amplification (LAMP) assay based on the cytochrome b gene of the mitochondrial DNA of the species Struthio camelus was developed. The LAMP assay was used in combination with a real-time fluorometer. The developed system allowed the detection of 0.01% ostrich meat in products. In parallel, a direct swab method without nucleic acid extraction using the HYPLEX LPTV buffer was also evaluated. This rapid processing method allowed detection of ostrich meat without major incubation steps. In summary, the LAMP assay had excellent sensitivity and specificity for detecting ostrich meat and could provide a sampling-to-result identification time of 15 to 20 minutes. PMID:24963709
NASA Astrophysics Data System (ADS)
Kanaun, S.; Markov, A.
2017-06-01
An efficient numerical method for the solution of static elasticity problems for an infinite homogeneous medium containing inhomogeneities (cracks and inclusions) is developed. A finite number of heterogeneous inclusions and planar parallel cracks of arbitrary shape is considered. The problem is reduced to a system of surface integral equations for the crack opening vectors and volume integral equations for the stress tensors inside the inclusions. For the numerical solution of these equations, a class of Gaussian approximating functions is used. The method based on these functions is mesh-free. For such functions, the elements of the matrix of the discretized system are combinations of explicit analytical functions and five standard 1D integrals that can be tabulated. Thus, numerical integration is excluded from the construction of the matrix of the discretized problem. For regular node grids, the matrix of the discretized system has Toeplitz properties, and the Fast Fourier Transform technique can be used for calculating matrix-vector products with such matrices.
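The Toeplitz remark is what enables the FFT speedup: a Toeplitz matrix-vector product can be computed in O(n log n) by embedding the matrix in a circulant of twice the size. A generic sketch of that standard trick (not the paper's code):

    import numpy as np
    from scipy.linalg import toeplitz

    def toeplitz_matvec(first_col, first_row, x):
        """Multiply the Toeplitz matrix defined by its first column and first
        row (first_col[0] == first_row[0]) by x via circulant embedding."""
        n = len(x)
        c = np.concatenate([first_col, [0.0], first_row[:0:-1]])  # circulant col
        y = np.fft.ifft(np.fft.fft(c) *
                        np.fft.fft(np.concatenate([x, np.zeros(n)])))
        return y[:n].real

    col, row = np.random.rand(5), np.random.rand(5)
    row[0] = col[0]
    x = np.random.rand(5)
    print(np.allclose(toeplitz_matvec(col, row, x), toeplitz(col, row) @ x))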
Simulation Exploration through Immersive Parallel Planes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunhart-Lupo, Nicholas J; Bush, Brian W; Gruchalla, Kenny M
We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selection, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
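Stripped of the immersive rendering, the polyline mapping and brushing described above reduce to simple array operations; a plain NumPy sketch of that core (the pairing of dimensions onto planes and the joystick interaction are beyond this illustration):

    import numpy as np

    data = np.random.rand(1000, 6)            # (observations, dimensions)
    time = np.linspace(0.0, 1.0, 1000)        # one time stamp per observation

    # Each observation becomes a polyline; its vertex on axis j is the
    # normalized value of dimension j
    lo, hi = data.min(axis=0), data.max(axis=0)
    vertices = (data - lo) / (hi - lo)

    # Brushing: highlight observations whose axis-2 value lies in [0.4, 0.6]
    brushed = (vertices[:, 2] >= 0.4) & (vertices[:, 2] <= 0.6)
    # Time slider: additionally keep only the first half of the series
    selected = brushed & (time <= 0.5)
    print(selected.sum(), "observations highlighted")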
Machine learning for Big Data analytics in plants.
Ma, Chuang; Zhang, Hao Helen; Wang, Xiangfeng
2014-12-01
Rapid advances in high-throughput genomic technology have enabled biology to enter the era of 'Big Data' (large datasets). The plant science community not only needs to build its own Big-Data-compatible parallel computing and data management infrastructures, but also to seek novel analytical paradigms to extract information from the overwhelming amounts of data. Machine learning offers promising computational and analytical solutions for the integrative analysis of large, heterogeneous and unstructured datasets on the Big-Data scale, and is gradually gaining popularity in biology. This review introduces the basic concepts and procedures of machine-learning applications and envisages how machine learning could interface with Big Data technology to facilitate basic research and biotechnology in the plant sciences. Copyright © 2014 Elsevier Ltd. All rights reserved.
Self-balanced modulation and magnetic rebalancing method for parallel multilevel inverters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hui; Shi, Yanjun
A self-balanced modulation method and a closed-loop magnetic flux rebalancing control method for parallel multilevel inverters. The combination of the two methods provides for balancing of the magnetic flux of the inter-cell transformers (ICTs) of the parallel multilevel inverters without deteriorating the quality of the output voltage. In various embodiments a parallel multi-level inverter modulator is provided, including a multi-channel comparator to generate a multiplexed digitized ideal waveform for a parallel multi-level inverter and a finite state machine (FSM) module coupled to the multi-channel comparator, the FSM module receiving the multiplexed digitized ideal waveform and generating a pulse-width-modulated gate-drive signal for each switching device of the parallel multi-level inverter. The system and method provide for optimization of the output voltage spectrum without influencing the magnetic balancing.
NASA Astrophysics Data System (ADS)
Krishna, M. Veera; Swarnalathamma, B. V.
2017-07-01
We considered the transient MHD flow of a reactive second-grade fluid through a porous medium between two infinitely long horizontal parallel plates, when one of the plates is set into uniformly accelerated motion, in the presence of a uniform transverse magnetic field under an Arrhenius reaction rate. The governing equations are solved by the Laplace transform technique. The effects of the pertinent parameters on the velocity and temperature are discussed in detail. The shear stress and Nusselt number at the plates are also obtained analytically and computationally discussed with reference to the governing parameters.
Lunar electromagnetic scattering. 1: Propagation parallel to the diamagnetic cavity axis
NASA Technical Reports Server (NTRS)
Schwartz, K.; Schubert, G.
1972-01-01
An analytic theory is developed for the time dependent magnetic fields inside the Moon and the diamagnetic cavity when the interplanetary electromagnetic field fluctuation propagates parallel to the cavity axis. The Moon model has an electrical conductivity which is an arbitrary function of radius. The lunar cavity is modelled by a nonconducting cylinder extending infinitely far downstream. For frequencies less than about 50 Hz, the cavity is a cylindrical waveguide below cutoff. Thus, cavity field perturbations due to the Moon do not propagate down the cavity, but are instead attenuated with distance downstream from the Moon.
Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew
2015-01-01
Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the task is challenging for geoscientists because processing the massive amount of data is both computing- and data-intensive, in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed that leverage cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data, and a service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012
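To make the MapReduce layer concrete, a toy map/reduce aggregation over point observations using Python multiprocessing; the gridded-data example and function names are ours, not the framework's API:

    from multiprocessing import Pool
    from collections import Counter

    def map_chunk(records):
        """Map: emit per-grid-cell counts for one chunk of observations."""
        return Counter(cell for cell, _value in records)

    def reduce_counts(partials):
        """Reduce: merge the per-chunk counters into global counts."""
        total = Counter()
        for c in partials:
            total.update(c)
        return total

    if __name__ == "__main__":
        # Hypothetical observations: (grid_cell_id, measured_value)
        records = [("cell_%d" % (i % 100), float(i)) for i in range(100000)]
        chunks = [records[i::4] for i in range(4)]   # decompose across workers
        with Pool(4) as pool:
            partials = pool.map(map_chunk, chunks)
        print(reduce_counts(partials).most_common(3))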
Developmental changes in analytic and holistic processes in face perception.
Joseph, Jane E; DiBartolo, Michelle D; Bhatt, Ramesh S
2015-01-01
Although infants demonstrate sensitivity to some kinds of perceptual information in faces, many face capacities continue to develop throughout childhood. One debate is the degree to which children perceive faces analytically versus holistically and how these processes undergo developmental change. In the present study, school-aged children and adults performed a perceptual matching task with upright and inverted face and house pairs that varied in similarity of featural or 2nd-order configural information. Holistic processing was operationalized as the degree of serial processing when discriminating faces and houses [i.e., increased reaction time (RT) as more features or spacing relations were shared between stimuli]. Analytical processing was operationalized as the degree of parallel processing (no change in RT as a function of greater similarity of features or spatial relations). Adults showed the most evidence for holistic processing (most strongly for 2nd-order faces), and holistic processing was weaker for inverted faces and houses. Younger children (6-8 years), in contrast, showed analytical processing across all experimental manipulations. Older children (9-11 years) showed an intermediate pattern, with a trend toward holistic processing of 2nd-order faces like adults but parallel processing in other experimental conditions like younger children. These findings indicate that holistic face representations emerge around 10 years of age. In adults both 2nd-order and featural information are incorporated into holistic representations, whereas older children only incorporate 2nd-order information. Holistic processing was not evident in younger children. Hence, the development of holistic face representations relies on 2nd-order processing initially, then incorporates featural information by adulthood.
An analytic description of electrodynamic dispersion in free-flow zone electrophoresis.
Dutta, Debashis
2015-07-24
The present work analyzes the electrodynamic dispersion of sample streams in a free-flow zone electrophoresis (FFZE) chamber resulting from partial or complete blockage of electroosmotic flow (EOF) across the channel width by the sidewalls of the conduit. This blockage of EOF has been assumed to generate a pressure-driven backflow in the transverse direction for maintaining flow balance in the system. A parallel-plate-based FFZE device with the analyte stream located far away from the channel side regions has been considered to simplify the current analysis. Applying a method-of-moments formulation, an analytic expression was derived for the variance of the sample zone at steady state as a function of its position in the separation chamber under these conditions. It has been shown that the increase in stream broadening due to the electrodynamic dispersion phenomenon is additive to the contributions from molecular diffusion and sample injection, and simply modifies the coefficient of the hydrodynamic dispersion term for a fixed lateral migration distance of the sample stream. Moreover, this dispersion mechanism can dominate the overall spatial variance of analyte zones when a significant fraction of the EOF is blocked by the channel sidewalls. The analysis also shows that analyte streams do not undergo any hydrodynamic broadening due to unwanted pressure-driven cross-flows in an FFZE chamber in the absence of a transverse electric field. The noted results have been validated using Monte Carlo simulations, which further demonstrate that while the sample concentration profile at the channel outlet approaches a Gaussian distribution only in FFZE chambers substantially longer than the product of the axial pressure-driven velocity and the characteristic diffusion time in the system, the spatial variance of the exiting analyte stream is well described by the Taylor-Aris dispersion limit even in analysis ducts much shorter than this length scale. Copyright © 2015 Elsevier B.V. All rights reserved.
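In symbols, and in our notation rather than the paper's, the additivity claim reads

    \sigma_{total}^2 = \sigma_{injection}^2 + \sigma_{diffusion}^2 + \sigma_{electrodynamic}^2

for a fixed lateral migration distance, with the electrodynamic term entering as a modified coefficient on the hydrodynamic dispersion contribution.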
Block-Parallel Data Analysis with DIY2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Peterka, Tom
DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
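A schematic Python rendering of the block-structured model described above (decompose into blocks, assign, iterate over blocks, communicate between neighbors); DIY2 itself is a C++ library, so this illustrates the abstraction rather than its API:

    import numpy as np

    class Block:
        def __init__(self, data):
            self.data = data        # this block's slice of the domain
            self.incoming = []      # values received from neighbor blocks

    # Decompose a 1D domain into four blocks, one per processing element
    domain = np.arange(16.0)
    blocks = [Block(domain[i:i + 4].copy()) for i in range(0, 16, 4)]

    for _round in range(3):
        # Communication pattern: each block sends its right boundary value
        for left, right in zip(blocks, blocks[1:]):
            right.incoming = [left.data[-1]]
        # Computation: iterate over blocks; each touches only its own state,
        # so blocks can run on threads or be staged in and out of core
        for b in blocks:
            if b.incoming:
                b.data += 0.1 * (b.incoming[0] - b.data[0])

    print([list(b.data.round(2)) for b in blocks])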
Composing Data Parallel Code for a SPARQL Graph Engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castellana, Vito G.; Tumeo, Antonino; Villa, Oreste
Big data analytics processes large amounts of data to extract knowledge from it. Semantic databases are big data applications that adopt the Resource Description Framework (RDF) to structure metadata through a graph-based representation. The graph-based representation provides several benefits, such as the possibility to perform in-memory processing with large amounts of parallelism. SPARQL is a language used to perform queries on RDF-structured data through graph matching. In this paper we present a tool that automatically translates SPARQL queries to parallel graph crawling and graph matching operations. The tool also supports complex SPARQL constructs, which require more than basic graph matching for their implementation. The tool generates parallel code annotated with OpenMP pragmas for x86 shared-memory multiprocessors (SMPs). With respect to commercial database systems such as Virtuoso, our approach reduces the memory occupation due to join operations and provides higher performance. We show the scaling of the automatically generated graph-matching code on a 48-core SMP.
Kang, Soyoung; Oh, Seung Min; Chung, Kyu Hyuck; Lee, Sooyeun
2014-09-01
γ-Hydroxybutyrate (GHB) is a drug of abuse with a strong anesthetic effect; however, proving its ingestion through the quantification of GHB in biological specimens is not straightforward due to the endogenous presence of GHB in human blood, urine, saliva, etc. In the present study, a surrogate analyte approach was applied to the accurate quantitative determination of GHB in human urine using liquid chromatography-tandem mass spectrometry (LC-MS/MS) in order to overcome this issue. For this, ²H₆-GHB and ¹³C₂-dl-3-hydroxybutyrate were used as a surrogate standard and as an internal standard, respectively, and parallelism between the surrogate analyte approach and standard addition was investigated as an initial step. The validation results proved the method to be selective, accurate, and precise, with acceptable linearity within the calibration range (0.1-1 μg/ml). The limit of detection and the limit of quantification of ²H₆-GHB were 0.05 and 0.1 μg/ml, respectively. No significant variations were observed among urine matrices from different sources. The stability of ²H₆-GHB was satisfactory under sample storage and in-process conditions. However, in vitro production of endogenous GHB was observed when the urine sample was kept under the in-process condition for 4 h and under the storage conditions of 4 and -20 °C. In order to facilitate the practical interpretation of urinary GHB, endogenous GHB was accurately measured in urine samples from 79 healthy volunteers using the surrogate analyte-based LC-MS/MS method developed in the present study. The unadjusted and creatinine-adjusted GHB concentrations in 74 urine samples with quantitative results ranged from 0.09 to 1.8 μg/ml and from 4.5 to 530 μg/mmol creatinine, respectively. No significant correlation was observed between the unadjusted and creatinine-adjusted GHB concentrations. The urinary endogenous GHB concentrations were affected by gender and age, while they were not significantly influenced by habitual smoking, alcohol drinking, or drinking caffeine-containing beverages. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Song, Y.; Gui, Z.; Wu, H.; Wei, Y.
2017-09-01
Analysing spatiotemporal distribution patterns and their dynamics for different industries can help us learn the macro-level developing trends of those industries, and in turn provides references for industrial spatial planning. However, the analysis is a challenging task which requires an easy-to-understand information presentation mechanism and a powerful computational technology to support visual analytics of big data on the fly. For this reason, this research proposes a web-based framework to enable such visual analytics. The framework uses the standard deviational ellipse (SDE) and the shifting route of gravity centers to show the spatial distribution and yearly developing trends of different enterprise types according to their industry categories. The calculation of gravity centers and ellipses is parallelized using Apache Spark to accelerate the processing. In the experiments, we use the enterprise registration dataset for Mainland China from 1960 to 2015, which contains fine-grained location information (i.e., coordinates of each individual enterprise), to demonstrate the feasibility of this framework. The experiment result shows that the developed visual analytics method is helpful for understanding the multi-level patterns and developing trends of different industries in China. Moreover, the proposed framework can be used to analyse any natural or social spatiotemporal point process with large data volume, such as crime and disease.
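The two statistics driving that visualization are short NumPy computations; a sketch using a common eigen-decomposition formulation of the standard deviational ellipse (the paper's Spark version distributes these same sums, and the exact SDE convention may differ):

    import numpy as np

    def gravity_center(points):
        """Mean center of an (n, 2) array of enterprise coordinates."""
        return points.mean(axis=0)

    def standard_deviational_ellipse(points):
        """Return (center, semi-axis lengths, rotation angle in radians)."""
        center = gravity_center(points)
        d = points - center
        cov = d.T @ d / len(points)
        eigvals, eigvecs = np.linalg.eigh(cov)   # principal directions
        lengths = np.sqrt(eigvals)               # std. dev. along each axis
        angle = np.arctan2(eigvecs[1, -1], eigvecs[0, -1])
        return center, lengths, angle

    pts = np.random.randn(500, 2) * np.array([3.0, 1.0])  # elongated cluster
    print(standard_deviational_ellipse(pts))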
Spin–orbit DFT with Analytic Gradients and Applications to Heavy Element Compounds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Zhiyong
We have implemented the unrestricted DFT approach with one-electron spin–orbit operators in the massively parallel NWChem program. Also implemented is the analytic gradient in the DFT approach with spin–orbit interactions. The current capabilities include single-point calculations and geometry optimization. Vibrational frequencies can be calculated numerically from the analytically calculated gradients. The implementation is based on the spin–orbit interaction operator derived from the effective core potential approach. The exchange functionals used in the implementation are functionals derived for non-spin–orbit calculations, including GGA as well as hybrid functionals. Spin–orbit Hartree–Fock calculations can also be carried out. We have applied the spin–orbit DFT methods to the uranyl aqua complexes. We have optimized the structures and calculated the vibrational frequencies of both (UO2^2+)aq and (UO2^+)aq with and without spin–orbit effects. The effects of the spin–orbit interaction on the structures and frequencies of these two complexes are discussed. We also carried out calculations for Th2, and several low-lying electronic states are calculated. Our results indicate that, for open-shell systems, there are significant effects due to spin–orbit interactions, and the electronic configurations with and without spin–orbit interactions could change due to the occupation of orbitals with larger spin–orbit interactions.
NASA Astrophysics Data System (ADS)
Khechiba, Khaled; Mamou, Mahmoud; Hachemi, Madjid; Delenda, Nassim; Rebhi, Redha
2017-06-01
The present study is focused on Lapwood convection in isotropic porous media saturated with non-Newtonian shear thinning fluid. The non-Newtonian rheological behavior of the fluid is modeled using the general viscosity model of Carreau-Yasuda. The convection configuration consists of a shallow porous cavity with a finite aspect ratio and subject to a vertical constant heat flux, whereas the vertical walls are maintained impermeable and adiabatic. An approximate analytical solution is developed on the basis of the parallel flow assumption, and numerical solutions are obtained by solving the full governing equations. The Darcy model with the Boussinesq approximation and energy transport equations are solved numerically using a finite difference method. The results are obtained in terms of the Nusselt number and the flow fields as functions of the governing parameters. A good agreement is obtained between the analytical approximation and the numerical solution of the full governing equations. The effects of the rheological parameters of the Carreau-Yasuda fluid and Rayleigh number on the onset of subcritical convection thresholds are demonstrated. Regardless of the aspect ratio of the enclosure and thermal boundary condition type, the subcritical convective flows are seen to occur below the onset of stationary convection. Correlations are proposed to estimate the subcritical Rayleigh number for the onset of finite amplitude convection as a function of the fluid rheological parameters. Linear stability of the convective motion, predicted by the parallel flow approximation, is studied, and the onset of Hopf bifurcation, from steady convective flow to oscillatory behavior, is found to depend strongly on the rheological parameters. In general, Hopf bifurcation is triggered earlier as the fluid becomes more and more shear-thinning.
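For reference, the Carreau-Yasuda apparent viscosity used above has the standard form eta(g) = eta_inf + (eta_0 - eta_inf) * [1 + (lambda*g)^a]^((n-1)/a); a direct transcription with illustrative parameter values:

    import numpy as np

    def carreau_yasuda(shear_rate, eta0, eta_inf, lam, a, n):
        """Apparent viscosity of a Carreau-Yasuda fluid."""
        return eta_inf + (eta0 - eta_inf) * (
            1.0 + (lam * shear_rate) ** a) ** ((n - 1.0) / a)

    rates = np.logspace(-2, 3, 6)
    # Shear-thinning case: n < 1 makes viscosity fall with shear rate
    print(carreau_yasuda(rates, eta0=10.0, eta_inf=0.1, lam=1.0, a=2.0, n=0.5))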
Parallelization of the FLAPW method and comparison with the PPW method
NASA Astrophysics Data System (ADS)
Canning, Andrew; Mannstadt, Wolfgang; Freeman, Arthur
2000-03-01
The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. In the past the FLAPW method has been limited to systems of about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell running on up to 512 processors on a Cray T3E parallel supercomputer. Some results will also be presented on a comparison of the plane-wave pseudopotential method and the FLAPW method on large systems.
Torbati, Mohammadali; Farajzadeh, Mir Ali; Torbati, Mostafa; Nabil, Ali Akbar Alizadeh; Mohebbi, Ali; Afshar Mogaddam, Mohammad Reza
2018-01-01
A new microextraction method, named salt- and pH-induced homogeneous liquid-liquid microextraction, has been developed in a home-made extraction device for the extraction and preconcentration of some pyrethroid insecticides from different fruit juice samples prior to gas chromatography-mass spectrometry. In the present work, an extraction device made from two parallel glass tubes with different lengths and diameters was used in the microextraction procedure. In this method, a homogeneous solution of a sample solution and an extraction solvent (pivalic acid) was broken by performing an acid-base reaction, and the extraction solvent was produced throughout the solution. The produced droplets of the extraction solvent rose through the solution and were solidified using an ice bath. They were collected without a centrifugation step. Under the optimum conditions, the limits of detection and quantification were obtained in the ranges of 0.006-0.038 and 0.023-0.134 ng mL^-1, respectively. The enrichment factors and extraction recoveries of the selected analytes were in the ranges of 365-460 and 73-92%, respectively. The relative standard deviations were lower than 9% for intra-day (n = 6) and inter-day (n = 4) precisions at a concentration of 1 ng mL^-1 of each analyte. Finally, some fruit juice samples were effectively analyzed by the proposed method. Copyright © 2017 Elsevier B.V. All rights reserved.
Data decomposition method for parallel polygon rasterization considering load balancing
NASA Astrophysics Data System (ADS)
Zhou, Chen; Chen, Zhenjie; Liu, Yongxue; Li, Feixue; Cheng, Liang; Zhu, A.-xing; Li, Manchun
2015-12-01
It is essential to adopt parallel computing technology to rapidly rasterize massive polygon data. In parallel rasterization, it is difficult to design an effective data decomposition method. Conventional methods ignore load balancing of polygon complexity in parallel rasterization and thus fail to achieve high parallel efficiency. In this paper, a novel data decomposition method based on polygon complexity (DMPC) is proposed. First, four factors that possibly affect the rasterization efficiency were investigated. Then, a metric represented by the boundary number and raster pixel number in the minimum bounding rectangle was developed to calculate the complexity of each polygon. Using this metric, polygons were rationally allocated according to the polygon complexity, and each process could achieve balanced loads of polygon complexity. To validate the efficiency of DMPC, it was used to parallelize different polygon rasterization algorithms and tested on different datasets. Experimental results showed that DMPC could effectively parallelize polygon rasterization algorithms. Furthermore, the implemented parallel algorithms with DMPC could achieve good speedup ratios of at least 15.69 and generally outperformed conventional decomposition methods in terms of parallel efficiency and load balancing. In addition, the results showed that DMPC exhibited consistently better performance for different spatial distributions of polygons.
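A simplified sketch of the DMPC idea: score each polygon by a proxy complexity (boundary vertex count plus raster pixels covered by its minimum bounding rectangle) and assign polygons to processes greedily so the complexity loads stay balanced. The metric weighting and allocation details here are our assumptions, not the paper's exact formulation.

    import heapq

    def complexity(polygon, cell_size=1.0):
        """Proxy metric: boundary vertices + raster pixels in the MBR."""
        xs = [x for x, _y in polygon]
        ys = [y for _x, y in polygon]
        pixels = ((max(xs) - min(xs)) / cell_size) * \
                 ((max(ys) - min(ys)) / cell_size)
        return len(polygon) + pixels

    def allocate(polygons, n_procs):
        """Greedy longest-processing-time assignment by complexity."""
        heap = [(0.0, p) for p in range(n_procs)]    # (load, process id)
        heapq.heapify(heap)
        assignment = {p: [] for p in range(n_procs)}
        for i, poly in sorted(enumerate(polygons),
                              key=lambda ip: -complexity(ip[1])):
            load, p = heapq.heappop(heap)
            assignment[p].append(i)
            heapq.heappush(heap, (load + complexity(poly), p))
        return assignment

    square = [(0, 0), (0, 10), (10, 10), (10, 0)]
    sliver = [(0, 0), (100, 1), (200, 0)]
    print(allocate([square, sliver, square, sliver], 2))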
NASA Astrophysics Data System (ADS)
Markou, A. A.; Manolis, G. D.
2018-03-01
Numerical methods for the solution of dynamical problems in engineering date back to the 1950s. The most famous and widely used time-stepping algorithm was developed by Newmark in 1959. In the present study, for the first time, the Newmark algorithm is developed for the case of the trilinear hysteretic model, a model that has been used to describe the shear behaviour of high damping rubber bearings. This model is calibrated against free-vibration field tests implemented on a hybrid base-isolated building, namely the Solarino project in Italy, as well as against laboratory experiments. A single-degree-of-freedom system is used to describe the behaviour of a low-rise building isolated with a hybrid system comprising high damping rubber bearings and low friction sliding bearings. The behaviour of the high damping rubber bearings is simulated by the trilinear hysteretic model, while the behaviour of the low friction sliding bearings is modeled by a linear Coulomb friction model. In order to prove the effectiveness of the numerical method, we compare the analytically solved trilinear hysteretic model calibrated from free-vibration field tests (Solarino project) against the same model solved with the Newmark method with Newton-Raphson iteration. Almost perfect agreement is observed between the semi-analytical solution and the fully numerical solution with Newmark's time integration algorithm. This will allow for extension of the trilinear mechanical models to bidirectional horizontal motion, to time-varying vertical loads, to multi-degree-of-freedom systems, as well as to generalized models connected in parallel, where only numerical solutions are possible.
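As a sketch of the numerical side of this comparison, the following Python fragment implements average-acceleration Newmark time stepping with a Newton-Raphson correction at each step. A smooth softening spring stands in for the trilinear hysteretic law, whose branch-switching logic is omitted here; all parameter values are illustrative.

    import numpy as np

    def newmark_newton(m, c, f_int, df_int, p, dt, beta=0.25, gamma=0.5):
        # Average-acceleration Newmark; at each step Newton-Raphson drives
        # the dynamic residual r = m*a + c*v + f_int(u) - p to zero.
        u, v = 0.0, 0.0
        a = (p[0] - c * v - f_int(u)) / m
        out = [u]
        for pi in p[1:]:
            u_new = u  # predictor
            for _ in range(20):
                a_new = (u_new - u - dt * v - dt**2 * (0.5 - beta) * a) / (beta * dt**2)
                v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
                r = m * a_new + c * v_new + f_int(u_new) - pi
                k_eff = m / (beta * dt**2) + gamma * c / (beta * dt) + df_int(u_new)
                du = -r / k_eff
                u_new += du
                if abs(du) < 1e-12:
                    break
            u, v, a = u_new, v_new, a_new
            out.append(u)
        return np.array(out)

    # Softening cubic spring as a smooth stand-in for the trilinear law.
    t = np.linspace(0.0, 10.0, 1001)
    load = 0.5 * np.sin(2 * np.pi * t)
    u = newmark_newton(1.0, 0.05, lambda x: x - 0.1 * x**3,
                       lambda x: 1.0 - 0.3 * x**2, load, t[1] - t[0])
    print(u.min(), u.max())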
Agapiou, A; Zorba, E; Mikedi, K; McGregor, L; Spiliopoulou, C; Statheropoulos, M
2015-07-09
Field experiments were devised to mimic the entrapment conditions under the rubble of collapsed buildings aiming to investigate the evolution of volatile organic compounds (VOCs) during the early dead body decomposition stage. Three pig carcasses were placed inside concrete tunnels of a search and rescue (SAR) operational field terrain for simulating the entrapment environment after a building collapse. The experimental campaign employed both laboratory and on-site analytical methods running in parallel. The current work focuses only on the results of the laboratory method using thermal desorption coupled to comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry (TD-GC×GC-TOF MS). The flow-modulated TD-GC×GC-TOF MS provided enhanced separation of the VOC profile and served as a reference method for the evaluation of the on-site analytical methods in the current experimental campaign. Bespoke software was used to deconvolve the VOC profile to extract as much information as possible into peak lists. In total, 288 unique VOCs were identified (i.e., not found in blank samples). The majority were aliphatics (172), aromatics (25) and nitrogen compounds (19), followed by ketones (17), esters (13), alcohols (12), aldehydes (11), sulfur (9), miscellaneous (8) and acid compounds (2). The TD-GC×GC-TOF MS proved to be a sensitive and powerful system for resolving the chemical puzzle of above-ground "scent of death". Copyright © 2015 Elsevier B.V. All rights reserved.
Microfluidic Platform for Parallel Single Cell Analysis for Diagnostic Applications.
Le Gac, Séverine
2017-01-01
Cell populations are heterogeneous: they can comprise different cell types or even cells at different stages of the cell cycle and/or of biological processes. Furthermore, molecular processes taking place in cells are stochastic in nature. Therefore, cellular analysis must be brought down to the single cell level to get useful insight into biological processes, and to access essential molecular information that would be lost when using a cell population analysis approach. Furthermore, to fully characterize a cell population, ideally, information both at the single cell level and on the whole cell population is required, which calls for analyzing each individual cell in a population in a parallel manner. This single cell level analysis approach is particularly important for diagnostic applications to unravel molecular perturbations at the onset of a disease, to identify biomarkers, and for personalized medicine, not only because of the heterogeneity of the cell sample, but also due to the availability of a reduced amount of cells, or even unique cells. This chapter presents a versatile platform meant for the parallel analysis of individual cells, with a particular focus on diagnostic applications and the analysis of cancer cells. We first describe one essential step of this parallel single cell analysis protocol, which is the trapping of individual cells in dedicated structures. Following this, we report different steps of a whole analytical process, including on-chip cell staining and imaging, cell membrane permeabilization and/or lysis using either chemical or physical means, and retrieval of the cell molecular content in dedicated channels for further analysis. This series of experiments illustrates the versatility of the herein-presented platform and its suitability for various analysis schemes and different analytical purposes.
Fast Numerical Solution of the Plasma Response Matrix for Real-time Ideal MHD Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glasser, Alexander; Kolemen, Egemen; Glasser, Alan H.
To help effectuate near real-time feedback control of ideal MHD instabilities in tokamak geometries, a parallelized version of A.H. Glasser's DCON (Direct Criterion of Newcomb) code is developed. To motivate the numerical implementation, we first solve DCON's δW formulation with a Hamilton-Jacobi theory, elucidating analytical and numerical features of the ideal MHD stability problem. The plasma response matrix is demonstrated to be the solution of an ideal MHD Riccati equation. We then describe our adaptation of DCON with numerical methods natural to solutions of the Riccati equation, parallelizing it to enable its operation in near real-time. We replace DCON's serial integration of perturbed modes, which satisfy a singular Euler-Lagrange equation, with a domain-decomposed integration of state transition matrices. Output is shown to match results from DCON with high accuracy, and with computation time < 1 s. Such computational speed may enable active feedback ideal MHD stability control, especially in plasmas whose ideal MHD equilibria evolve with inductive timescale τ ≳ 1 s, as in ITER. Further potential applications of this theory are discussed.
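The domain-decomposed integration of state transition matrices mentioned above can be sketched in a few lines: each subinterval's transition matrix solves dΦ/dt = A(t)Φ from the identity independently (and hence in parallel), and the full-interval matrix is their ordered product. The 2×2 system below is a toy stand-in, not the DCON Euler-Lagrange system.

    import numpy as np
    from scipy.integrate import solve_ivp

    def stm(A, t0, t1, dim):
        # State transition matrix of dx/dt = A(t) x over [t0, t1]:
        # integrate dPhi/dt = A(t) Phi starting from the identity.
        rhs = lambda t, y: (A(t) @ y.reshape(dim, dim)).ravel()
        sol = solve_ivp(rhs, (t0, t1), np.eye(dim).ravel(), rtol=1e-10, atol=1e-12)
        return sol.y[:, -1].reshape(dim, dim)

    # Toy 2x2 system standing in for the perturbed-mode equations. Each
    # subinterval STM is independent, so they can be integrated in
    # parallel; the full-interval matrix is the ordered product.
    A = lambda t: np.array([[0.0, 1.0], [-(1.0 + 0.5 * np.sin(t)), 0.0]])
    edges = np.linspace(0.0, 4.0, 9)  # 8 subdomains
    stms = [stm(A, a, b, 2) for a, b in zip(edges[:-1], edges[1:])]
    Phi = np.eye(2)
    for M in stms:
        Phi = M @ Phi  # compose left-to-right in time order
    print(Phi)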
Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh
Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality, and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics, and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy-efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole-node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.
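A toy version of such an auto-tuning loop, exploring the three dimensions named above by exhaustive search, is sketched below; the kernel and its knobs are stand-ins, and a real OpenTuner setup would use its search techniques rather than brute force.

    import itertools, time

    def run_kernel(layout, unroll, schedule):
        # Stand-in for a real irregular kernel whose runtime depends on the
        # chosen configuration; returns wall-clock seconds.
        t0 = time.perf_counter()
        step = {"static": 1, "dynamic": 2, "guided": 3}[schedule]
        acc = 0
        for i in range(0, 200000 * unroll, step):
            acc += i % (8 if layout == "soa" else 64)
        return time.perf_counter() - t0

    space = {
        "layout": ["aos", "soa"],                    # memory layout schemes
        "unroll": [1, 2, 4],                         # code-transformation knob
        "schedule": ["static", "dynamic", "guided"], # parallel loop schedule
    }
    best = min(itertools.product(*space.values()), key=lambda cfg: run_kernel(*cfg))
    print("best configuration:", dict(zip(space, best)))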
Numerical Analysis of Dusty-Gas Flows
NASA Astrophysics Data System (ADS)
Saito, T.
2002-02-01
This paper presents the development of a numerical code for simulating unsteady dusty-gas flows including shock and rarefaction waves. The numerical results obtained for a shock tube problem are used for validating the accuracy and performance of the code. The code is then extended for simulating two-dimensional problems. Since the interactions between the gas and particle phases are calculated with the operator splitting technique, we can choose numerical schemes independently for the different phases. A semi-analytical method is developed for the dust phase, while the TVD scheme of Harten and Yee is chosen for the gas phase. Throughout this study, computations were carried out on an SGI Origin2000, a parallel computer with multiple RISC-based processors. The efficient use of the parallel computer system is an important issue, and the code implementation on the Origin2000 is also described. Flow profiles of both the gas and solid particles behind a steady shock wave are calculated by integrating the steady conservation equations. The good agreement between the pseudo-stationary solutions and those from the current numerical code validates the numerical approach and the actual coding. The pseudo-stationary shock profiles can also be used as initial conditions of unsteady multidimensional simulations.
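The operator-splitting structure described above can be illustrated with a toy two-phase model: a drag-exchange substep that is solved exactly (in the spirit of the semi-analytical dust-phase treatment) alternates with a convection substep, for which simple first-order upwinding stands in for the Harten-Yee TVD scheme. All parameters are illustrative.

    import numpy as np

    def drag_substep(ug, up, dt, mg=1.0, mp=0.2, k=2.0):
        # Exact solution of the linear inter-phase drag over one substep:
        # mixture momentum is conserved, slip velocity decays exponentially.
        w = (ug - up) * np.exp(-k * (1.0 / mg + 1.0 / mp) * dt)
        u_cm = (mg * ug + mp * up) / (mg + mp)
        return u_cm + w * mp / (mg + mp), u_cm - w * mg / (mg + mp)

    def advect(u, c, dx, dt):
        # First-order upwind convection, standing in for the Harten-Yee TVD
        # scheme used for the gas phase in the paper.
        return u - c * dt / dx * (u - np.roll(u, 1))

    ug = np.where(np.arange(100) < 50, 1.0, 0.2)  # gas velocity field
    up = np.zeros(100)                            # dust initially at rest
    for _ in range(200):                          # split update: drag, then convection
        ug, up = drag_substep(ug, up, dt=0.005)
        ug = advect(ug, 1.0, 0.01, 0.005)
        up = advect(up, 1.0, 0.01, 0.005)
    print(float(ug.mean()), float(up.mean()))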
NASA Astrophysics Data System (ADS)
Abramov, E. Y.; Sopov, V. I.
2017-10-01
In this study, using the example of a traction network area with highly asymmetric power supply parameters, a comparative assessment of power losses in a DC traction network is carried out for parallel and traditional separated operating modes of traction substation feeders. Experimental measurements were carried out under both modes of operation. Statistical processing of the measurements showed that power losses decrease in the contact network and increase in the feeders. The changes proved to be substantial, which demonstrates the significance of the potential effects of converting traction network areas to parallel feeder operation. An analytical method for calculating the average power losses under different feeding schemes of the traction network was developed. On its basis, dependences of the relative losses on the difference in feeder voltages were obtained. The calculation results showed that a transition to a two-sided feeding scheme is not justified for the considered traction network area. A larger reduction in the total power loss can be obtained with a smaller difference in the feeders' resistance and/or a more symmetrical sectioning scheme of the contact network.
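The effect reported above, that a feeder-voltage difference can erase the benefit of two-sided feeding, already shows up in a textbook circuit estimate (not the paper's statistical method): a circulating current driven by the voltage difference adds losses in both segments.

    def feeder_losses(I, x, L, r, dU):
        # Train drawing current I at position x on a section of length L with
        # per-km resistance r; dU is the substation voltage difference.
        one_sided = r * x * I**2
        Ic = dU / (r * L)              # circulating current, two-sided feed
        I1 = I * (L - x) / L + Ic      # segment fed from end 1
        I2 = I * x / L - Ic            # segment fed from end 2
        two_sided = r * x * I1**2 + r * (L - x) * I2**2
        return one_sided, two_sided

    for dU in (0.0, 50.0, 200.0):      # volts
        print(dU, feeder_losses(I=1000.0, x=6.0, L=10.0, r=0.03, dU=dU))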
Fast Numerical Solution of the Plasma Response Matrix for Real-time Ideal MHD Control
Glasser, Alexander; Kolemen, Egemen; Glasser, Alan H.
2018-03-26
To help effectuate near real-time feedback control of ideal MHD instabilities in tokamak geometries, a parallelized version of A.H. Glasser's DCON (Direct Criterion of Newcomb) code is developed. To motivate the numerical implementation, we first solve DCON's δW formulation with a Hamilton-Jacobi theory, elucidating analytical and numerical features of the ideal MHD stability problem. The plasma response matrix is demonstrated to be the solution of an ideal MHD Riccati equation. We then describe our adaptation of DCON with numerical methods natural to solutions of the Riccati equation, parallelizing it to enable its operation in near real-time. We replace DCON's serial integration of perturbed modes, which satisfy a singular Euler-Lagrange equation, with a domain-decomposed integration of state transition matrices. Output is shown to match results from DCON with high accuracy, and with computation time < 1 s. Such computational speed may enable active feedback ideal MHD stability control, especially in plasmas whose ideal MHD equilibria evolve with inductive timescale τ ≳ 1 s, as in ITER. Further potential applications of this theory are discussed.
Radiative instabilities in sheared magnetic field
NASA Technical Reports Server (NTRS)
Drake, J. F.; Sparks, L.; Van Hoven, G.
1988-01-01
The structure and growth rate of the radiative instability in a sheared magnetic field B have been calculated analytically using the Braginskii fluid equations. In a shear layer, temperature and density perturbations are linked by the propagation of sound waves parallel to the local magnetic field. As a consequence, density clumping or condensation plays an important role in driving the instability. Parallel thermal conduction localizes the mode to a narrow layer where k∥ is small and stabilizes short wavelengths k > k_c, where k_c depends on the local radiation and conduction rates. Thermal coupling to ions also limits the width of the unstable spectrum. It is shown that a broad spectrum of modes is typically unstable in tokamak edge plasmas and it is argued that this instability is sufficiently robust to drive the large-amplitude density fluctuations often measured there.
Coloc-stats: a unified web interface to perform colocalization analysis of genomic features.
Simovski, Boris; Kanduri, Chakravarthi; Gundersen, Sveinung; Titov, Dmytro; Domanska, Diana; Bock, Christoph; Bossini-Castillo, Lara; Chikina, Maria; Favorov, Alexander; Layer, Ryan M; Mironov, Andrey A; Quinlan, Aaron R; Sheffield, Nathan C; Trynka, Gosia; Sandve, Geir K
2018-06-05
Functional genomics assays produce sets of genomic regions as one of their main outputs. To biologically interpret such region-sets, researchers often use colocalization analysis, where the statistical significance of colocalization (overlap, spatial proximity) between two or more region-sets is tested. Existing colocalization analysis tools vary in the statistical methodology and analysis approaches, thus potentially providing different conclusions for the same research question. As the findings of colocalization analysis are often the basis for follow-up experiments, it is helpful to use several tools in parallel and to compare the results. We developed the Coloc-stats web service to facilitate such analyses. Coloc-stats provides a unified interface to perform colocalization analysis across various analytical methods and method-specific options (e.g. colocalization measures, resolution, null models). Coloc-stats helps the user to find a method that supports their experimental requirements and allows for a straightforward comparison across methods. Coloc-stats is implemented as a web server with a graphical user interface that assists users with configuring their colocalization analyses. Coloc-stats is freely available at https://hyperbrowser.uio.no/coloc-stats/.
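A minimal sketch of one such colocalization test, assuming base-pair overlap as the colocalization measure and random relocation of length-preserved intervals as the null model (only one of the measure/null-model combinations a tool like Coloc-stats exposes):

    import random

    def overlap_bp(a, b):
        # Total base pairs of overlap between two start-sorted interval lists.
        total, j = 0, 0
        for s, e in a:
            while j < len(b) and b[j][1] <= s:
                j += 1
            k = j
            while k < len(b) and b[k][0] < e:
                total += min(e, b[k][1]) - max(s, b[k][0])
                k += 1
        return total

    def permutation_test(a, b, genome_len, n_perm=1000, seed=0):
        # Null model: relocate each interval of `a` uniformly at random,
        # preserving its length; p-value from the permutation distribution.
        rng = random.Random(seed)
        a, b = sorted(a), sorted(b)
        obs = overlap_bp(a, b)
        hits = 0
        for _ in range(n_perm):
            shuf = sorted((p, p + e - s) for s, e in a
                          for p in [rng.randrange(genome_len - (e - s))])
            hits += overlap_bp(shuf, b) >= obs
        return obs, (hits + 1) / (n_perm + 1)

    print(permutation_test([(100, 200), (500, 650)], [(150, 220), (600, 700)], 10000))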
Integrated electronics for time-resolved array of single-photon avalanche diodes
NASA Astrophysics Data System (ADS)
Acconcia, G.; Crotti, M.; Rech, I.; Ghioni, M.
2013-12-01
The Time-Correlated Single Photon Counting (TCSPC) technique has reached a prominent position among the analytical methods employed in a great variety of fields, from medicine and biology (fluorescence spectroscopy) to telemetry (laser ranging) and communication (quantum cryptography). Nevertheless, the development of TCSPC acquisition systems featuring both a high number of parallel channels and very high performance is still an open challenge: to satisfy the tight requirements set by the applications, a fully parallel acquisition system requires not only high-efficiency single-photon detectors but also read-out electronics specifically designed to obtain the highest performance in conjunction with these sensors. To this aim, three main blocks have been designed: a gigahertz-bandwidth front-end stage that directly reads the avalanche current of the custom-technology SPAD array, a reconfigurable logic to route the detector output signals to the acquisition chain, and an array of time-measurement circuits capable of recording photon arrival times with picosecond time resolution and very high linearity. An innovative architecture based on these three circuits combines a very high number of detectors, to perform truly parallel spatial or spectral analysis, with a smaller number of high-performance time-to-amplitude converters offering a very high conversion frequency while limiting area occupation and power dissipation. The routing logic makes the dynamic connection between the two arrays possible in order to guarantee that no information gets lost.
Soliton interactions and complexes for coupled nonlinear Schrödinger equations.
Jiang, Yan; Tian, Bo; Liu, Wen-Jun; Sun, Kun; Li, Min; Wang, Pan
2012-03-01
Under investigation in this paper are the coupled nonlinear Schrödinger (CNLS) equations, which can be used to govern the optical-soliton propagation and interaction in such optical media as the multimode fibers, fiber arrays, and birefringent fibers. By taking the 3-CNLS equations as an example for the N-CNLS ones (N≥3), we derive the analytic mixed-type two- and three-soliton solutions in more general forms than those obtained in the previous studies with the Hirota method and symbolic computation. With the choice of parameters for those soliton solutions, soliton interactions and complexes are investigated through the asymptotic and graphic analysis. Soliton interactions and complexes with the bound dark solitons in a mode or two modes are observed, including that (i) the two bright solitons display the breatherlike structures while the two dark ones stay parallel, (ii) the two bright and dark solitons all stay parallel, and (iii) the states of the bound solitons change from the breatherlike structures to the parallel one even with the distance between those solitons smaller than that before the interaction with the regular one soliton. Asymptotic analysis is also used to investigate the elastic and inelastic interactions between the bound solitons and the regular one soliton. Furthermore, some discussions are extended to the N-CNLS equations (N>3). Our results might be helpful in such applications as the soliton switch, optical computing, and soliton amplification in the nonlinear optics.
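For reference, a commonly used normalized form of the N-CNLS equations studied in such work (the exact normalization and the focusing/defocusing signs in the paper may differ) is

    i\,\frac{\partial q_j}{\partial z} + \frac{\partial^2 q_j}{\partial t^2} + 2\left(\sum_{k=1}^{N} |q_k|^2\right) q_j = 0, \qquad j = 1, \dots, N,

with N = 3 for the 3-CNLS case; the mixed-type solutions combine bright and dark components among the q_j.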
CHOLLA: A New Massively Parallel Hydrodynamics Code for Astrophysical Simulation
NASA Astrophysics Data System (ADS)
Schneider, Evan E.; Robertson, Brant E.
2015-04-01
We present Computational Hydrodynamics On ParaLLel Architectures (Cholla), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over 10 million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (≳256³) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud is highly dependent on its density structure. We reconcile the computed mixing time of a turbulent cloud with a realistic density distribution destroyed by a strong shock with the existing analytic theory for spherical cloud destruction by describing the system in terms of its median gas density.
Bernstein, Joseph; Kupperman, Eli; Kandel, Leonid Ari; Ahn, Jaimo
2016-07-01
Through shared decision making, the physician and patient exchange information to arrive at an agreement about the patient's preferred treatment. This process is predicated on the assumption that there is a single preferred treatment, and the goal of the dialog is to discover it. In contrast, psychology theory (ie, prospect theory) suggests that people can make decisions both analytically and intuitively through parallel decision-making processes, and depending on how the choice is framed, the two processes may not agree. Thus, patients may not have a single preferred treatment, but rather separate intuitive and analytic preferences. The research question addressed here is whether subjects might reveal different therapeutic preferences based on how a decision is framed. Five clinical scenarios on the management of tibial plateau fractures were constructed. Healthy volunteers were asked to select among treatments offered. Four weeks later, the scenarios were presented again; the facts of the scenario were unchanged, but the description was altered to test the null hypothesis that minor changes in wording would not lead the subjects to change their decision about treatment. For example, incomplete improvement after surgery was described first as a gain from the preoperative state and then as a loss from the preinjury state. In all five cases, the variation predicted by psychology theory was detected. Respondents were affected by whether choices were framed as avoided losses versus potential gains; by emotional cues; by choices reported by others (ie, bandwagon effect); by the answers proposed to them in the question (ie, anchors); and by seemingly irrelevant options (ie, decoys). The influence of presentation on preferences can be highly significant in orthopaedic surgery. The presence of parallel decision-making processes implies that the standard methods of obtaining informed consent may require further refinement. Furthermore, if the way that information is portrayed makes surgery more or less appealing, the use of services may be subject to unwanted influence. If surgery were accepted preoperatively by the patient's intuitive process but evaluated after the fact by the analytic process (or vice versa), well-indicated and well-performed surgery may still fail to provide patient satisfaction.
Myra, James R.; D'Ippolito, Daniel A.; Russell, David A.; ...
2016-04-11
Sheared flows perpendicular to the magnetic field can be driven by the Reynolds stress or ion pressure gradient effects and can potentially influence the stability and turbulent saturation level of edge plasma modes. On the other hand, such flows are subject to the transverse Kelvin-Helmholtz (KH) instability. Here, the linear theory of KH instabilities is first addressed with an analytic model in the asymptotic limit of long wavelengths compared with the flow scale length. The analytic model treats sheared ExB flows, ion diamagnetism (including gyro-viscous terms), density gradients and parallel currents in a slab geometry, enabling a unified summary that encompasses and extends previous results. In particular, while ion diamagnetism, density gradients and parallel currents each individually reduce KH growth rates, the combined effect of density and ion pressure gradients is more complicated and partially counteracting. Secondly, the important role of realistic toroidal geometry is explored numerically using an invariant scaling analysis together with the 2DX eigenvalue code to examine KH modes in both closed and open field line regions. For a typical spherical torus magnetic geometry, it is found that KH modes are more unstable at and just outside the separatrix as a result of the distribution of magnetic shear. Lastly, implications for reduced edge turbulence modeling codes are discussed.
Discontinuous Galerkin Finite Element Method for Parabolic Problems
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.
2004-01-01
In this paper, we develop a time and its corresponding spatial discretization scheme, based upon the assumption of a certain weak singularity of $\|u_t(t)\|_{L_2(\Omega)} = \|u_t\|_2$, for the discontinuous Galerkin finite element method for one-dimensional parabolic problems. Optimal convergence rates in both time and spatial variables are obtained. A discussion of an automatic time-step control method is also included.
Wu, Xiao-Lin; Sun, Chuanyu; Beissinger, Timothy M; Rosa, Guilherme Jm; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel
2012-09-25
Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from serial computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Parallel Markov chain Monte Carlo algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that the use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs.
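A minimal sketch of the multiple-chains approach, assuming a trivial standard-normal target in place of the genomic model, with the chains run in parallel processes and convergence summarized by the Gelman-Rubin statistic:

    import numpy as np
    from multiprocessing import Pool

    def chain(args):
        # One random-walk Metropolis chain; a standard-normal target stands
        # in for the (intractable) genomic model.
        seed, n = args
        rng = np.random.default_rng(seed)
        logp = lambda v: -0.5 * v * v
        x, out = 0.0, np.empty(n)
        for i in range(n):
            prop = x + rng.normal()
            if np.log(rng.random()) < logp(prop) - logp(x):
                x = prop
            out[i] = x
        return out

    def gelman_rubin(chains):
        # Potential scale reduction factor across the chains.
        m, n = len(chains), len(chains[0])
        means = np.array([c.mean() for c in chains])
        W = np.mean([c.var(ddof=1) for c in chains])
        B = n * means.var(ddof=1)
        return np.sqrt(((n - 1) / n * W + B / n) / W)

    if __name__ == "__main__":
        with Pool(4) as pool:  # the multiple-chains approach
            chains = pool.map(chain, [(s, 5000) for s in range(4)])
        print("R-hat:", gelman_rubin(chains))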
2012-01-01
Background Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from serial computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Results Parallel Markov chain Monte Carlo algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Conclusions Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that the use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs. PMID:23009363
Jackin, Boaz Jessie; Watanabe, Shinpei; Ootsu, Kanemitsu; Ohkawa, Takeshi; Yokota, Takashi; Hayasaki, Yoshio; Yatagai, Toyohiko; Baba, Takanobu
2018-04-20
A parallel computation method for large-size Fresnel computer-generated hologram (CGH) is reported. The method was introduced by us in an earlier report as a technique for calculating Fourier CGH from 2D object data. In this paper we extend the method to compute Fresnel CGH from 3D object data. The scale of the computation problem is also expanded to 2 gigapixels, making it closer to real application requirements. The significant feature of the reported method is its ability to avoid communication overhead and thereby fully utilize the computing power of parallel devices. The method exhibits three layers of parallelism that favor small to large scale parallel computing machines. Simulation and optical experiments were conducted to demonstrate the workability and to evaluate the efficiency of the proposed technique. A two-times improvement in computation speed has been achieved compared to the conventional method, on a 16-node cluster (one GPU per node) utilizing only one layer of parallelism. A 20-times improvement in computation speed has been estimated utilizing two layers of parallelism on a very large-scale parallel machine with 16 nodes, where each node has 16 GPUs.
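The core computation can be sketched as a direct point-source accumulation of Fresnel phases over hologram pixels; each row (or tile) is independent, which is the kind of parallelism layer such a method can exploit. The wavelength, pitch, object points, and the toy 512 × 512 size are assumptions, far below the 2-gigapixel scale of the paper.

    import numpy as np

    wl, pitch, N = 532e-9, 8e-6, 512  # wavelength, pixel pitch, size (assumed)
    k = 2.0 * np.pi / wl
    ys, xs = np.mgrid[:N, :N] * pitch
    # Object points (x, y, z, amplitude); a real object has many more points.
    points = [(1.0e-3, 1.2e-3, 0.10, 1.0), (2.0e-3, 0.8e-3, 0.12, 0.7)]
    field = np.zeros((N, N), dtype=complex)
    for px, py, pz, amp in points:
        r2 = (xs - px)**2 + (ys - py)**2
        # Fresnel approximation of the spherical wave from the point source.
        field += amp * np.exp(1j * k * (pz + r2 / (2.0 * pz)))
    hologram = np.angle(field)        # phase-only CGH
    print(hologram.shape)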
Reveal Listeria 2.0 test for detection of Listeria spp. in foods and environmental samples.
Alles, Susan; Curry, Stephanie; Almy, David; Jagadeesan, Balamurugan; Rice, Jennifer; Mozola, Mark
2012-01-01
A Performance Tested Method validation study was conducted for a new lateral flow immunoassay (Reveal Listeria 2.0) for detection of Listeria spp. in foods and environmental samples. Results of inclusivity testing showed that the test detects all species of Listeria, with the exception of L. grayi. In exclusivity testing conducted under nonselective growth conditions, all non-listeriae tested produced negative Reveal assay results, except for three strains of Lactobacillus spp. However, these lactobacilli are inhibited by the selective Listeria Enrichment Single Step broth enrichment medium used with the Reveal method. Six foods were tested in parallel by the Reveal method and the U.S. Food and Drug Administration/Bacteriological Analytical Manual (FDA/BAM) reference culture procedure. Considering data from both internal and independent laboratory trials, overall sensitivity of the Reveal method relative to that of the FDA/BAM procedure was 101%. Four foods were tested in parallel by the Reveal method and the U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS) reference culture procedure. Overall sensitivity of the Reveal method relative to that of the USDA-FSIS procedure was 98.2%. There were no statistically significant differences in the number of positives obtained by the Reveal and reference culture procedures in any food trials. In testing of swab or sponge samples from four types of environmental surfaces, sensitivity of Reveal relative to that of the USDA-FSIS reference culture procedure was 127%. For two surface types, differences in the number of positives obtained by the Reveal and reference methods were statistically significant, with more positives by the Reveal method in both cases. Specificity of the Reveal assay was 100%, as there were no unconfirmed positive results obtained in any phase of the testing. Results of ruggedness experiments showed that the Reveal assay is tolerant of modest deviations in test sample volume and device incubation time.
Epilepsy analytic system with cloud computing.
Shen, Chia-Ping; Zhou, Weizhi; Lin, Feng-Seng; Sung, Hsiao-Ya; Lam, Yan-Yu; Chen, Wei; Lin, Jeng-Wei; Pan, Ming-Kai; Chiu, Ming-Jang; Lai, Feipei
2013-01-01
Biomedical data analytic systems have played an important role in clinical diagnosis for several decades. Analyzing such big data to support physicians' decision making is today an emerging research area. This paper presents a parallelized web-based tool with a cloud computing service architecture to analyze epilepsy. Several modern analytic functions, namely the wavelet transform, a genetic algorithm (GA), and a support vector machine (SVM), are cascaded in the system. To demonstrate the effectiveness of the system, it has been verified on two kinds of electroencephalography (EEG) data: short-term EEG and long-term EEG. The results reveal that our approach achieves a total classification accuracy higher than 90%. In addition, the entire training time is accelerated by a factor of about 4.66, and the prediction time also meets real-time requirements.
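A condensed sketch of the wavelet-plus-SVM part of such a pipeline on synthetic signals (the GA-based feature selection stage and the cloud parallelization are omitted; PyWavelets and scikit-learn are assumed to be available):

    import numpy as np
    import pywt
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def features(sig):
        # Log wavelet-band energies (db4, 4 levels) as EEG features.
        return [np.log1p(np.sum(c**2)) for c in pywt.wavedec(sig, "db4", level=4)]

    rng = np.random.default_rng(0)
    t = np.arange(512) / 256.0
    X, y = [], []
    for label in (0, 1):
        for _ in range(40):
            sig = rng.normal(size=512)
            if label:
                sig += 2.0 * np.sin(2.0 * np.pi * 3.0 * t)  # crude rhythmic stand-in
            X.append(features(sig))
            y.append(label)
    print("CV accuracy:", cross_val_score(SVC(), np.array(X), y, cv=5).mean())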
NASA Technical Reports Server (NTRS)
DeChant, Lawrence Justin
1998-01-01
In spite of rapid advances in both scalar and parallel computational tools, the large number of variables involved in both design and inverse problems makes the use of sophisticated fluid flow models impractical. With this restriction, it is concluded that an important family of methods for mathematical/computational development is reduced or approximate fluid flow models. In this study a combined perturbation/numerical modeling methodology is developed which provides a rigorously derived family of solutions. The mathematical model is computationally more efficient than classical boundary layer approaches but provides important two-dimensional information not available using quasi-1-d approaches. An additional strength of the current methodology is its ability to locally predict static pressure fields in a manner analogous to more sophisticated parabolized Navier-Stokes (PNS) formulations. To resolve singular behavior, the model utilizes classical analytical solution techniques. Hence, analytical methods have been combined with efficient numerical methods to yield an efficient hybrid fluid flow model. In particular, the main objective of this research has been to develop a system of analytical and numerical ejector/mixer nozzle models which require minimal empirical input. A computer code, DREA (Differential Reduced Ejector/mixer Analysis), has been developed with the ability to run sufficiently fast that it may be used either as a subroutine or called by a design optimization routine. The models are of direct use to the High Speed Civil Transport Program (a joint government/industry project seeking to develop an economically viable U.S. commercial supersonic transport vehicle) and are currently being adopted by both NASA and industry. Experimental validation of these models is provided by comparison to results obtained from the open literature and Limited Exclusive Right Distribution (LERD) sources, as well as dedicated experiments performed at Texas A&M. These experiments have been performed using a hydraulic/gas flow analog. Comparisons of DREA computations with experimental data, which include entrainment, thrust, and local profile information, are overall good. Computational time studies indicate that DREA provides considerably more information at a lower computational cost than contemporary ejector nozzle design models. Finally, physical limitations of the method, deviations from experimental data, potential improvements, and alternative formulations are described. This report represents closure to the NASA Graduate Researchers Program. Versions of the DREA code and a user's guide may be obtained from the NASA Lewis Research Center.
Locatelli, Marcello; Kabir, Abuzar; Innosa, Denise; Lopatriello, Teresa; Furton, Kenneth G
2017-01-01
This paper reports a novel fabric phase sorptive extraction-high performance liquid chromatography-photodiode array detection (FPSE-HPLC-PDA) method for the simultaneous extraction and analysis of twelve azole antimicrobial drug residues, including ketoconazole, terconazole, voriconazole, bifonazole, clotrimazole, tioconazole, econazole, butoconazole, miconazole, posaconazole, ravuconazole, and itraconazole, in human plasma and urine samples. The selected azole antimicrobial drugs were well resolved using a Luna C18 column (250 mm × 4.6 mm; 5 μm particle size) in gradient elution mode within 36 min. The analytical method was calibrated and validated in the range from 0.1 to 8 μg/mL for all the drug compounds. Blank human plasma and urine were used as the sample matrix for the analysis, while benzyl 4-hydroxybenzoate was used as the internal standard (IS). The limit of quantification of the FPSE-HPLC-PDA method was found to be 0.1 μg/mL, and the weighted matrix-matched standard calibration curves of the drugs showed good linearity up to a concentration of 8 μg/mL. Parallelism tests were also performed to evaluate whether over-range samples can be analyzed after dilution without compromising the analytical performance of the validated method. The intra- and inter-day precision (RSD%) values were found to be ≤13.1% and ≤13.9%, respectively. The intra- and inter-day trueness (bias%) values were found in the range from -12.1% to 10.5%. The performance of the validated FPSE-HPLC-PDA method was further tested on real samples collected from healthy volunteers after a single-dose administration of itraconazole and miconazole. To the best of our knowledge, this is the first FPSE extraction procedure applied to plasma and urine samples for the simultaneous determination of twelve azole drugs possessing a wide range of log K_ow values (extending from 0.4 for fluconazole to 6.70 for butoconazole), and it could be adopted as a rapid and robust green analytical tool for clinical and pharmaceutical applications. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Averkin, Sergey N.; Gatsonis, Nikolaos A.
2018-06-01
An unstructured electrostatic Particle-In-Cell (EUPIC) method is developed on arbitrary tetrahedral grids for simulation of plasmas bounded by arbitrary geometries. The electric potential in EUPIC is obtained on cell vertices from a finite volume Multi-Point Flux Approximation of Gauss' law using the indirect dual cell with Dirichlet, Neumann and external circuit boundary conditions. The resulting matrix equation for the nodal potential is solved with a restarted generalized minimal residual method (GMRES) and an ILU(0) preconditioner algorithm, parallelized using a combination of node coloring and level scheduling approaches. The electric field on vertices is obtained using the gradient theorem applied to the indirect dual cell. The algorithms for injection, particle loading, particle motion, and particle tracking are parallelized for unstructured tetrahedral grids. The algorithms for the potential solver, electric field evaluation, loading, and scatter-gather operations are verified using analytic solutions for test cases subject to the Laplace and Poisson equations. A grid sensitivity analysis examines the L2 and L∞ norms of the relative error in potential, field, and charge density as a function of edge-averaged and volume-averaged cell size. The analysis shows second-order convergence for the potential and first-order convergence for the electric field and charge density. A temporal sensitivity analysis is performed, and the momentum and energy conservation properties of the particle integrators in EUPIC are examined. The effects of cell size and timestep on the heating, slowing-down and deflection times are quantified. The heating, slowing-down and deflection times are found to be almost linearly dependent on the number of particles per cell. EUPIC simulations of current collection by cylindrical Langmuir probes in collisionless plasmas show good agreement with previous experimentally validated numerical results. These simulations were also used in a parallelization efficiency investigation, which shows that EUPIC attains an efficiency of more than 80% when the simulation is performed on a single CPU of a non-uniform memory access node, with the efficiency decreasing as the number of threads increases further. EUPIC is applied to the simulation of multi-species plasma flow over a geometrically complex CubeSat in Low Earth Orbit. The computed potential and flowfield distributions around the CubeSat exhibit features consistent with previous simulations over simpler geometrical bodies.
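The grid-sensitivity conclusion above rests on a standard calculation: the observed order of convergence is the slope of log(error) against log(cell size). A minimal sketch with made-up error data:

    import numpy as np

    def observed_order(h, err):
        # Slope of log(error) vs log(cell size); ~2 means second order.
        return np.polyfit(np.log(h), np.log(err), 1)[0]

    h = np.array([0.2, 0.1, 0.05, 0.025])    # edge-averaged cell sizes
    err_potential = 0.5 * h**2                # made-up L2 errors
    err_field = 0.8 * h
    print(observed_order(h, err_potential))   # ~2.0
    print(observed_order(h, err_field))       # ~1.0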
NASA Astrophysics Data System (ADS)
Stock, Joachim W.; Kitzmann, Daniel; Patzer, A. Beate C.; Sedlmayr, Erwin
2018-06-01
For the calculation of complex neutral/ionized gas phase chemical equilibria, we present a semi-analytical, versatile and efficient computer program called FastChem. The applied method is based on the solution of a system of coupled nonlinear (and linear) algebraic equations in many variables, namely the law of mass action and the element conservation equations including charge balance. Specifically, the system of equations is decomposed into a set of coupled nonlinear equations in one variable each, which are solved analytically whenever feasible to reduce computation time. Notably, the electron density is determined by using the method of Nelder and Mead at low temperatures. The program is written in object-oriented C++, which makes it easy to couple the code with other programs, although a stand-alone version is provided. FastChem can be used in parallel or sequentially and is available under the GNU General Public License version 3 at https://github.com/exoclime/FastChem together with several sample applications. The code has been successfully validated against previous studies and its convergence behavior has been tested even for extreme physical parameter ranges down to 100 K and up to 1000 bar. FastChem converges stably and robustly in even the most demanding chemical situations, which have sometimes posed extreme challenges for previous algorithms.
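The per-element analytic reduction such a scheme favors can be illustrated with the simplest case, an H/H2 system, where the law of mass action plus element conservation collapse to one quadratic (the equilibrium-constant values below are illustrative, not from the code):

    import numpy as np

    def h_h2_equilibrium(n_elem, K):
        # Mass action n_H**2 / n_H2 = K and conservation n_H + 2*n_H2 = n_elem
        # collapse to one quadratic in n_H, solved analytically:
        #   (2/K) * n_H**2 + n_H - n_elem = 0
        a, b, c = 2.0 / K, 1.0, -n_elem
        n_H = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
        return n_H, n_H**2 / K

    for K in (1e-8, 1.0, 1e8):               # illustrative equilibrium constants
        n_H, n_H2 = h_h2_equilibrium(1.0e12, K)
        print(K, n_H, n_H2, n_H + 2 * n_H2)  # last column checks conservation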
Parallelization of the FLAPW method
NASA Astrophysics Data System (ADS)
Canning, A.; Mannstadt, W.; Freeman, A. J.
2000-08-01
The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining structural, electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work, we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel supercomputer.
A Feedback Model for Data-Rich Learning Experiences
ERIC Educational Resources Information Center
Pardo, Abelardo
2018-01-01
Feedback has been identified as one of the factors with the largest potential for a positive impact in a learning experience. There is a significant body of knowledge studying feedback and providing guidelines for its implementation in learning environments. In parallel, the areas of learning analytics or educational data mining have emerged to…
Parallel mass spectrometry (APCI-MS and ESI-MS) for lipid analysis
USDA-ARS?s Scientific Manuscript database
Coupling the condensed phase of HPLC with the high vacuum necessary for ion analysis in a mass spectrometer requires quickly evaporating large amounts of liquid mobile phase to release analyte molecules into the gas phase, along with ionization of those molecules, so they can be detected by the mass...
ERIC Educational Resources Information Center
McIlvane, William J.
2009-01-01
Throughout its history, laboratory research in the experimental analysis of behavior has been successful in elucidating and clarifying basic learning principles and processes in both humans and nonhumans. In parallel, applied behavior analysis has shown how fundamental behavior-analytic principles and procedures can be employed to promote…
CSM parallel structural methods research
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.
1989-01-01
Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.
Reaction Force of Micro-scale Liquid Droplets Constrained Between Parallel Plates through CFD
NASA Astrophysics Data System (ADS)
Free, Robert; Hekiri, Haider; Hawa, Takumi
2012-02-01
Micro-scale liquid droplets responding to depression between parallel plates are investigated analytically and numerically. The functional dependence of the reaction force accrued in such droplets on droplet size, surface tension, depression amount, and contact angle is explored. For both the 2D and 3D case, an analytical model is developed based on first principles. Computational fluid dynamics is then utilized to evaluate the validity of these models. The reaction force is highly nonlinear, initially increasing very slowly with increasing depression of the droplet, but eventually moving asymptotically to infinity. The force scales linearly with both the droplet free radius and surface tension of the liquid, but has a much more complicated dependence on the contact angle and depression. Explicit expressions for the reaction force have been determined, showing these dependencies. The 3D model has been largely supported by the CFD results. It very accurately predicts the reaction force on the upper plate as the droplet is crushed, accounting for the effect of contact angle, surface tension, and droplet size.
Kockmann, Tobias; Trachsel, Christian; Panse, Christian; Wahlander, Asa; Selevsek, Nathalie; Grossmann, Jonas; Wolski, Witold E; Schlapbach, Ralph
2016-08-01
Quantitative mass spectrometry is a rapidly evolving methodology applied in a large number of omics-type research projects. During the past years, new designs of mass spectrometers have been developed and launched as commercial systems, while in parallel new data acquisition schemes and data analysis paradigms have been introduced. Core facilities provide access to such technologies, but also actively support researchers in finding and applying the best-suited analytical approach. In order to implement a solid fundament for this decision-making process, core facilities need to constantly compare and benchmark the various approaches. In this article we compare the quantitative accuracy and precision of the current state-of-the-art targeted proteomics approaches: single reaction monitoring (SRM), parallel reaction monitoring (PRM) and data independent acquisition (DIA), across multiple liquid chromatography mass spectrometry (LC-MS) platforms, using a readily available commercial standard sample. All workflows are able to reproducibly generate accurate quantitative data. However, SRM and PRM workflows show higher accuracy and precision compared to DIA approaches, especially when analyzing low concentrated analytes. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Bassi, Gabriele; Blednykh, Alexei; Smalyuk, Victor
2016-02-24
A novel algorithm for self-consistent simulations of long-range wakefield effects has been developed and applied to the study of both longitudinal and transverse coupled-bunch instabilities at NSLS-II. The algorithm is implemented in the new parallel tracking code SPACE (self-consistent parallel algorithm for collective effects) discussed in the paper. The code is applicable for accurate beam dynamics simulations in cases where both bunch-to-bunch and intrabunch motions need to be taken into account, such as chromatic head-tail effects on the coupled-bunch instability of a beam with a nonuniform filling pattern, or multibunch and single-bunch effects of a passive higher-harmonic cavity. The numerical simulations have been compared with analytical studies. For a beam with an arbitrary filling pattern, intensity-dependent complex frequency shifts have been derived starting from a system of coupled Vlasov equations. The analytical formulas and numerical simulations confirm that the analysis reduces to the formulation of an eigenvalue problem based on the known formulas of the complex frequency shifts for the uniform filling pattern case.
NASA Astrophysics Data System (ADS)
Ward, Thomas
2017-11-01
The radial squeezing and de-wetting of a thin film of viscous shear-thinning fluid filling the gap between parallel plane walls is examined both experimentally and theoretically for gap spacings much smaller than the capillary length. The interaction between the motion of fluid in the gap driven by squeezing or de-wetting and surface tension is parameterized by a dimensionless variable, F, which is the ratio of the constant force supplied by the top plate (either positive or negative) to the surface tension at the drop's circumference. Furthermore, the dimensionless form of the rate equation for the gap's motion reveals a time scale that depends on the drop volume when analyzed for a power-law shear-thinning fluid. In the de-wetting problem the analytical solution reveals the formation of a singularity, leading to capillary adhesion, as the gap spacing approaches a critical value that depends on F and the contact angle. Experiments are performed to test the analytical predictions for both squeezing and de-wetting in the vicinity of the singularity.
NASA Astrophysics Data System (ADS)
Liu, Lei; Wang, Xu
2017-12-01
Three-dimensional analytical solutions are derived for the structural instability of a parallel array of mutually attracting identical simply supported orthotropic piezoelectric rectangular microplates by means of a linear perturbation analysis. The two surfaces of each plate can be either insulating or conducting. By considering the fact that the shear stresses and the normal electric displacement (or electric potential) are zero on the two surfaces of each plate, a 2 × 2 transfer matrix for a plate can be obtained directly from the 8 × 8 fundamental piezoelectricity matrix without resolving the original Stroh eigenrelation. The critical interaction coefficient can be determined by solving the resulting generalized eigenvalue problem for the piezoelectric plate array. Also considered in our analysis is the in-plane uniform edge compression acting on the four sides of each piezoelectric plate. Our results indicate that the stabilizing influence of the piezoelectric effect on the structural instability cannot be ignored, and that the edge compression always plays a destabilizing role in the structural instability of the plate array with interactions.
NASA Technical Reports Server (NTRS)
Halford, Gary R.
1993-01-01
The evolution of high-temperature, creep-fatigue, life-prediction methods used for cyclic crack initiation is traced from their inception in the late 1940s. The methods reviewed are material models as opposed to structural life prediction models. Material life models are used both by structural durability analysts and by material scientists. The latter use micromechanistic models as guidance to improve a material's crack initiation resistance. Nearly one hundred approaches and their variations have been proposed to date. This proliferation poses a problem in deciding which method is most appropriate for a given application. Approaches were identified as being combinations of thirteen different classifications. This review is intended to aid both developers and users of high-temperature fatigue life prediction methods by providing a background from which choices can be made. The need for high-temperature fatigue-life prediction methods followed immediately on the heels of the development of large, costly, high-technology industrial and aerospace equipment after the second world war. Major advances were made in the design and manufacture of high-temperature, high-pressure boilers and steam turbines, nuclear reactors, high-temperature forming dies, high-performance poppet valves, aeronautical gas turbine engines, reusable rocket engines, etc. These advances could no longer be accomplished simply by trial and error using the 'build-em and bust-em' approach. Development lead times were too great and costs too prohibitive to retain such an approach. Analytic assessments of anticipated performance, cost, and durability were introduced to cut costs and shorten lead times. The analytic tools were quite primitive at first and, out of necessity, evolved in parallel with hardware development. After forty years, more descriptive, more accurate, and more efficient analytic tools are being developed. These include thermal-structural finite element and boundary element analyses, advanced constitutive stress-strain-temperature-time relations, and creep-fatigue-environmental models for crack initiation and propagation. The high-temperature durability methods that have evolved for calculating high-temperature fatigue crack initiation lives of structural engineering materials are addressed. Only a few of the methods were refined to the point of being directly usable in design. Recently, two of the methods were transcribed into computer software for use with personal computers.
NASA Astrophysics Data System (ADS)
Halford, Gary R.
1993-10-01
The evolution of high-temperature, creep-fatigue, life-prediction methods used for cyclic crack initiation is traced from their inception in the late 1940s. The methods reviewed are material models as opposed to structural life prediction models. Material life models are used both by structural durability analysts and by material scientists. The latter use micromechanistic models as guidance to improve a material's crack initiation resistance. Nearly one hundred approaches and their variations have been proposed to date. This proliferation poses a problem in deciding which method is most appropriate for a given application. Approaches were identified as being combinations of thirteen different classifications. This review is intended to aid both developers and users of high-temperature fatigue life prediction methods by providing a background from which choices can be made. The need for high-temperature fatigue-life prediction methods followed immediately on the heels of the development of large, costly, high-technology industrial and aerospace equipment after the second world war. Major advances were made in the design and manufacture of high-temperature, high-pressure boilers and steam turbines, nuclear reactors, high-temperature forming dies, high-performance poppet valves, aeronautical gas turbine engines, reusable rocket engines, etc. These advances could no longer be accomplished simply by trial and error using the 'build-em and bust-em' approach. Development lead times were too great and costs too prohibitive to retain such an approach. Analytic assessments of anticipated performance, cost, and durability were introduced to cut costs and shorten lead times. The analytic tools were quite primitive at first and, out of necessity, evolved in parallel with hardware development. After forty years, more descriptive, more accurate, and more efficient analytic tools are being developed. These include thermal-structural finite element and boundary element analyses, advanced constitutive stress-strain-temperature-time relations, and creep-fatigue-environmental models for crack initiation and propagation. The high-temperature durability methods that have evolved for calculating high-temperature fatigue crack initiation lives of structural engineering materials are addressed. Only a few of the methods were refined to the point of being directly usable in design.
Dynamical control of the emission of a square microlaser via symmetry classes
NASA Astrophysics Data System (ADS)
Bittner, S.; Loirette-Pelous, A.; Lafargue, C.; Gozhyk, I.; Ulysse, C.; Dietz, B.; Zyss, J.; Lebental, M.
2018-04-01
A major objective in photonics is to tailor the emission properties of microcavities, which is usually achieved with specific cavity shapes. Yet the dynamical change of the emission properties during operation would often be advantageous. The implementation of such a method is still a challenging issue. We present an effective procedure for the dynamical control of the emission lobes which relies on the selection of a specific coherent superposition of degenerate modes belonging to different symmetry classes. It is generally applicable to systems exhibiting pairs of degenerate modes. We explored it experimentally and analytically with organic square microlasers, which emit narrow lobes parallel to their sidewalls. By means of the pump polarization, emission lobes are switched on and off selectively with an extinction ratio better than 1/50.
Molecular beam mass spectrometer development
NASA Technical Reports Server (NTRS)
Brock, F. J.; Hueser, J. E.
1976-01-01
An analytical model, based on the kinetic theory of a drifting Maxwellian gas, is used to determine the nonequilibrium molecular density distribution within a hemispherical shell open aft with its axis parallel to its velocity. The concept of a molecular shield in terrestrial orbit above 200 km is also analyzed using the kinetic theory of a drifting Maxwellian gas. Data are presented for the components of the gas density within the shield due to the free-stream atmosphere, outgassing from the shield and enclosed experiments, and atmospheric gas scattered off a shield-orbiter system. A description is given of a FORTRAN program for computing the three-dimensional transition flow regime past the space shuttle orbiter, which employs the Monte Carlo simulation method to model the real flow with some thousands of simulated molecules.
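To make the kinetic-theory building block concrete, the sketch below evaluates the one-sided molecular flux from a drifting Maxwellian onto a surface, the quantity that underlies the shield density components; it is a generic textbook formula, not the paper's FORTRAN model, and the atomic-oxygen numbers are illustrative assumptions.

```python
# One-sided molecular flux onto a surface from a drifting Maxwellian gas.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def drifting_maxwellian_flux(n, T, m, U):
    """One-sided number flux [1/m^2/s] onto a surface facing a gas of
    density n [1/m^3], temperature T [K], molecular mass m [kg],
    drifting toward the surface at bulk speed U [m/s]."""
    v_mp = math.sqrt(2.0 * K_B * T / m)      # most probable speed
    s = U / v_mp                             # molecular speed ratio
    return n * v_mp / (2.0 * math.sqrt(math.pi)) * (
        math.exp(-s * s) + math.sqrt(math.pi) * s * (1.0 + math.erf(s)))

# Illustrative (assumed) values: atomic oxygen near 200 km at orbital speed.
m_O = 16 * 1.66054e-27
print(drifting_maxwellian_flux(n=1e15, T=1000.0, m=m_O, U=7800.0))  # ram flux
print(drifting_maxwellian_flux(n=1e15, T=1000.0, m=m_O, U=0.0))     # thermal flux
```

For U = 0 the expression reduces to the familiar thermal flux n*c_bar/4, which is a quick consistency check on the formula.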
Synthesis of full Poincaré beams by means of uniaxial crystals
NASA Astrophysics Data System (ADS)
Piquero, G.; Monroy, L.; Santarsiero, M.; Alonzo, M.; de Sande, J. C. G.
2018-06-01
A simple optical system is proposed to generate full-Poincaré beams (FPBs), i.e. beams presenting all possible states of (total) polarization across their transverse section. The method consists of focusing a uniformly polarized laser beam onto a uniaxial crystal having its optic axis parallel to the propagation axis of the impinging beam. A simple approximate model is used to obtain the analytical expression of the beam polarization at the output of the crystal. The output beam is then proved to be an FPB. By changing the polarization state of the input field, full-Poincaré beams are still obtained, but with different distributions of the polarization state across the beam section. Experimental results are reported, showing an excellent agreement with the theoretical predictions.
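For readers unfamiliar with FPBs, the sketch below constructs the standard mode-superposition picture of a full-Poincaré beam (a Gaussian in one circular polarization plus a first-order vortex mode in the other) and maps its Stokes parameters across the section; this is a generic illustration of what "all polarization states" means, not the uniaxial-crystal propagation model of the paper.

```python
# Stokes-parameter map of a generic full-Poincare beam (mode superposition).
import numpy as np

x = np.linspace(-5, 5, 201)
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2
EL = np.exp(-r2)                       # fundamental Gaussian, left circular
ER = (X + 1j * Y) * np.exp(-r2)        # vortex (LG01-like) mode, right circular

# Circular-basis amplitudes -> Cartesian Jones components.
Ex = (EL + ER) / np.sqrt(2)
Ey = 1j * (EL - ER) / np.sqrt(2)

S0 = np.abs(Ex)**2 + np.abs(Ey)**2
s1 = (np.abs(Ex)**2 - np.abs(Ey)**2) / S0
s2 = 2 * np.real(Ex * np.conj(Ey)) / S0
s3 = -2 * np.imag(Ex * np.conj(Ey)) / S0   # sign depends on handedness convention

# Full-Poincare check: normalized s3 sweeps from +1 (axis) toward -1 (edge).
print(s3.min(), s3.max())
```

On axis only the Gaussian survives (pure circular state, s3 = +1); far from the axis the vortex mode dominates (s3 approaching -1), and in between all linear and elliptical states appear, covering the Poincaré sphere.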
[Personalized urooncology based on molecular uropathology: what is the future?].
Dahl, E; Haller, F
2013-07-01
Targeted therapies and biomarker validation are key drivers in the advancement of personalized oncology, which is a growing topic in all clinical areas. Compared with other specialties, such as pulmonology and gynecology, this development in urology has so far lagged behind but has recently gained increasing momentum. A basis for this is the currently growing, and in the future accelerating, application of new knowledge from molecular biology in the field of uropathology. The rapid gain of knowledge is driven by a whole new class of analytical methods, such as massively parallel sequencing (deep sequencing or next-generation sequencing), which enables the analysis of virtually a whole new universe of potential biomarkers. This article describes the emerging paradigm shift in the molecular pathological diagnostics of urological tumors using the example of prostate cancer.
A Java-Enabled Interactive Graphical Gas Turbine Propulsion System Simulator
NASA Technical Reports Server (NTRS)
Reed, John A.; Afjeh, Abdollah A.
1997-01-01
This paper describes a gas turbine simulation system which utilizes the newly developed Java language environment. The system provides an interactive graphical environment which allows the quick and efficient construction and analysis of arbitrary gas turbine propulsion systems. The simulation system couples a graphical user interface, developed using the Java Abstract Window Toolkit, and a transient, space-averaged, aero-thermodynamic gas turbine analysis method, both entirely coded in the Java language. The combined package provides analytical, graphical and data management tools which allow the user to construct and control engine simulations by manipulating graphical objects on the computer display screen. Distributed simulations, including parallel processing and distributed database access across the Internet and World-Wide Web (WWW), are made possible through services provided by the Java environment.
Advanced study of video signal processing in low signal to noise environments
NASA Technical Reports Server (NTRS)
Carden, F.; Henry, R.
1972-01-01
A nonlinear analysis of a multifilter phase-lock loop (MPLL) using the method of harmonic balance is presented. The particular MPLL considered has a low-pass filter and a band-pass filter in parallel. An analytic expression for the relationship between the input signal phase deviation and the phase error is determined for sinusoidal FM in the absence of noise. The expression is used to determine bounds on the proper operating region for the MPLL and to investigate the jump phenomenon previously observed. From these results the proper modulation index, modulating frequency, etc., used for the design of an MPLL are determined. Data for the loop unlock boundary obtained from the theoretical expression are compared to data obtained from analog computer simulations of the MPLL.
A transient FETI methodology for large-scale parallel implicit computations in structural mechanics
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Crivelli, Luis; Roux, Francois-Xavier
1992-01-01
Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because explicit schemes are also easier to parallelize than implicit ones. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet -- and perhaps will never -- be offset by the speed of parallel hardware. Therefore, it is essential to develop efficient and robust alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating low-frequency dynamics. Here we present a domain decomposition method for implicit schemes that requires significantly less storage than factorization algorithms, that is several times faster than other popular direct and iterative methods, that can be easily implemented on both shared and local memory parallel processors, and that is both computationally and communication-wise efficient. The proposed transient domain decomposition method is an extension of the method of Finite Element Tearing and Interconnecting (FETI) developed by Farhat and Roux for the solution of static problems. Serial and parallel performance results on the CRAY Y-MP/8 and the iPSC-860/128 systems are reported and analyzed for realistic structural dynamics problems. These results establish the superiority of the FETI method over both the serial/parallel conjugate gradient algorithm with diagonal scaling and the serial/parallel direct method, and contrast the computational power of the iPSC-860/128 parallel processor with that of the CRAY Y-MP/8 system.
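As a concrete illustration of the tearing-and-interconnecting idea, the sketch below solves a 1D Poisson problem on two torn subdomains and recovers interface continuity through a Lagrange multiplier; it is a minimal static toy under assumed discretization choices, not the authors' transient FETI solver.

```python
# Minimal 1D FETI-style sketch for -u'' = 1 on [0, 1], u(0) = u(1) = 0:
# tear the mesh into two subdomains, enforce continuity with one Lagrange
# multiplier, and solve the (here scalar) dual interface problem.
import numpy as np

ne = 8                      # elements per subdomain (assumption)
h = 0.5 / ne                # each subdomain covers half of [0, 1]

def subdomain(dirichlet_left):
    """Stiffness K and load f for ne linear elements, outer end fixed
    (Dirichlet row eliminated), interface end left free."""
    n = ne + 1
    K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h
    K[0, 0] = K[-1, -1] = 1.0 / h
    f = h * np.ones(n); f[0] = f[-1] = h / 2.0
    keep = slice(1, n) if dirichlet_left else slice(0, n - 1)
    return K[keep, keep], f[keep]

K1, f1 = subdomain(dirichlet_left=True)    # left subdomain, free at interface
K2, f2 = subdomain(dirichlet_left=False)   # right subdomain, free at interface

B1 = np.zeros(len(f1)); B1[-1] = 1.0       # signed interface "jump" operators
B2 = np.zeros(len(f2)); B2[0] = -1.0

# Dual problem: (B1 K1^-1 B1' + B2 K2^-1 B2') lam = B1 K1^-1 f1 + B2 K2^-1 f2
u1s, u2s = np.linalg.solve(K1, f1), np.linalg.solve(K2, f2)
F = B1 @ np.linalg.solve(K1, B1) + B2 @ np.linalg.solve(K2, B2)
lam = (B1 @ u1s + B2 @ u2s) / F

u1 = np.linalg.solve(K1, f1 - B1 * lam)    # each solve is subdomain-local,
u2 = np.linalg.solve(K2, f2 - B2 * lam)    # hence naturally parallel
print(abs(u1[-1] - u2[0]))                 # interface continuity ~ 0
print(u1[-1], 0.5 * (1 - 0.5) / 2)         # matches exact u = x(1-x)/2
```

The point of the exercise is that all factorizations are subdomain-local; only the small interface problem couples the pieces, which is what makes the approach amenable to distributed memory.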
Centrifugal acceleration of ions in the polar magnetosphere
NASA Technical Reports Server (NTRS)
Swinney, Kenneth R.; Horwitz, James L.; Delcourt, D.
1987-01-01
The transport of ionospheric ions originating near the dayside cusp into the magnetotail is parametrically studied using a 3-D model of ion trajectories. It is shown that the centrifugal term in the guiding center parallel force equation dominates the parallel motion after about 4 Re geocentric distance. The dependence of the equatorial crossing distance on initial latitude, energy and convection electric field is presented for ions originating on the dayside ionosphere in the noon-midnight plane. It is also found that up to altitudes of about 5 Re, the motion is similar to that of a bead on a rotating rod, for which a simple analytical solution exists.
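The "bead on a rotating rod" analogy admits the closed-form solution below; the sketch evaluates it with illustrative numbers (the rotation rate and initial conditions are assumptions, not the paper's magnetospheric parameters).

```python
# Analytic solution of the bead-on-a-rotating-rod equation r'' = w^2 r,
# the simple model for centrifugal energization along the rotating field line.
import math

def bead_position(r0, v0, w, t):
    return r0 * math.cosh(w * t) + (v0 / w) * math.sinh(w * t)

def bead_velocity(r0, v0, w, t):
    return r0 * w * math.sinh(w * t) + v0 * math.cosh(w * t)

w = 7.3e-5              # rotation rate ~ Earth's spin rate, rad/s (assumption)
r0, v0 = 2.0e7, 1.0e3   # initial radius [m] and parallel speed [m/s] (assumed)
for t in (0.0, 3600.0, 7200.0):
    print(t, bead_position(r0, v0, w, t), bead_velocity(r0, v0, w, t))
```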
Dosimetric quality control of Eclipse treatment planning system using pelvic digital test object
NASA Astrophysics Data System (ADS)
Benhdech, Yassine; Beaumont, Stéphane; Guédon, Jeanpierre; Crespin, Sylvain
2011-03-01
Last year, we demonstrated the feasibility of a new method to perform dosimetric quality control of Treatment Planning Systems (TPS) in radiotherapy; this method is based on Monte Carlo simulations and uses anatomical Digital Test Objects (DTOs). The pelvic DTO was used in order to assess this new method on an ECLIPSE VARIAN Treatment Planning System. Large dose variations were observed, particularly in air- and bone-equivalent material. In the current work, we discuss the results of the previous paper and provide an explanation for the observed dose differences; the VARIAN Eclipse Anisotropic Analytical Algorithm (AAA) was investigated. Monte Carlo (MC) simulations were performed with the PENELOPE code, version 2003. To increase the efficiency of the MC simulations, we used our parallelized version based on the standard MPI (Message Passing Interface). The parallel code was run on a 32-processor SGI cluster. The study was carried out using the pelvic DTO and was performed for low- and high-energy photon beams (6 and 18 MV) on a VARIAN 2100CD linear accelerator. A square field (10 x 10 cm2) was used. Taking the MC data as reference, a χ-index analysis was carried out. For this study, the distance to agreement (DTA) was set to 7 mm while the dose difference was set to 5%, as recommended in TRS-430 and TG-53 (on the beam axis in 3-D inhomogeneities). When using Monte Carlo PENELOPE, the absorbed dose is computed to the medium, whereas the TPS computes dose to water. We used the method described by Siebers et al., based on Bragg-Gray cavity theory, to convert the MC-simulated dose to medium into dose to water. Results show a strong consistency between ECLIPSE and MC calculations on the beam axis.
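The dose conversion step can be illustrated with a short sketch: per Bragg-Gray cavity theory, dose to water is the medium dose scaled by the water-to-medium mass collision stopping-power ratio averaged over the local electron spectrum. The ratio values below are rough placeholders, not the published spectrum-averaged data.

```python
# Bragg-Gray style conversion D_w = D_m * s_(w,med) per voxel.
import numpy as np

# Placeholder spectrum-averaged stopping-power ratios water/medium (assumed).
S_RATIO = {"water": 1.00, "soft_tissue": 1.01, "lung": 1.00, "bone": 1.11, "air": 1.13}

def dose_to_water(dose_medium, material_map):
    """Convert a voxelized dose-to-medium array to dose-to-water."""
    ratios = np.vectorize(S_RATIO.__getitem__)(material_map)
    return dose_medium * ratios

dose_m = np.array([1.00, 0.95, 0.40, 0.70])              # Gy, scored to medium
mats = np.array(["water", "soft_tissue", "air", "bone"])
print(dose_to_water(dose_m, mats))
```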
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haugen, Carl C.; Forget, Benoit; Smith, Kord S.
Most high performance computing systems being deployed currently and envisioned for the future are based on making use of heavy parallelism across many computational nodes and many concurrent cores. These types of heavily parallel systems often have relatively little memory per core but large amounts of computing capability. This places a significant constraint on how data storage is handled in many Monte Carlo codes. This is made even more significant in fully coupled multiphysics simulations, which require that simulations of many physical phenomena be carried out concurrently on individual processing nodes, further reducing the amount of memory available for storage of Monte Carlo data. As such, there has been a move towards on-the-fly nuclear data generation to reduce memory requirements associated with interpolation between pre-generated large nuclear data tables for a selection of system temperatures. Methods have been previously developed and implemented in MIT's OpenMC Monte Carlo code for both the resolved resonance regime and the unresolved resonance regime, but are currently absent for the thermal energy regime. While there are many components involved in generating a thermal neutron scattering cross section on-the-fly, this work focuses on a proposed method for determining the energy and direction of a neutron after a thermal incoherent inelastic scattering event. This work proposes a rejection-sampling-based method using the thermal scattering kernel to determine the correct outgoing energy and angle. The goal of this project is to be able to treat the full S(α,β) kernel for graphite, to assist in high fidelity simulations of the TREAT reactor at Idaho National Laboratory. The method is, however, sufficiently general to be applicable to other thermal scattering materials, and can be initially validated against the continuous analytic free gas model.
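The accept/reject logic at the heart of such a sampler can be sketched generically, as below; the placeholder Maxwellian-shaped kernel stands in for the actual S(α,β)-derived density, which is not reproduced here.

```python
# Generic rejection sampling of an outgoing energy from a target density.
import math
import random

E_MAX = 0.5   # eV, truncation of the outgoing-energy range (assumption)

def kernel(E_out):
    """Placeholder target density, unnormalized (Maxwellian-like, kT = 0.0253 eV)."""
    kT = 0.0253
    return math.sqrt(E_out) * math.exp(-E_out / kT)

def sample_outgoing_energy(rng=random):
    """Rejection sampling with a flat proposal on [0, E_MAX] and envelope M."""
    M = kernel(0.0253 / 2.0) * 1.05  # envelope: this kernel peaks at E = kT/2
    while True:
        E = rng.uniform(0.0, E_MAX)           # propose from the flat density
        if rng.uniform(0.0, M) <= kernel(E):  # accept with prob kernel(E)/M
            return E

samples = [sample_outgoing_energy() for _ in range(10000)]
print(sum(samples) / len(samples))  # ~ (3/2) kT for the placeholder kernel
```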
Robandt, P V; Klette, K L; Sibum, M
2009-10-01
An automated solid-phase extraction coupled with liquid chromatography and tandem mass spectrometry (SPE-LC-MS-MS) method for the analysis of 11-nor-Δ9-tetrahydrocannabinol-9-carboxylic acid (THC-COOH) in human urine specimens was developed. The method was linear (R² = 0.9986) to 1000 ng/mL with no carryover evidenced at 2000 ng/mL. Limits of quantification and detection were found to be 2 ng/mL. Interrun precision was evaluated at the 15 ng/mL level over nine batches spanning 15 days (n = 45); the coefficient of variation (%CV) was found to be 5.5% over the course of the validation. Intrarun precision of a 15 ng/mL control (n = 5) ranged from 0.58% CV to 7.4% CV for the same set of analytical batches. Interference was tested using (±)-11-hydroxy-Δ9-tetrahydrocannabinol, cannabidiol, (-)-Δ8-tetrahydrocannabinol, and cannabinol. One hundred and nineteen specimens found to contain THC-COOH by a previously validated gas chromatography-mass spectrometry (GC-MS) procedure were compared to the SPE-LC-MS-MS method. Excellent agreement was found (R² = 0.9925) for the parallel comparison study. The automated SPE procedure eliminates the human factors of specimen handling, extraction, and derivatization, thereby reducing labor costs and rework resulting from human error or technique issues. Additionally, method runtime is greatly reduced (e.g., during parallel studies the SPE-LC-MS-MS instrument was often finished with analysis by the time the technician finished the offline SPE and derivatization procedure prior to the GC-MS analysis).
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Pereira, N F; Sitek, A
2011-01-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies leads to superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can outperform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
NASA Astrophysics Data System (ADS)
Pereira, N. F.; Sitek, A.
2010-09-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies leads to superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can outperform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
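For reference, the MLEM update used in both evaluations has the compact multiplicative form sketched below; the tiny random system matrix stands in for the parallel projector, so this illustrates the algorithm only, not the mesh-based reconstruction itself.

```python
# MLEM iteration: x <- x / (A^T 1) * A^T (y / (A x)), on a toy system matrix.
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((64, 32))              # system matrix: 64 bins, 32 "voxels"
x_true = rng.random(32) * 10.0
y = rng.poisson(A @ x_true)           # Poisson noise realization of the data

x = np.ones(32)                       # uniform initial estimate
sens = A.T @ np.ones(len(y))          # sensitivity image A^T 1
for _ in range(200):
    proj = A @ x                      # forward projection
    x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative error
```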
Mladic, Marija; de Waal, Tessa; Burggraaff, Lindsey; Slagboom, Julien; Somsen, Govert W; Niessen, Wilfried M A; Manjunatha Kini, R; Kool, Jeroen
2017-10-01
This study presents an analytical method for the screening of snake venoms for inhibitors of the angiotensin-converting enzyme (ACE) and a strategy for their rapid identification. The method is based on an at-line nanofractionation approach, which combines liquid chromatography (LC), mass spectrometry (MS), and pharmacology in one platform. After initial LC separation of a crude venom, a post-column flow split is introduced enabling parallel MS identification and high-resolution fractionation onto 384-well plates. The plates are subsequently freeze-dried and used in a fluorescence-based ACE activity assay to determine the ability of the nanofractions to inhibit ACE activity. Once the bioactive wells are identified, the parallel MS data reveals the masses corresponding to the activities found. Narrowing down of possible bioactive candidates is provided by comparison of bioactivity profiles after reversed-phase liquid chromatography (RPLC) and after hydrophilic interaction chromatography (HILIC) of a crude venom. Additional nanoLC-MS/MS analysis is performed on the content of the bioactive nanofractions to determine peptide sequences. The method described was optimized, evaluated, and successfully applied for screening of 30 snake venoms for the presence of ACE inhibitors. As a result, two new bioactive peptides were identified: pELWPRPHVPP in Crotalus viridis viridis venom with IC50 = 1.1 μM and pEWPPWPPRPPIPP in Cerastes cerastes cerastes venom with IC50 = 3.5 μM. The identified peptides possess a high sequence similarity to other bradykinin-potentiating peptides (BPPs), which are known ACE inhibitors found in snake venoms.
A Domain Decomposition Parallelization of the Fast Marching Method
NASA Technical Reports Server (NTRS)
Herrmann, M.
2003-01-01
In this paper, the first domain decomposition parallelization of the Fast Marching Method for level sets is presented. Parallel speedup has been demonstrated in both the optimal and non-optimal domain decomposition cases. The parallel performance of the proposed method is strongly dependent on separately load balancing the number of nodes on each side of the interface. A load imbalance of nodes on either side of the interface leads to an increase in communication and rollback operations. Furthermore, the amount of inter-domain communication can be reduced by aligning the inter-domain boundaries with the interface normal vectors. In the case of optimal load balancing and aligned inter-domain boundaries, the proposed parallel FMM algorithm is highly efficient, reaching efficiency factors of up to 0.98. Future work will focus on the extension of the proposed parallel algorithm to higher order accuracy. Also, to further enhance parallel performance, the coupling of the domain decomposition parallelization to the G0-based parallelization will be investigated.
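The serial algorithm being parallelized can be stated compactly, as in the sketch below: values are frozen in increasing order off a heap, which is exactly the causal ordering that naive subdomain decomposition breaks (hence the rollback operations mentioned above). The grid, unit speed function, and source are illustrative assumptions.

```python
# Serial Fast Marching Method for |grad T| = 1 on a uniform grid.
import heapq
import numpy as np

def fast_march(n, h, sources):
    T = np.full((n, n), np.inf)
    done = np.zeros((n, n), dtype=bool)
    heap = []
    for (i, j) in sources:
        T[i, j] = 0.0
        heapq.heappush(heap, (0.0, i, j))

    def update(i, j):
        a = min(T[i - 1, j] if i > 0 else np.inf, T[i + 1, j] if i < n - 1 else np.inf)
        b = min(T[i, j - 1] if j > 0 else np.inf, T[i, j + 1] if j < n - 1 else np.inf)
        if abs(a - b) < h:                        # both directions contribute
            return 0.5 * (a + b + np.sqrt(2.0 * h * h - (a - b) ** 2))
        return min(a, b) + h                      # one-sided update

    while heap:
        t, i, j = heapq.heappop(heap)
        if done[i, j]:
            continue
        done[i, j] = True                         # freeze: t is final
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < n and 0 <= jj < n and not done[ii, jj]:
                t_new = update(ii, jj)
                if t_new < T[ii, jj]:
                    T[ii, jj] = t_new
                    heapq.heappush(heap, (t_new, ii, jj))
    return T

T = fast_march(101, 0.01, [(50, 50)])
print(T[50, 90])   # ~0.4, the distance from the center source (up to grid error)
```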
Developmental changes in analytic and holistic processes in face perception
Joseph, Jane E.; DiBartolo, Michelle D.; Bhatt, Ramesh S.
2015-01-01
Although infants demonstrate sensitivity to some kinds of perceptual information in faces, many face capacities continue to develop throughout childhood. One debate concerns the degree to which children perceive faces analytically versus holistically and how these processes undergo developmental change. In the present study, school-aged children and adults performed a perceptual matching task with upright and inverted face and house pairs that varied in similarity of featural or 2nd-order configural information. Holistic processing was operationalized as the degree of serial processing when discriminating faces and houses [i.e., increased reaction time (RT) as more features or spacing relations were shared between stimuli]. Analytical processing was operationalized as the degree of parallel processing (no change in RT as a function of greater similarity of features or spatial relations). Adults showed the most evidence for holistic processing (most strongly for 2nd-order faces), and holistic processing was weaker for inverted faces and houses. Younger children (6–8 years), in contrast, showed analytical processing across all experimental manipulations. Older children (9–11 years) showed an intermediate pattern, with a trend toward holistic processing of 2nd-order faces like adults but parallel processing in other experimental conditions like younger children. These findings indicate that holistic face representations emerge around 10 years of age. In adults, both 2nd-order and featural information are incorporated into holistic representations, whereas older children only incorporate 2nd-order information. Holistic processing was not evident in younger children. Hence, the development of holistic face representations relies initially on 2nd-order processing and then incorporates featural information by adulthood. PMID:26300838
Hybrid massively parallel fast sweeping method for static Hamilton-Jacobi equations
NASA Astrophysics Data System (ADS)
Detrixhe, Miles; Gibou, Frédéric
2016-10-01
The fast sweeping method is a popular algorithm for solving a variety of static Hamilton-Jacobi equations. Fast sweeping algorithms for parallel computing have been developed, but are severely limited. In this work, we present a multilevel, hybrid parallel algorithm that combines the desirable traits of two distinct parallel methods. The fine and coarse grained components of the algorithm take advantage of heterogeneous computer architecture common in high performance computing facilities. We present the algorithm and demonstrate its effectiveness on a set of example problems including optimal control, dynamic games, and seismic wave propagation. We give results for convergence, parallel scaling, and show state-of-the-art speedup values for the fast sweeping method.
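For contrast with heap-based fast marching, the core of a (serial) fast sweeping iteration is sketched below: the same upwind update applied in four alternating Gauss-Seidel orderings. This shows only the core iteration, not the paper's multilevel hybrid parallel scheme.

```python
# Serial fast sweeping for |grad T| = 1: Gauss-Seidel sweeps in 4 orderings.
import numpy as np

def sweep(T, h):
    """One pass of the four sweep orderings (+x+y, +x-y, -x+y, -x-y)."""
    n = T.shape[0]
    orders = (range(n), range(n - 1, -1, -1))
    for si in orders:
        for sj in orders:
            for i in si:
                for j in sj:
                    a = min(T[i - 1, j] if i > 0 else np.inf,
                            T[i + 1, j] if i < n - 1 else np.inf)
                    b = min(T[i, j - 1] if j > 0 else np.inf,
                            T[i, j + 1] if j < n - 1 else np.inf)
                    lo, hi = min(a, b), max(a, b)
                    if lo == np.inf:
                        continue                  # no upwind information yet
                    if hi - lo >= h:
                        t = lo + h                # one-sided update
                    else:
                        t = 0.5 * (a + b + np.sqrt(2.0 * h * h - (a - b) ** 2))
                    if t < T[i, j]:
                        T[i, j] = t               # monotone in-place update

n, h = 101, 0.01
T = np.full((n, n), np.inf)
T[50, 50] = 0.0                                   # point source
for _ in range(3):                                # a few passes converge here
    sweep(T, h)
print(T[50, 90])                                  # ~0.4, matching fast marching
```

The design trade-off visible here is the one the paper exploits: sweeps have regular, vectorizable data access but revisit the whole grid, while fast marching touches each node once but serializes on the heap.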
A Generic analytical solution for modelling pumping tests in wells intersecting fractures
NASA Astrophysics Data System (ADS)
Dewandel, Benoît; Lanini, Sandra; Lachassagne, Patrick; Maréchal, Jean-Christophe
2018-04-01
The behaviour of transient flow due to pumping in fractured rocks has been studied for at least the past 80 years. Analytical solutions were proposed for solving the issue of a well intersecting and pumping from one vertical, horizontal or inclined fracture in homogeneous aquifers, but their domain of application, even if covering various fracture geometries, was restricted to isotropic or anisotropic aquifers whose potential boundaries had to be parallel or orthogonal to the fracture direction. The issue thus remains unsolved for many field cases: for example, a well intersecting and pumping a fracture in a multilayer or a dual-porosity aquifer, where intersected fractures are not necessarily parallel or orthogonal to aquifer boundaries; several fractures with various orientations intersecting the well; or pumping not only from the fractures but also from the aquifer through the screened interval of the well. Using a mathematical demonstration, we show that integrating the well-known Theis analytical solution (Theis, 1935) along the fracture axis is identical to the equally well-known analytical solution of Gringarten et al. (1974) for a uniform-flux fracture fully penetrating a homogeneous aquifer. This result implies that any existing line- or point-source solution can be used for implementing one or more discrete fractures that are intersected by the well. Several theoretical examples are presented and discussed: a single vertical fracture in a dual-porosity aquifer or in a multi-layer system (with a partially intersecting fracture); one and two inclined fractures in a leaky-aquifer system with pumping either only from the fracture(s), or also from the aquifer between fracture(s) in the screened interval of the well. For the cases with several pumping sources, analytical solutions for the flowrate contribution from each individual source (fractures and well) are presented, and the drawdown behaviour according to the length of the pumped screened interval of the well is discussed. Other advantages of this proposed generic analytical solution are also given. The application of this solution to field data should provide additional field information on fracture geometry, as well as identifying the connectivity between the pumped fractures and other aquifers.
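The central identity lends itself to a direct numerical check: superposing Theis point sources along the fracture trace gives a uniform-flux fracture drawdown, as in the sketch below (aquifer parameters are illustrative assumptions, and the well function W(u) is evaluated as the exponential integral).

```python
# Uniform-flux fracture drawdown by line integration of Theis point sources.
import numpy as np
from scipy.special import exp1   # W(u) = exp1(u), the Theis well function

T_aq = 1.0e-3    # transmissivity [m^2/s] (assumed)
S_aq = 1.0e-4    # storativity [-] (assumed)
Q = 1.0e-2       # pumping rate [m^3/s], spread uniformly over the fracture
L = 50.0         # fracture length, centered at the origin along x (assumed)

def drawdown(x0, y0, t, nseg=400):
    """Drawdown [m] at (x0, y0): midpoint-rule integral of Theis sources."""
    edges = np.linspace(-L / 2.0, L / 2.0, nseg + 1)
    xs = 0.5 * (edges[:-1] + edges[1:])            # segment midpoints
    u = ((x0 - xs) ** 2 + y0 ** 2) * S_aq / (4.0 * T_aq * t)
    return np.sum(exp1(u)) * (L / nseg) * (Q / L) / (4.0 * np.pi * T_aq)

for t in (1e3, 1e4, 1e5):                          # seconds
    print(t, drawdown(5.0, 1.0, t))                # observation point near fracture
```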
Big data analytics workflow management for eScience
NASA Astrophysics Data System (ADS)
Fiore, Sandro; D'Anca, Alessandro; Palazzo, Cosimo; Elia, Donatello; Mariello, Andrea; Nassisi, Paola; Aloisio, Giovanni
2015-04-01
In many domains such as climate and astrophysics, scientific data is often n-dimensional and requires tools that support specialized data types and primitives if it is to be properly stored, accessed, analysed and visualized. Currently, scientific data analytics relies on domain-specific software and libraries providing a huge set of operators and functionalities. However, most of these software tools fail at large scale since they: (i) are desktop based, rely on local computing capabilities and need the data locally; (ii) cannot benefit from available multicore/parallel machines since they are based on sequential codes; (iii) do not provide declarative languages to express scientific data analysis tasks, and (iv) do not provide newer or more scalable storage models to better support the data multidimensionality. Additionally, most of them: (v) are domain-specific, which also means they support a limited set of data formats, and (vi) do not provide workflow support, to enable the construction, execution and monitoring of more complex "experiments". The Ophidia project aims at facing most of the challenges highlighted above by providing a big data analytics framework for eScience. Ophidia provides several parallel operators to manipulate large datasets. Some relevant examples include: (i) data sub-setting (slicing and dicing), (ii) data aggregation, (iii) array-based primitives (the same operator applies to all the implemented UDF extensions), (iv) data cube duplication, (v) data cube pivoting, (vi) NetCDF import and export. Metadata operators are available too. Additionally, the Ophidia framework provides array-based primitives to perform data sub-setting, data aggregation (i.e. max, min, avg), array concatenation, algebraic expressions and predicate evaluation on large arrays of scientific data. Bit-oriented plugins have also been implemented to manage binary data cubes. Defining processing chains and workflows with tens or hundreds of data analytics operators is the real challenge in many practical scientific use cases. This talk will specifically address the main needs, requirements and challenges regarding data analytics workflow management applied to large scientific datasets. Three real use cases concerning analytics workflows for sea situational awareness, fire danger prevention, and climate change and biodiversity will be discussed in detail.
NASA Astrophysics Data System (ADS)
Hamann, H.; Jimenez Marianno, F.; Klein, L.; Albrecht, C.; Freitag, M.; Hinds, N.; Lu, S.
2015-12-01
Physical Analytics Information Repository and Services (PAIRS) is a big data geospatial analytics platform. A major challenge in leveraging big geospatial data sets is the ability to quickly integrate multiple data sources into physical and statistical models and run these models in real time. PAIRS is developed on top of an open source hardware and software stack to manage terabytes of data. A new data interpolation and re-gridding scheme is implemented in which any geospatial data layer can be associated with a set of global grids whose resolution doubles for consecutive levels. Each pixel on the PAIRS grid has an index that combines location and time stamp. The indexing allows quick access to data sets that are part of the global data layers, retrieving only the data of interest. PAIRS takes advantage of a parallel processing framework (Hadoop) in a cloud environment to digest, curate, and analyze the data sets while being very robust and stable. The data are stored in a distributed NoSQL database (HBase) across multiple servers; data upload and retrieval are parallelized by breaking the original analytics task into smaller areas/volumes, analyzing them independently, and then reassembling them for the original geographical area. The differentiating aspect of PAIRS is the ability to accelerate model development across large geographical regions and spatial resolutions ranging from 0.1 m up to hundreds of kilometers. System performance is benchmarked on real-time automated data ingestion and retrieval of MODIS and Landsat data layers. The data layers are curated for sensor error, verified for correctness, and analyzed statistically to detect local anomalies. Multi-layer queries enable PAIRS to filter different data layers based on specific conditions (e.g., analyzing the flooding risk of a property based on topography, the soil's ability to hold water, and forecasted precipitation) or to retrieve information about locations that share similar weather and vegetation patterns during extreme weather events such as heat waves.
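A hypothetical sketch of such a combined space-time key is given below; the function names, bit layout, and grid convention are assumptions for illustration, not PAIRS internals.

```python
# Hypothetical space-time key on a nested grid whose resolution doubles per
# level: interleave the x/y cell coordinates into a Morton (z-order) code and
# append the timestamp, so range scans cluster nearby pixels and times.
def interleave(a, b, bits):
    """Morton interleave of two integers with the given bit width."""
    code = 0
    for i in range(bits):
        code |= ((a >> i) & 1) << (2 * i) | ((b >> i) & 1) << (2 * i + 1)
    return code

def pairs_like_key(lon, lat, level, timestamp):
    """Cell key at a level where each level doubles the grid resolution."""
    n = 1 << level                            # n x n cells at this level
    ix = min(int((lon + 180.0) / 360.0 * n), n - 1)
    iy = min(int((lat + 90.0) / 180.0 * n), n - 1)
    return (interleave(ix, iy, level) << 32) | timestamp  # 32-bit epoch time

print(hex(pairs_like_key(-73.96, 41.21, level=16, timestamp=1_438_387_200)))
```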
Bernevic, Bogdan; El-Khatib, Ahmed H; Jakubowski, Norbert; Weller, Michael G
2018-04-02
The human copper-protein ceruloplasmin (Cp) is the major copper-containing protein in the human body. The accurate determination of Cp is mandatory for the reliable diagnosis of several diseases. However, the analysis of Cp has proven to be difficult. The aim of our work was a proof of concept for the determination of a metalloprotein based on online immunocapture ICP-MS. The immuno-affinity step is responsible for the enrichment and isolation of the analyte from serum, whereas the compound-independent quantitation with ICP-MS delivers the sensitivity, precision, and large dynamic range. Off-line ELISA (enzyme-linked immunosorbent assay) was used in parallel to confirm the elution profile of the analyte with a structure-selective method. The total protein elution was observed with the 32S mass trace. The ICP-MS signals were normalized to a 59Co signal. The human copper-protein Cp could be selectively determined. This was shown with pure Cp and with a sample of human serum. The good correlation with off-line ELISA shows that Cp could be captured and eluted selectively from the anti-Cp affinity column and subsequently determined by the copper signal of ICP-MS.
Linear ground-water flow, flood-wave response program for programmable calculators
Kernodle, John Michael
1978-01-01
Two programs are documented which solve a discretized analytical equation derived to determine head changes at a point in a one-dimensional ground-water flow system. The programs, written for programmable calculators, are in widely divergent but commonly encountered languages and serve to illustrate the adaptability of the linear model to use in situations where access to true computers is not possible or economical. The analytical method assumes a semi-infinite aquifer which is uniform in thickness and hydrologic characteristics, bounded on one side by an impermeable barrier and on the other, parallel side by a fully penetrating stream in complete hydraulic connection with the aquifer. Ground-water heads may be calculated for points along a line which is perpendicular to the impermeable barrier and the fully penetrating stream. Head changes at the observation point are dependent on (1) the distance between that point and the impermeable barrier, (2) the distance between the line of stress (the stream) and the impermeable barrier, (3) aquifer diffusivity, (4) time, and (5) head changes along the line of stress. The primary application of the programs is to determine aquifer diffusivity by the flood-wave response technique. (Woodard-USGS)
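One common closed form for this geometry is the image-well erfc series sketched below, written for a unit step rise in stream stage; it is an assumed textbook form given for orientation, not the documented calculator programs themselves.

```python
# Image-well erfc series for a stream (constant head) at x = 0 and an
# impermeable barrier at x = a, diffusivity D = T/S. Head response at x
# to a unit step rise in stream stage at t = 0 (assumed standard form).
import math

def head_change(x, a, D, t, nterms=50):
    """Head response [fraction of the step] at 0 <= x <= a, time t."""
    if t <= 0.0:
        return 0.0
    s = 0.0
    denom = 2.0 * math.sqrt(D * t)
    for n in range(nterms):
        term = (math.erfc((2 * n * a + x) / denom)
                + math.erfc((2 * (n + 1) * a - x) / denom))
        s += (-1) ** n * term                 # images alternate in sign
    return s

a, D = 100.0, 5.0e3          # barrier distance [m], diffusivity [m^2/day] (assumed)
for t in (0.01, 0.1, 1.0, 10.0):             # days
    print(t, head_change(50.0, a, D, t))     # approaches 1.0 as t grows
```

Fitting observed head responses with this kernel while varying D is the essence of the flood-wave response technique mentioned above.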
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Blinzler, Brina J.; Binienda, Wieslaw K.
2010-01-01
A macro level finite element-based model has been developed to simulate the mechanical and impact response of triaxially-braided polymer matrix composites. In the analytical model, the triaxial braid architecture is simulated by using four parallel shell elements, each of which is modeled as a laminated composite. For the current analytical approach, each shell element is considered to be a smeared homogeneous material. The commercial transient dynamic finite element code LS-DYNA is used to conduct the simulations, and a continuum damage mechanics model internal to LS-DYNA is used as the material constitutive model. The constitutive model requires stiffness and strength properties of an equivalent unidirectional composite. Simplified micromechanics methods are used to determine the equivalent stiffness properties, and results from coupon-level tests on the braided composite are utilized to back out the required strength properties. Simulations of quasi-static coupon tests of several representative braided composites are conducted to demonstrate the correlation of the model. Impact simulations of a representative braided composite are conducted to demonstrate the capability of the model to predict the penetration velocity and damage patterns obtained experimentally.
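The simplified micromechanics step can be illustrated with the standard rule-of-mixtures estimates below; the constituent values are generic carbon/epoxy assumptions, not the braided-composite data used in the paper.

```python
# Standard micromechanics estimates for an equivalent unidirectional lamina.
def unidirectional_stiffness(Ef, Em, nuf, num, Vf):
    """Rule of mixtures for E1 and nu12; inverse rule of mixtures for E2."""
    Vm = 1.0 - Vf
    E1 = Ef * Vf + Em * Vm               # longitudinal modulus (Voigt bound)
    E2 = 1.0 / (Vf / Ef + Vm / Em)       # transverse modulus (Reuss bound)
    nu12 = nuf * Vf + num * Vm           # major Poisson's ratio
    return E1, E2, nu12

# Illustrative carbon/epoxy constituents (moduli in GPa); assumed values.
print(unidirectional_stiffness(Ef=230.0, Em=3.5, nuf=0.2, num=0.35, Vf=0.56))
```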
Tiered Approach to Resilience Assessment.
Linkov, Igor; Fox-Lent, Cate; Read, Laura; Allen, Craig R; Arnott, James C; Bellini, Emanuele; Coaffee, Jon; Florin, Marie-Valentine; Hatfield, Kirk; Hyde, Iain; Hynes, William; Jovanovic, Aleksandar; Kasperson, Roger; Katzenberger, John; Keys, Patrick W; Lambert, James H; Moss, Richard; Murdoch, Peter S; Palma-Oliveira, Jose; Pulwarty, Roger S; Sands, Dale; Thomas, Edward A; Tye, Mari R; Woods, David
2018-04-25
Regulatory agencies have long adopted a three-tier framework for risk assessment. We build on this structure to propose a tiered approach for resilience assessment that can be integrated into the existing regulatory processes. Comprehensive approaches to assessing resilience at appropriate and operational scales, reconciling analytical complexity as needed with stakeholder needs and resources available, and ultimately creating actionable recommendations to enhance resilience are still lacking. Our proposed framework consists of tiers by which analysts can select resilience assessment and decision support tools to inform associated management actions relative to the scope and urgency of the risk and the capacity of resource managers to improve system resilience. The resilience management framework proposed is not intended to supplant either risk management or the many existing efforts of resilience quantification method development, but instead provide a guide to selecting tools that are appropriate for the given analytic need. The goal of this tiered approach is to intentionally parallel the tiered approach used in regulatory contexts so that resilience assessment might be more easily and quickly integrated into existing structures and with existing policies.
Kinematic validation of a quasi-geostrophic model for the fast dynamics in the Earth's outer core
NASA Astrophysics Data System (ADS)
Maffei, S.; Jackson, A.
2017-09-01
We derive a quasi-geostrophic (QG) system of equations suitable for the description of the Earth's core dynamics on interannual to decadal timescales. Over these timescales, rotation is assumed to be the dominant force and fluid motions are strongly invariant along the direction parallel to the rotation axis. The diffusion-free QG system derived here is similar to the one derived in Canet et al., but the projection of the governing equations on the equatorial disc is handled via vertical integration, and mass conservation is applied to the velocity field. Here we carefully analyse the properties of the resulting equations and validate them by neglecting the action of the Lorentz force in the momentum equation. We derive a novel analytical solution describing the evolution of the magnetic field under these assumptions in the presence of a purely azimuthal flow, and an alternative formulation that allows us to numerically solve the evolution equations with a finite element method. The excellent agreement we found with the analytical solution proves that numerical integration of the QG system is possible and that it preserves important physical properties of the magnetic field. Implementation of magnetic diffusion is also briefly considered.
Naldi, Marina; Baldassarre, Maurizio; Domenicali, Marco; Giannone, Ferdinando Antonino; Bossi, Matteo; Montomoli, Jonathan; Sandahl, Thomas Damgaard; Glavind, Emilie; Vilstrup, Hendrik; Caraceni, Paolo; Bertucci, Carlo
2016-04-15
Human serum albumin (HSA) is the most abundant plasma protein, endowed with several biological properties unrelated to its oncotic power, such as antioxidant and free-radicals scavenging activities, binding and transport of many endogenous and exogenous substances, and regulation of endothelial function and inflammatory response. These non-oncotic activities are closely connected to the peculiarly dynamic structure of the albumin molecule. HSA undergoes spontaneous structural modifications, mainly by reaction with oxidants and saccharides; however, patients with cirrhosis show extensive post-transcriptional changes at several molecular sites of HSA, the degree of which parallels the severity of the disease. The present work reports the development and application of an innovative LC-MS analytical method for a rapid and reproducible determination of the relative abundance of HSA isoforms in plasma samples from alcoholic hepatitis (AH) patients. A condition of severe oxidative stress, similar to that observed in AH patients, is associated with profound changes in circulating HSA microheterogeneity. More interestingly, the high resolution provided by the analytical platform allowed the monitoring of novel oxidative products of HSA never reported before.
NASA Astrophysics Data System (ADS)
Xia, Yidong
The objective of this work is to develop a parallel, implicit reconstructed discontinuous Galerkin (RDG) method using Taylor basis for the solution of the compressible Navier-Stokes equations on 3D hybrid grids. This third-order accurate RDG method is based on a hierarchical weighted essentially non-oscillatory reconstruction scheme, termed HWENO(P1P2) to indicate that a quadratic polynomial solution is obtained from the underlying linear polynomial DG solution via a hierarchical WENO reconstruction. The HWENO(P1P2) is designed not only to enhance the accuracy of the underlying DG(P1) method but also to ensure the non-linear stability of the RDG method. In this reconstruction scheme, a quadratic polynomial (P2) solution is first reconstructed using a least-squares approach from the underlying linear (P1) discontinuous Galerkin solution. The final quadratic solution is then obtained using a Hermite WENO reconstruction, which is necessary to ensure the linear stability of the RDG method on 3D unstructured grids. The first derivatives of the quadratic polynomial solution are then reconstructed using a WENO reconstruction in order to eliminate spurious oscillations in the vicinity of strong discontinuities, thus ensuring the non-linear stability of the RDG method. The parallelization in the RDG method is based on a message passing interface (MPI) programming paradigm, where the METIS library is used for the partitioning of a mesh into subdomain meshes of approximately the same size. Both multi-stage explicit Runge-Kutta and simple implicit backward Euler methods are implemented for time advancement in the RDG method. In the implicit method, three approaches: analytical differentiation, divided differencing (DD), and automatic differentiation (AD), are developed and implemented to obtain the resulting flux Jacobian matrices. Automatic differentiation is a set of techniques based on the mechanical application of the chain rule to obtain derivatives of a function given as a computer program. By using an AD tool, the manpower can be significantly reduced for deriving the flux Jacobians, which can be quite complicated, tedious, and error-prone if done by hand or with symbolic arithmetic software, depending on the complexity of the numerical flux scheme. In addition, the workload for code maintenance can also be largely reduced in case the underlying flux scheme is updated. The approximate system of linear equations arising from the Newton linearization is solved by the generalized minimal residual (GMRES) algorithm with lower-upper symmetric Gauss-Seidel (LU-SGS) preconditioning. This GMRES+LU-SGS linear solver is the most robust and efficient for implicit time integration of the discretized Navier-Stokes equations when the AD-based flux Jacobians are used, rather than those from the other two approaches. The developed HWENO(P1P2) method is used to compute a variety of well-documented compressible inviscid and viscous flow test cases on 3D hybrid grids, including standard benchmark cases such as the Sod shock tube, flow past a circular cylinder, and laminar flow past a flat plate. The computed solutions are compared with either analytical solutions or experimental data, where available, to assess the accuracy of the HWENO(P1P2) method. Numerical results demonstrate that the HWENO(P1P2) method is able not only to enhance the accuracy of the underlying DG(P1) method, but also to ensure the linear and non-linear stability in the presence of strong discontinuities.
An extensive study of grid convergence on various types of elements: tetrahedra, prisms, hexahedra, and hybrid prism/hexahedron meshes, for a number of test cases indicates that the developed HWENO(P1P2) method is able to achieve the designed third-order accuracy of spatial convergence for smooth inviscid flows: one order higher than the underlying second-order DG(P1) method, without significant increase in computing costs and storage requirements. The performance of the developed GMRES+LU-SGS implicit method is compared with the multi-stage Runge-Kutta time stepping scheme for a number of test cases in terms of the timestep and CPU time. Numerical results indicate that the overall performance of the implicit method with AD-based Jacobians is an order of magnitude better than its explicit counterpart. Finally, a set of parallel scaling tests for both explicit and implicit methods is conducted on North Carolina State University's ARC cluster, demonstrating almost ideal scalability of the RDG method. (Abstract shortened by UMI.)
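The mechanical chain-rule idea behind AD can be illustrated with forward-mode dual numbers, as in the sketch below, applied to the 1D Euler flux; the thesis itself uses an AD tool on the full numerical flux scheme, so this is an illustration of the technique only.

```python
# Forward-mode AD with dual numbers: Jacobian of the 1D Euler flux
# f(U) = (rho*u, rho*u^2 + p, u*(E + p)), p = (gamma - 1)(E - rho*u^2/2).
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    @staticmethod
    def _lift(o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = Dual._lift(o); return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __sub__(self, o):
        o = Dual._lift(o); return Dual(self.val - o.val, self.der - o.der)
    def __rsub__(self, o):
        return Dual._lift(o).__sub__(self)
    def __mul__(self, o):
        o = Dual._lift(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__
    def __truediv__(self, o):
        o = Dual._lift(o)
        return Dual(self.val / o.val,
                    (self.der * o.val - self.val * o.der) / (o.val * o.val))

GAMMA = 1.4

def euler_flux(rho, mom, E):
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return [mom, mom * u + p, u * (E + p)]

def flux_jacobian(U):
    """3x3 Jacobian df/dU by seeding one dual derivative per input."""
    cols = []
    for k in range(3):
        seeded = [Dual(U[i], 1.0 if i == k else 0.0) for i in range(3)]
        cols.append([f.der for f in euler_flux(*seeded)])
    return [list(row) for row in zip(*cols)]   # transpose: rows = flux components

for row in flux_jacobian([1.0, 0.5, 2.5]):
    print(row)
```

The first row comes out as [0, 1, 0], the exact analytic result, which is the usual smoke test for a forward-mode implementation.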
Blob dynamics in TORPEX poloidal null configurations
NASA Astrophysics Data System (ADS)
Shanahan, B. W.; Dudson, B. D.
2016-12-01
3D blob dynamics are simulated in X-point magnetic configurations in the TORPEX device via a non-field-aligned coordinate system, using an isothermal model which evolves density, vorticity, parallel velocity and parallel current density. By modifying the parallel gradient operator to include perpendicular perturbations from poloidal field coils, numerical singularities associated with field aligned coordinates are avoided. A comparison with a previously developed analytical model (Avino 2016 Phys. Rev. Lett. 116 105001) is performed and an agreement is found with minimal modification. Experimental comparison determines that the null region can cause an acceleration of filaments due to increasing connection length, but this acceleration is small relative to other effects, which we quantify. Experimental measurements (Avino 2016 Phys. Rev. Lett. 116 105001) are reproduced, and the dominant acceleration mechanism is identified as that of a developing dipole in a moving background. Contributions from increasing connection length close to the null point are a small correction.
Inflated speedups in parallel simulations via malloc()
NASA Technical Reports Server (NTRS)
Nicol, David M.
1990-01-01
Discrete-event simulation programs make heavy use of dynamic memory allocation in order to support simulation's very dynamic space requirements. When programming in C one is likely to use the malloc() routine. However, a parallel simulation which uses the standard Unix System V malloc() implementation may achieve an overly optimistic speedup, possibly superlinear. An alternate implementation provided on some (but not all) systems can avoid the speedup anomaly, but at the price of significantly reduced available free space. This is especially severe on most parallel architectures, which tend not to support virtual memory. It is shown how a simply implemented, user-constructed interface to malloc() can both avoid artificially inflated speedups and make efficient use of the dynamic memory space. The interface simply caches blocks on the basis of their size. The problem is demonstrated empirically, and the effectiveness of the solution is shown both empirically and analytically.
Numerical Test of Analytical Theories for Perpendicular Diffusion in Small Kubo Number Turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heusen, M.; Shalchi, A., E-mail: husseinm@myumanitoba.ca, E-mail: andreasm4@yahoo.com
In the literature, one can find various analytical theories for the perpendicular diffusion of energetic particles interacting with magnetic turbulence. Besides quasi-linear theory, there are different versions of the nonlinear guiding center (NLGC) theory and the unified nonlinear transport (UNLT) theory. For turbulence with high Kubo numbers, such as two-dimensional turbulence or noisy reduced magnetohydrodynamic turbulence, the aforementioned nonlinear theories provide similar results. For slab and small Kubo number turbulence, however, this is not the case. In the current paper, we compare different linear and nonlinear theories with each other and with test-particle simulations for a noisy slab model corresponding to small Kubo number turbulence. We show that UNLT theory agrees very well with all performed test-particle simulations. In the limit of long parallel mean free paths, the perpendicular mean free path approaches asymptotically the quasi-linear limit as predicted by the UNLT theory. For short parallel mean free paths we find a Rechester and Rosenbluth type of scaling, as predicted by UNLT theory as well. The original NLGC theory disagrees with all performed simulations regardless of the parallel mean free path. The random ballistic interpretation of the NLGC theory agrees much better with the simulations, but compared to UNLT theory the agreement is inferior. We conclude that for this type of small Kubo number turbulence, only the latter theory allows for an accurate description of perpendicular diffusion.
NASA Technical Reports Server (NTRS)
Luke, Edward Allen
1993-01-01
Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Z J
2012-12-06
The overriding objective for this project is to develop an efficient and accurate method for capturing strong discontinuities and fine smooth flow structures of disparate length scales with unstructured grids, and demonstrate its potential for problems relevant to DOE. More specifically, we plan to achieve the following objectives: 1. Extend the SV method to three dimensions, and develop a fourth-order accurate SV scheme for tetrahedral grids. Optimize the SV partition by minimizing a form of the Lebesgue constant. Verify the order of accuracy using the scalar conservation laws with an analytical solution; 2. Extend the SV method to the Navier-Stokes equations for the simulation of viscous flow problems. Two promising approaches to compute the viscous fluxes will be tested and analyzed; 3. Parallelize the 3D viscous SV flow solver using domain decomposition and message passing. Optimize the cache performance of the flow solver by designing data structures minimizing data access times; 4. Demonstrate the SV method on a wide range of flow problems including both discontinuities and complex smooth structures. The objectives remain the same as those outlined in the original proposal. We anticipate no technical obstacles in meeting these objectives.
Muik, Barbara; Edelmann, Andrea; Lendl, Bernhard; Ayora-Cañada, María José
2002-09-01
An automated method for measuring the primary amino acid concentration in wine fermentations by sequential injection analysis with spectrophotometric detection was developed. Isoindole derivatives of the primary amino acids were formed by reaction with o-phthaldialdehyde and N-acetyl-L-cysteine and measured at 334 nm with respect to a baseline point at 700 nm to compensate for the observed Schlieren effect. As the reaction kinetics were strongly matrix-dependent, the analytical readout was evaluated at the final reaction equilibrium. Therefore, four parallel reaction coils were included in the flow system so that four samples could be processed simultaneously. Using isoleucine as the representative primary amino acid in wine fermentations, a linear calibration curve from 2 to 10 mM isoleucine, corresponding to 28 to 140 mg nitrogen/L (N/L), was obtained. The coefficient of variation of the method was 1.5% at a throughput of 12 samples per hour. The developed method was successfully used to monitor two wine fermentations during alcoholic fermentation. The results were in agreement with an external reference method based on high performance liquid chromatography. A mean t-test showed no significant differences between the two methods at a confidence level of 95%.
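The reported calibration and precision figures follow from standard computations, sketched below with illustrative numbers (the absorbance readings are assumptions, not the paper's data).

```python
# Linear calibration and coefficient of variation for a photometric method.
import numpy as np

conc = np.array([2.0, 4.0, 6.0, 8.0, 10.0])           # mM isoleucine standards
absb = np.array([0.101, 0.198, 0.304, 0.397, 0.503])  # A(334 nm), assumed

slope, intercept = np.polyfit(conc, absb, 1)          # least-squares line
print("slope, intercept:", slope, intercept)
print("back-calculated standards (mM):", (absb - intercept) / slope)

replicates = np.array([6.02, 5.95, 6.10, 5.98, 6.07]) # repeated sample, mM
cv = replicates.std(ddof=1) / replicates.mean() * 100.0
print(f"CV = {cv:.1f}%")                              # paper reports ~1.5%
```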
Parallel tempering for the traveling salesman problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Percus, Allon; Wang, Richard; Hyman, Jeffrey
We explore the potential of parallel tempering as a combinatorial optimization method, applying it to the traveling salesman problem. We compare simulation results of parallel tempering with a benchmark implementation of simulated annealing, and study how different choices of parameters affect the relative performance of the two methods. We find that a straightforward implementation of parallel tempering can outperform simulated annealing in several crucial respects. When parameters are chosen appropriately, both methods yield close approximation to the actual minimum distance for an instance with 200 nodes. However, parallel tempering yields more consistently accurate results when a series of independent simulations are performed. Our results suggest that parallel tempering might offer a simple but powerful alternative to simulated annealing for combinatorial optimization problems.
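A minimal implementation of the approach is sketched below on a random TSP instance with 2-opt moves: each replica runs Metropolis at its own temperature, and neighboring replicas periodically attempt swaps under the detailed-balance rule. The temperature ladder and move counts are illustrative choices, not the paper's parameters.

```python
# Parallel tempering for a random TSP instance with 2-opt moves.
import math
import random

random.seed(1)
N = 60
cities = [(random.random(), random.random()) for _ in range(N)]

def tour_length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % N]])
               for i in range(N))

def two_opt(tour, rng):
    i, j = sorted(rng.sample(range(N), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # reverse a segment

temps = [0.01 * 1.6 ** k for k in range(8)]        # geometric temperature ladder
replicas = [list(range(N)) for _ in temps]
energies = [tour_length(t) for t in replicas]
rng = random.Random(2)

for sweep in range(2000):
    for k, T in enumerate(temps):                  # Metropolis step per replica
        cand = two_opt(replicas[k], rng)
        e_new = tour_length(cand)
        if e_new <= energies[k] or rng.random() < math.exp((energies[k] - e_new) / T):
            replicas[k], energies[k] = cand, e_new
    if sweep % 10 == 0:                            # attempt neighbor swaps
        for k in range(len(temps) - 1):
            log_ratio = (1.0 / temps[k] - 1.0 / temps[k + 1]) * (energies[k] - energies[k + 1])
            if log_ratio >= 0.0 or rng.random() < math.exp(log_ratio):
                replicas[k], replicas[k + 1] = replicas[k + 1], replicas[k]
                energies[k], energies[k + 1] = energies[k + 1], energies[k]

print(min(energies))   # best tour length found across the ladder
```

The swap rule is what distinguishes the method from independent annealing runs: high-temperature replicas explore broadly, and good configurations migrate down the ladder instead of being lost.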
True, Lawrence D
2014-03-01
Paralleling the growth of ever more cost-efficient methods to sequence the whole genome in minute fragments of tissue has been the identification of increasingly numerous molecular abnormalities in cancers--mutations, amplifications, insertions and deletions of genes, and patterns of differential gene expression, i.e., overexpression of growth factors and underexpression of tumor suppressor genes. These abnormalities can be translated into assays to be used in clinical decision making. In general terms, the result of such an assay is subject to a large number of variables regarding the characteristics of the available sample, particularities of the assay used, and the interpretation of the results. This review discusses the effects of these variables on assays of tissue-based biomarkers, classified by macromolecule--DNA, RNA (including microRNA, messenger RNA, and long noncoding RNA), protein, and phosphoprotein. Since the majority of clinically applicable biomarkers are immunohistochemically detectable proteins, this review focuses on protein biomarkers. However, the principles outlined are mostly applicable to any other analyte. A variety of preanalytical variables impacts the results obtained, including analyte stability (which is different for different analytes, i.e., DNA, RNA, or protein), period of warm and of cold ischemia, fixation time, tissue processing, sample storage time, and storage conditions. In addition, assay variables play an important role, including reagent specificity (notably but not uniquely an issue concerning antibodies used in immunohistochemistry), technical components of the assay, quantitation, and assay interpretation. Finally, appropriateness of an assay for clinical application is an important issue. Reference is made to publicly available guidelines to improve on biomarker development in general and requirements for clinical use in particular. Strategic goals are formulated in order to improve the quality of biomarker reporting, including issues of analyte quality, experimental detail, assay efficiency and precision, and assay appropriateness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi
2015-08-24
This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets, stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry, which makes it a good alternative for evaluating prospective designs of TFM compared to finite element solvers that are numerically intensive and require more computation time. A single-phase, 1-kW, 400-rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF, and torque are verified with finite element analysis. The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.
Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi
2015-09-02
This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets (PM), stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry, which makes it a good alternative for evaluating prospective designs of TFM as compared to finite element solvers, which are numerically intensive and require more computation time. A single-phase, 1 kW, 400 rpm machine is analytically modeled and its resulting flux distribution, no-load EMF and torque verified with Finite Element Analysis (FEA). The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.
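The series-parallel flux-tube idea reduces to nodal analysis of a permeance network, as in the sketch below; the two-node topology and all numbers are assumptions for illustration, not the paper's TFM network, and the saturation update loop is omitted.

```python
# Magnetic equivalent circuit as a permeance network (resistor-network analog).
import numpy as np

MU0 = 4e-7 * np.pi

def permeance(area, length, mu_r=1.0):
    """Flux-tube permeance P = mu * A / l (the inverse of reluctance)."""
    return mu_r * MU0 * area / length

P_mag = permeance(4e-4, 5e-3, mu_r=1.05)      # PM branch (recoil permeability)
P_gap = permeance(4e-4, 1e-3)                 # air-gap branch
P_steel = permeance(4e-4, 2e-2, mu_r=4000.0)  # iron branch
P_leak = permeance(1e-4, 8e-3)                # leakage branch
F_pm = 4.0e3                                  # magnet MMF [A-turns] (assumed)

# Nodal equations G u = b for magnetic scalar potentials, return path grounded:
# PM source feeds node 0 via P_mag; node 0 couples to node 1 via P_gap;
# node 1 returns via P_steel; both nodes leak to ground via P_leak.
G = np.array([[P_mag + P_gap + P_leak, -P_gap],
              [-P_gap, P_gap + P_steel + P_leak]])
b = np.array([P_mag * F_pm, 0.0])
u = np.linalg.solve(G, b)
print("air-gap flux [Wb]:", P_gap * (u[0] - u[1]))
```

In a nonlinear MEC the iron permeances would be re-evaluated from a B-H curve and the solve repeated until the fluxes converge, which is where the saturation modelling mentioned in the abstract enters.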
Parallel Spectral Acquisition with an Ion Cyclotron Resonance Cell Array.
Park, Sung-Gun; Anderson, Gordon A; Navare, Arti T; Bruce, James E
2016-01-19
Mass measurement accuracy is a critical analytical figure-of-merit in most areas of mass spectrometry application. However, the time required for acquisition of high-resolution, high mass accuracy data limits many applications and is an aspect under continual pressure for development. Current efforts target implementation of higher electrostatic and magnetic fields because ion oscillatory frequencies increase linearly with field strength. As such, the time required for spectral acquisition of a given resolving power and mass accuracy decreases linearly with increasing fields. Mass spectrometer developments to include multiple high-resolution detectors that can be operated in parallel could further decrease the acquisition time by a factor of n, the number of detectors. Efforts described here resulted in development of an instrument with a set of Fourier transform ion cyclotron resonance (ICR) cells as detectors that constitute the first MS array capable of parallel high-resolution spectral acquisition. ICR cell array systems consisting of three or five cells were constructed with printed circuit boards and installed within a single superconducting magnet and vacuum system. Independent ion populations were injected and trapped within each cell in the array. Upon filling the array, all ions in all cells were simultaneously excited and ICR signals from each cell were independently amplified and recorded in parallel. Presented here are the initial results of successful parallel spectral acquisition, parallel mass spectrometry (MS) and MS/MS measurements, and parallel high-resolution acquisition with the MS array system.
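For scale, the quantity driving these acquisition-time arguments is the unperturbed cyclotron frequency f = zeB/(2πm). The short sketch below (standard physical constants; the m/z value is an arbitrary example) evaluates it, and the comment records the throughput argument for an n-cell array.

    import numpy as np

    E_CHARGE = 1.602176634e-19   # C
    DALTON = 1.66053906660e-27   # kg

    def icr_frequency(mz, B):
        # Unperturbed cyclotron frequency f = z*e*B / (2*pi*m), singly charged
        return E_CHARGE * B / (2.0 * np.pi * mz * DALTON)

    # FT-ICR resolving power scales with f * T_acq, so at a fixed field the
    # acquisition time for a target resolution is fixed; an n-cell array
    # acquiring in parallel raises throughput by ~n without changing f.
    for B in (7.0, 21.0):
        print(f"B = {B:4.1f} T: f = {icr_frequency(500.0, B)/1e3:8.1f} kHz at m/z 500")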
FOLDER: A numerical tool to simulate the development of structures in layered media
NASA Astrophysics Data System (ADS)
Adamuszek, Marta; Dabrowski, Marcin; Schmid, Daniel W.
2015-04-01
FOLDER is a numerical toolbox for modelling deformation in layered media during layer-parallel shortening or extension in two dimensions. FOLDER builds on MILAMIN [1], a finite element method based mechanical solver, with a range of utilities included from the MUTILS package [2]. The numerical mesh is generated using the Triangle software [3]. The toolbox includes features that allow for: 1) designing complex structures such as multi-layer stacks, 2) accurately simulating large-strain deformation of linear and non-linear viscous materials, and 3) post-processing of various physical fields such as velocity (total and perturbing), rate of deformation, finite strain, stress, deviatoric stress, pressure, and apparent viscosity. FOLDER is designed to ensure maximum flexibility in configuring model geometry, defining material parameters, specifying the range of numerical parameters in simulations, and choosing the plotting options. FOLDER is an open-source MATLAB application and comes with a user-friendly graphical interface. The toolbox additionally comprises an educational application that illustrates various analytical solutions of growth rates calculated for the cases of folding and necking of a single layer with interfaces perturbed with a single sinusoidal waveform. We further derive two novel analytical expressions for the growth rate in the cases of folding and necking of a linear viscous layer embedded in a linear viscous medium of finite thickness. We use FOLDER to test the accuracy of single-layer folding simulations using various 1) spatial and temporal resolutions, 2) time integration schemes, and 3) iterative algorithms for non-linear materials. The accuracy of the numerical results is quantified by: 1) comparing them to an analytical solution, if available, or 2) running convergence tests. As a result, we provide a map of the optimal choice of grid size, time step, and number of iterations to keep the results of the numerical simulations below a given error for a given time integration scheme. We also demonstrate that the Euler and Leapfrog time integration schemes are not recommended for any practical use. Finally, the capabilities of the toolbox are illustrated with two examples: 1) shortening of a synthetic multi-layer sequence and 2) extension of a folded quartz vein embedded in phyllite from Sprague Upper Reservoir (an example discussed by Sherwin and Chapple [4]). The latter example demonstrates that FOLDER can be successfully used for reverse modelling and mechanical restoration. [1] Dabrowski, M., Krotkiewski, M., and Schmid, D. W., 2008, MILAMIN: MATLAB-based finite element method solver for large problems. Geochemistry Geophysics Geosystems, vol. 9. [2] Krotkiewski, M. and Dabrowski, M., 2010, Parallel symmetric sparse matrix-vector product on scalar multi-core CPUs. Parallel Computing, 36(4):181-198. [3] Shewchuk, J. R., 1996, Triangle: Engineering a 2D Quality Mesh Generator and Delaunay Triangulator. In: Applied Computational Geometry: Towards Geometric Engineering (Ming C. Lin and Dinesh Manocha, editors), Vol. 1148 of Lecture Notes in Computer Science, pp. 203-222, Springer-Verlag, Berlin. [4] Sherwin, J.A. and Chapple, W.M., 1968, Wavelengths of single layer folds: a comparison between theory and observation. American Journal of Science, 266(3), pp. 167-179.
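For background, the classical Biot result for folding of a single stiff layer in an infinite weaker medium (unlike the finite-thickness expressions derived in the paper) gives the dominant wavelength L_d = 2πh (μ_layer / 6μ_medium)^(1/3); a minimal evaluation in Python:

    import numpy as np

    def biot_dominant_wavelength(h, mu_layer, mu_medium):
        # Classical Biot dominant wavelength for a stiff linear viscous layer
        # of thickness h embedded in an infinite weaker viscous medium.
        return 2.0 * np.pi * h * (mu_layer / (6.0 * mu_medium)) ** (1.0 / 3.0)

    # e.g. a 1 m thick layer with a viscosity contrast of 100 folds at ~16 m
    print(biot_dominant_wavelength(1.0, 1e20, 1e18))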
Hybrid massively parallel fast sweeping method for static Hamilton–Jacobi equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Detrixhe, Miles, E-mail: mdetrixhe@engineering.ucsb.edu; University of California Santa Barbara, Santa Barbara, CA, 93106; Gibou, Frédéric, E-mail: fgibou@engineering.ucsb.edu
The fast sweeping method is a popular algorithm for solving a variety of static Hamilton–Jacobi equations. Fast sweeping algorithms for parallel computing have been developed, but are severely limited. In this work, we present a multilevel, hybrid parallel algorithm that combines the desirable traits of two distinct parallel methods. The fine- and coarse-grained components of the algorithm take advantage of the heterogeneous computer architecture common in high performance computing facilities. We present the algorithm and demonstrate its effectiveness on a set of example problems including optimal control, dynamic games, and seismic wave propagation. We give results for convergence and parallel scaling, and show state-of-the-art speedup values for the fast sweeping method.
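The serial kernel that such parallel schemes decompose is the sweeping update itself. A minimal 2-D Python sketch (single-threaded, four sweep orderings, not the hybrid fine/coarse-grained scheme of the paper) for the eikonal case |grad T| = 1/speed:

    import numpy as np

    def fast_sweep_eikonal(speed, h, src, n_sweeps=4):
        # Gauss-Seidel sweeps with alternating orderings (Zhao-style update)
        ny, nx = speed.shape
        T = np.full((ny, nx), np.inf)
        T[src] = 0.0
        for _ in range(n_sweeps):
            for sy, sx in [(1, 1), (-1, 1), (1, -1), (-1, -1)]:
                for i in range(ny)[::sy]:
                    for j in range(nx)[::sx]:
                        a = min(T[max(i - 1, 0), j], T[min(i + 1, ny - 1), j])
                        b = min(T[i, max(j - 1, 0)], T[i, min(j + 1, nx - 1)])
                        f = h / speed[i, j]
                        if abs(a - b) >= f:
                            t_new = min(a, b) + f
                        else:
                            t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                        T[i, j] = min(T[i, j], t_new)
        return T

    T = fast_sweep_eikonal(np.ones((101, 101)), 0.01, (50, 50))
    print(T[50, 0])   # ~0.5: distance from the centre to the edge at unit speed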
An analytical model of memristors in plants
Markin, Vladislav S; Volkov, Alexander G; Chua, Leon
2014-01-01
The memristor, a resistor with memory, was postulated by Chua in 1971, and the first solid-state memristor was built in 2008. Recently, we found memristors in vivo in plants. Here we propose a simple analytical model of two types of memristors that can be found within plants. The electrostimulation of plants by bipolar periodic waves induces electrical responses in Aloe vera and Mimosa pudica with fingerprints of memristors. The memristive properties of Aloe vera and Mimosa pudica are linked to the properties of voltage-gated K+ ion channels. The potassium channel blocker TEACl transforms plant memristors into conventional resistors. The analytical model of a memristor with a capacitor connected in parallel exhibits different characteristic behavior at low and high frequencies of applied voltage, consistent with experimental data obtained by cyclic voltammetry in vivo. PMID:25482769
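A minimal numerical sketch of the circuit idea (a generic charge-controlled memristor with a parallel capacitor, with invented parameter values rather than the plant model of the paper) reproduces the frequency dependence: at low drive frequency the memristive current dominates, at high frequency the capacitive current takes over.

    import numpy as np
    from scipy.integrate import solve_ivp

    R_ON, R_OFF, Q0, C = 1e4, 1e6, 1e-6, 5e-8   # illustrative values only

    def memristance(q):
        # Memristance varies smoothly between R_OFF and R_ON with stored charge
        x = np.clip(q / Q0, 0.0, 1.0)
        return R_OFF + (R_ON - R_OFF) * x

    def rhs(t, y, f):
        v = np.sin(2 * np.pi * f * t)            # bipolar periodic drive, 1 V
        return [v / memristance(y[0])]           # dq/dt = memristor current

    for f in (0.1, 100.0):                       # low vs high drive frequency
        ts = np.linspace(0.0, 2.0 / f, 2000)
        sol = solve_ivp(rhs, (0.0, 2.0 / f), [0.0], t_eval=ts, args=(f,),
                        max_step=0.01 / f)
        v = np.sin(2 * np.pi * f * sol.t)
        i_total = v / memristance(sol.y[0]) + C * np.gradient(v, sol.t)
        print(f"f = {f:6.1f} Hz: total current range = {i_total.ptp():.2e} A")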
ANALYTIC FORMS OF THE PERPENDICULAR DIFFUSION COEFFICIENT IN NRMHD TURBULENCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shalchi, A., E-mail: andreasm4@yahoo.com
2015-02-01
In the past, different analytic limits for the perpendicular diffusion coefficient of energetic particles interacting with magnetic turbulence were discussed. These different limits or cases correspond to different transport modes describing how the particles diffuse across the large-scale magnetic field. In the current paper we describe a new transport regime by considering the model of noisy reduced magnetohydrodynamic turbulence. We derive different analytic forms of the perpendicular diffusion coefficient, focusing on the aforementioned new transport mode. We show that for this turbulence model a small perpendicular diffusion coefficient can be obtained, so that the latter is more than a hundred times smaller than the parallel diffusion coefficient. This result is relevant for explaining observations in the solar system, where such small perpendicular diffusion coefficients have been reported.
Examining Cohort Effects in Developmental Trajectories of Substance Use
ERIC Educational Resources Information Center
Burns, Alison Reimuller; Hussong, Andrea M.; Solis, Jessica M.; Curran, Patrick J.; McGinley, James S.; Bauer, Daniel J.; Chassin, Laurie; Zucker, Robert A.
2017-01-01
The current study demonstrates the application of an analytic approach for incorporating multiple time trends in order to examine the impact of cohort effects on individual trajectories of eight drugs of abuse. Parallel analysis of two independent, longitudinal studies of high-risk youth that span ages 10 to 40 across 23 birth cohorts between 1968…
Analytical quality by design: a tool for regulatory flexibility and robust analytics.
Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy
2015-01-01
Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDA) with regulatory flexibility for quality by design (QbD) based analytical approaches. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results due to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists about the implementation of AQbD in the pharmaceutical quality system and also correlates it with product quality by design and pharmaceutical analytical technology (PAT).
NASA Astrophysics Data System (ADS)
Fiore, Sandro; Williams, Dean; Aloisio, Giovanni
2016-04-01
In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and ecosystems where petabytes (PB) of data can be available and data can be distributed and/or replicated (e.g., the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5 PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5)). Most of the tools currently available for scientific data analysis in the climate domain fail at large scale since they: (1) are desktop based and need the data locally; (2) are sequential, so do not benefit from available multicore/parallel machines; (3) do not provide declarative languages to express scientific data analysis tasks; (4) are domain-specific, which ties their adoption to a specific domain; and (5) do not provide workflow support to enable the definition of complex "experiments". The Ophidia project aims at facing most of the challenges highlighted above by providing a big data analytics framework for eScience. Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes ("datacubes"). The project relies on a strong background in high performance database management and OLAP systems to manage large scientific data sets. It also provides native workflow management support, to define processing chains and workflows with tens to hundreds of data analytics operators to build real scientific use cases. With regard to interoperability aspects, the talk will present the contributions provided both to the RDA Working Group on Array Databases and to the Earth System Grid Federation (ESGF) Compute Working Team. Also highlighted will be the results of large-scale climate model intercomparison data analysis experiments, for example: (1) defined in the context of the EU H2020 INDIGO-DataCloud project; (2) implemented in a real geographically distributed environment involving CMCC (Italy) and LLNL (US) sites; (3) exploiting Ophidia as a server-side, parallel analytics engine; and (4) applied to real CMIP5 data sets available through ESGF.
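As a generic illustration of datacube-style reduction (plain NumPy, not Ophidia's operator set or storage model), the sketch below reduces a synthetic (time, lat, lon) cube the way a declarative chain such as "subset -> reduce(avg, dim=time)" would be expressed:

    import numpy as np

    rng = np.random.default_rng(0)
    # synthetic (time, lat, lon) cube: 10 years of monthly 2-degree fields
    cube = rng.normal(288.0, 5.0, size=(120, 90, 180))

    time_mean = cube.mean(axis=0)            # climatology map
    zonal_mean = cube.mean(axis=2)           # (time, lat) view
    global_series = cube.mean(axis=(1, 2))   # area-naive global mean per month

    print(time_mean.shape, zonal_mean.shape, global_series.shape)

In a server-side system the same reductions run next to the data, partitioned across storage fragments, rather than on a desktop copy.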
Interaction of a conductive crack and of an electrode at a piezoelectric bimaterial interface
NASA Astrophysics Data System (ADS)
Onopriienko, Oleg; Loboda, Volodymyr; Sheveleva, Alla; Lapusta, Yuri
2018-06-01
The interaction of a conductive crack and an electrode at a piezoelectric bimaterial interface is studied. The bimaterial is subjected to an in-plane electrical field parallel to the interface and an anti-plane mechanical loading. The problem is formulated and reduced, via the application of sectionally analytic vector functions, to a combined Dirichlet-Riemann boundary value problem. Simple analytical expressions for the stress, the electric field, and their intensity factors, as well as for the crack faces' displacement jump, are derived. Our numerical results illustrate the proposed approach and permit us to draw some conclusions on the crack-electrode interaction.
In-situ spectrophotometric probe
Prather, William S.
1992-01-01
A spectrophotometric probe for in situ absorption spectra measurements comprising a first optical fiber carrying light from a remote light source, a second optical fiber carrying light to a remote spectrophotometer, the proximal ends of the first and second optical fibers parallel and coterminal, a planoconvex lens to collimate light from the first optical fiber, a reflecting grid positioned a short distance from the lens to reflect the collimated light back to the lens for focussing on the second optical fiber. The lens is positioned with the convex side toward the optical fibers. A substrate for absorbing analyte or an analyte and reagent mixture may be positioned between the lens and the reflecting grid.
NASA Astrophysics Data System (ADS)
Lumentut, M. F.; Howard, I. M.
2013-03-01
Power harvesters that extract energy from vibrating systems via piezoelectric transduction show strong potential for powering smart wireless sensor devices in applications of health condition monitoring of rotating machinery and structures. This paper presents an analytical method for modelling an electromechanical piezoelectric bimorph beam with tip mass under two input base transverse and longitudinal excitations. The Euler-Bernoulli beam equations were used to model the piezoelectric bimorph beam. The polarity-electric field of the piezoelectric element is excited by the strain field caused by base input excitation, resulting in electrical charge. The governing electromechanical dynamic equations were derived analytically using the weak form of the Hamiltonian principle to obtain the constitutive equations. Three constitutive electromechanical dynamic equations based on independent coefficients of virtual displacement vectors were formulated and then further modelled using the normalised Ritz eigenfunction series. The electromechanical formulations include both the series and parallel connections of the piezoelectric bimorph. The multi-mode frequency response functions (FRFs) under varying electrical load resistance were formulated using the Laplace transformation for the multi-input mechanical vibrations, providing the multi-output dynamic displacement, velocity, voltage, current and power. Experimental and theoretical validations, reduced to the single-mode system, were shown to provide reasonable predictions. The model results from polar base excitation for off-axis input motions were validated with experimental results, showing the change in the electrical power frequency response amplitude as a function of excitation angle, with relevance for practical implementation.
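A single-mode sketch of such an electromechanical FRF (illustrative lumped parameters, not the paper's bimorph model) shows how the voltage output shifts with load resistance:

    import numpy as np

    # Lumped single-mode harvester: m x'' + c x' + k x - theta v = F,
    # C_p v' + v/R + theta x' = 0 (all parameter values invented).
    m, c, k = 1e-3, 0.05, 4e3          # kg, N s/m, N/m
    theta, C_p = 1e-4, 2e-8            # N/V, F

    def voltage_frf(omega, R):
        # Eliminate X between the two harmonic equations; V/F follows directly
        Ze = 1j * omega * C_p + 1.0 / R            # electrical admittance
        Zm = -m * omega**2 + 1j * omega * c + k    # mechanical dynamic stiffness
        return -1j * omega * theta / (Ze * Zm + 1j * omega * theta**2)

    w = 2 * np.pi * np.linspace(100.0, 500.0, 2000)   # sweep around ~318 Hz
    for R in (1e3, 1e5, 1e7):
        print(f"R = {R:.0e} ohm: peak |V/F| = {np.abs(voltage_frf(w, R)).max():.2f} V/N")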
NASA Astrophysics Data System (ADS)
Sportelli, M. C.; Picca, R. A.; Manoli, K.; Re, M.; Pesce, E.; Tapfer, L.; Di Franco, C.; Cioffi, N.; Torsi, L.
2017-10-01
The analytical performance of bioelectronic devices is highly influenced by their fabrication methods. In particular, the final architecture of field-effect transistor biosensors combining a spin-cast poly(3-hexylthiophene) (P3HT) film and a biomolecule interlayer deposited on a SiO2/Si substrate can lead to the development of highly performing sensing systems, as in the case of streptavidin (SA) used for biotin sensing. To gain a better understanding of the quality of the interfacial area, it is critical to assess the morphological features characteristic of the adopted biolayer deposition protocol, namely the layer-by-layer (LbL) approach and the spin coating technique. The present study relies on a combined surface spectroscopic and morphological characterization. Specifically, X-ray photoelectron spectroscopy operated in the parallel angle-resolved mode allowed the non-destructive investigation of the in-depth chemical composition of the SA film, alone or in the presence of the P3HT overlayer. Spectroscopic data were supported and corroborated by the results of Scanning Electron and Helium Ion microscope investigations performed on the SA layer, which provided relevant information on the protein structural arrangement and its surface morphology. Clear differences emerged between the SA layers prepared by the two approaches, with the layer-by-layer deposition resulting in a smoother and better defined bio-electronic interface. Such findings support the superior analytical performance shown by bioelectronic devices based on LbL-deposited protein layers over spin-coated ones.
NASA Astrophysics Data System (ADS)
Zoller, Christian; Hohmann, Ansgar; Ertl, Thomas; Kienle, Alwin
2017-07-01
The Monte Carlo method is often referred to as the gold standard for calculating light propagation in turbid media [1]. Especially for complex-shaped geometries where no analytical solutions are available, the Monte Carlo method becomes very important [1, 2]. In this work a Monte Carlo software package is presented to simulate light propagation in complex-shaped geometries. To reduce the simulation time, the code is based on OpenCL, so that graphics cards as well as other computing devices can be used. Within the software, an illumination concept is presented that easily realizes all kinds of light sources, such as spatial frequency domain (SFD) illumination, optical fibers, or Gaussian beam profiles. Moreover, different objects that are not connected to each other can be considered simultaneously, without any additional preprocessing. This Monte Carlo software can be used for many applications. In this work the transmission spectrum of a tooth and the color reconstruction of a virtual object are shown, using results from the Monte Carlo software.
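A bare-bones illustration of the underlying random walk (a 1-D toy with isotropic scattering, far simpler than the OpenCL code described above) estimates transmission through a slab:

    import numpy as np

    rng = np.random.default_rng(1)

    def mc_slab_transmission(n_photons, mu_a, mu_s, thickness):
        # Photons step exponential free paths; each event is absorption
        # (probability mu_a/mu_t) or an isotropic change of direction.
        transmitted = 0
        mu_t = mu_a + mu_s
        for _ in range(n_photons):
            z, uz = 0.0, 1.0                           # enter heading inward
            while True:
                z += uz * rng.exponential(1.0 / mu_t)  # free path to next event
                if z < 0.0:
                    break                              # escaped back out
                if z > thickness:
                    transmitted += 1
                    break
                if rng.random() < mu_a / mu_t:
                    break                              # absorbed
                uz = rng.uniform(-1.0, 1.0)            # isotropic direction cosine
        return transmitted / n_photons

    print(mc_slab_transmission(20000, mu_a=0.1, mu_s=10.0, thickness=1.0))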
2018-01-01
Signaling pathways represent parts of the global biological molecular network, which connects them into a seamless whole through complex direct and indirect (hidden) crosstalk whose structure can change during development or in pathological conditions. We suggest a novel methodology, called Googlomics, for the structural analysis of directed biological networks using spectral analysis of their Google matrices, drawing on parallels with quantum scattering theory developed for nuclear and mesoscopic physics and quantum chaos. We introduce the analytical “reduced Google matrix” method for the analysis of biological network structure. The method allows inferring hidden causal relations between the members of a signaling pathway or a functionally related group of genes. We investigate how the structure of hidden causal relations can be reprogrammed as a result of changes in the transcriptional network layer during carcinogenesis. The suggested Googlomics approach rigorously characterizes complex systemic changes in the wiring of large causal biological networks in a computationally efficient way. PMID:29370181
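The Google matrix underlying this analysis is standard; a small sketch (the reduced-matrix algebra of the paper goes beyond this) builds G = alpha*S + (1-alpha)/N and extracts the steady state by power iteration:

    import numpy as np

    def google_matrix(A, alpha=0.85):
        # A[i, j] = 1 for a link j -> i; S is column-stochastic, dangling
        # columns are replaced by uniform 1/N.
        A = np.asarray(A, dtype=float)
        N = A.shape[0]
        cols = A.sum(axis=0)
        S = np.where(cols > 0, A / np.where(cols == 0, 1, cols), 1.0 / N)
        return alpha * S + (1.0 - alpha) / N

    def pagerank(G, tol=1e-12):
        p = np.full(G.shape[0], 1.0 / G.shape[0])
        while True:                       # power iteration on the Google matrix
            p_new = G @ p
            if np.abs(p_new - p).sum() < tol:
                return p_new
            p = p_new

    A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 0, 0, 1], [1, 0, 1, 0]])
    print(pagerank(google_matrix(A)).round(3))   # toy 4-gene directed network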
Regulation of T-cell receptor signalling by membrane microdomains
Razzaq, Tahir M; Ozegbe, Patricia; Jury, Elizabeth C; Sembi, Phupinder; Blackwell, Nathan M; Kabouridis, Panagiotis S
2004-01-01
There is now considerable evidence suggesting that the plasma membrane of mammalian cells is compartmentalized by functional lipid raft microdomains. These structures are assemblies of specialized lipids and proteins and have been implicated in diverse biological functions. Analysis of their protein content using proteomics and other methods revealed enrichment of signalling proteins, suggesting a role for these domains in intracellular signalling. In T lymphocytes, structure/function experiments and complementary pharmacological studies have shown that raft microdomains control the localization and function of proteins which are components of signalling pathways regulated by the T-cell antigen receptor (TCR). Based on these studies, a model for TCR phosphorylation in lipid rafts is presented. However, despite substantial progress in the field, critical questions remain. For example, it is unclear if membrane rafts represent a homogeneous population and if their structure is modified upon TCR stimulation. In the future, proteomics and the parallel development of complementary analytical methods will undoubtedly contribute in further delineating the role of lipid rafts in signal transduction mechanisms. PMID:15554919
NASA Technical Reports Server (NTRS)
Martin, J. A.
1974-01-01
A general analytical treatment is presented of a single-stage vehicle with multiple propulsion phases. A closed-form solution for the cost and for the performance and a derivation of the optimal phasing of the propulsion are included. Linearized variations in the inert weight elements are included, and the function to be minimized can be selected. The derivation of optimal phasing results in a set of nonlinear algebraic equations for optimal fuel volumes, for which a solution method is outlined. Three specific example cases are analyzed: minimum gross lift-off weight, minimum inert weight, and a minimized general function for a two-phase vehicle. The results for the two-phase vehicle are applied to the dual-fuel rocket. Comparisons with single-fuel vehicles indicate that dual-fuel vehicles can have lower inert weight either by development of a dual-fuel engine or by parallel burning of separate engines from lift-off.
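The performance side of such an analysis rests on the staged rocket equation; a minimal sketch (purely illustrative masses and specific impulses, not the paper's optimized solution) sums the ideal velocity over sequential propulsion phases of a single stage:

    import numpy as np

    G0 = 9.80665  # m/s^2

    def multiphase_dv(m0, phases):
        # phases = [(isp_seconds, propellant_mass_kg), ...] in burn order;
        # each phase contributes g0 * Isp * ln(m_start / m_end).
        dv, mass = 0.0, m0
        for isp, m_prop in phases:
            dv += G0 * isp * np.log(mass / (mass - m_prop))
            mass -= m_prop
        return dv

    # dual-fuel example: dense propellant first, hydrogen second
    print(multiphase_dv(100_000.0, [(330.0, 55_000.0), (450.0, 25_000.0)]))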
NASA Astrophysics Data System (ADS)
Etaiw, Safaa El-din H.; Abd El-Aziz, Dina M.; Marie, Hassan; Ali, Elham
2018-05-01
Two new supramolecular coordination polymers, namely {[Cd(NA)2(H2O)]}, SCP 1, and {[Pb(NA)2]}, SCP 2 (NA = nicotinate ligand), were synthesized by a self-assembly method and structurally characterized by different analytical and spectroscopic methods. Single-crystal X-ray diffraction showed that SCP 1 extends in three dimensions, containing a bore structure in which the 3D network is constructed via interweaving zigzag chains. The Cd atom coordinates to (O4N2) atoms, forming a distorted octahedral configuration. The structure of SCP 2 extends down the projection of the b-axis, creating parallel zigzag 1D chains connected by μ2-O2 atoms and H-bonds, forming a holodirected lead(II) hexagonal bipyramid configuration. SCP 2 extends to a 3D network via coordinate and hydrogen bonds. The thermal stability, photoluminescence properties, and photocatalytic activity for the degradation of methylene blue dye (MB) under UV irradiation and sunlight irradiation were also studied.
Mitigation of intra-channel nonlinearities using a frequency-domain Volterra series equalizer.
Guiomar, Fernando P; Reis, Jacklyn D; Teixeira, António L; Pinto, Armando N
2012-01-16
We address the issue of intra-channel nonlinear compensation using a Volterra series nonlinear equalizer based on an analytical closed-form solution for the 3rd-order Volterra kernel in the frequency domain. The performance of the method is investigated through numerical simulations for a single-channel optical system using a 20 Gbaud NRZ-QPSK test signal propagated over 1600 km of both standard single-mode fiber and non-zero dispersion-shifted fiber. We carry out performance and computational effort comparisons with the well-known backward-propagation split-step Fourier (BP-SSF) method. The alias-free frequency-domain implementation of the Volterra series nonlinear equalizer makes it an attractive approach for working at low sampling rates, enabling it to surpass the maximum performance of BP-SSF at 2× oversampling. Linear and nonlinear equalization can be treated independently, providing more flexibility to the equalization subsystem. The parallel structure of the algorithm is also a key advantage in terms of real-time implementation.
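For contrast with the analytical equalizer, each BP-SSF step interleaves a frequency-domain dispersion operator with a time-domain nonlinear phase rotation. A minimal symmetric step for the scalar NLSE (one common sign convention; illustrative fiber parameters) looks like:

    import numpy as np

    def ssf_step(A, dz, dt, beta2, gamma):
        # Symmetric split step: half dispersion, full nonlinearity, half dispersion
        w = 2 * np.pi * np.fft.fftfreq(A.size, dt)          # angular frequencies
        half_disp = np.exp(1j * (beta2 / 2) * w**2 * (dz / 2))
        A = np.fft.ifft(half_disp * np.fft.fft(A))
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)      # Kerr phase rotation
        return np.fft.ifft(half_disp * np.fft.fft(A))

    t = np.linspace(-50e-12, 50e-12, 1024)
    A = np.exp(-t**2 / (2 * (5e-12)**2)).astype(complex)    # Gaussian pulse
    A = ssf_step(A, dz=1e3, dt=t[1] - t[0], beta2=-21e-27, gamma=1.3e-3)
    print(np.abs(A).max())

Backward propagation runs many such steps with the signs of beta2 and gamma flipped; the Volterra equalizer replaces that iteration with a closed-form frequency-domain kernel.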
NASA Technical Reports Server (NTRS)
Ricks, R. C. (Compiler); Lushbaugh, C. C. (Compiler)
1975-01-01
The radiobiologic studies carried out with joint AEC/ERDA and NASA support during the years 1964 to 1974 at the Medical Division of Oak Ridge Associated Universities are presented. The physiologic data generated were similar in many ways to those previously observed in other medical radiobiologic experiences. They differed, however, in the methods of data acquisition and analysis. Instead of more conventional analytical methods, pulmonary impedance was recorded and quantitated as a measure of radiation-induced gastrointestinal distress and fatigability. While refinements in dose response related to gastrointestinal distress were accomplished, it was also found that, through the use of Fourier analysis of the pulmonary impedance waveform, GI distress could easily be recognized and quantified even when the initial stages of nausea were below the subjects' subjective level of recognition. The results demonstrate that changes in the pulmonary impedance waveform closely parallel well-defined stages of GI distress, i.e., initial nausea, a progressive increase in nausea, and finally vomiting episodes.
Metrology for hydrogen energy applications: a project to address normative requirements
NASA Astrophysics Data System (ADS)
Haloua, Frédérique; Bacquart, Thomas; Arrhenius, Karine; Delobelle, Benoît; Ent, Hugo
2018-03-01
Hydrogen represents a clean and storable energy solution that could meet worldwide energy demands and reduce greenhouse gas emissions. The joint research project (JRP) ‘Metrology for sustainable hydrogen energy applications’ addresses standardisation needs through pre- and co-normative metrology research in the fast-emerging sector of hydrogen fuel, meeting the requirements of the European Directive 2014/94/EU by supplementing the revision of two ISO standards that are currently too generic to enable a sustainable implementation of hydrogen. The hydrogen purity dispensed at refueling points should comply with the technical specifications of ISO 14687-2 for fuel cell electric vehicles. The rapid progress of fuel cell technology now requires revising this standard towards less constraining limits for the 13 gaseous impurities. In parallel, optimized validated analytical methods are proposed to reduce the number of analyses. The study also aims at developing and validating traceable methods to assess accurately the hydrogen mass absorbed and stored in metal hydride tanks; this is a research axis for the revision of the ISO 16111 standard to develop this safe storage technique for hydrogen. The probability of hydrogen impurity presence affecting fuel cells and analytical techniques for traceable measurements of hydrogen impurities will be assessed, and new data on maximum concentrations of impurities based on degradation studies will be proposed. Novel validated methods for measuring the hydrogen mass absorbed in hydride tanks of AB, AB2 and AB5 types referenced in ISO 16111 will be determined, as the methods currently available do not provide accurate results. The outputs here will have a direct impact on the standardisation work for the ISO 16111 and ISO 14687-2 revisions in the relevant working groups of ISO/TC 197 ‘Hydrogen technologies’.
NASA Astrophysics Data System (ADS)
Peng, Bo; Blackman, Eric
2018-01-01
Closely interacting binary stars can incur Common Envelope Evolution (CEE) when at least one of the stars enters a giant phase. The extent to which CEE leads to envelope ejection, and how tight the binaries become after CEE as a function of the mass and type of the companion stars, has a broad range of phenomenological implications for both low mass and high mass binary stellar systems. Global simulations of CEE are emerging, but to understand the underlying physics of CEE and make connections with analytic formalisms, it is helpful to employ reduced numerical models. Here we present results and analyses from simulations of gravitational drag using a Cartesian approach. Using AstroBEAR, a parallelized hydrodynamic/MHD simulation code, we simulate a system in which a 0.1 M_Sun main sequence secondary star is embedded in gas characteristic of the envelope of a 3 M_Sun AGB star. The relative motion of the secondary star against the stationary envelope is represented by a supersonic wind that immerses a point particle, which is initially at rest yet gradually dragged by the wind. Our approach differs from previous related wind-tunnel work by MacLeod et al. (2015, 2017) in that we allow the particle to be displaced, offering a direct measurement of the drag force from its motion. We verify the validity of our method, extract the accretion rate of material in the wake via numerical integration, and compare the results between our method and previous work. We also use the results to help constrain the efficiency parameter in widely used analytic parameterizations of CEE.
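Back-of-envelope Bondi-Hoyle-Lyttleton scales for this kind of setup (all numbers illustrative assumptions, not taken from the simulations) can be checked directly:

    import numpy as np

    G = 6.674e-11          # m^3 kg^-1 s^-2
    M_SUN = 1.989e30       # kg

    M = 0.1 * M_SUN        # embedded secondary
    rho = 1e-4             # assumed ambient envelope density, kg/m^3
    v = 3e4                # assumed relative wind speed, m/s

    R_acc = 2 * G * M / v**2                 # BHL accretion radius
    mdot = np.pi * R_acc**2 * rho * v        # mass capture rate through that disc
    F_drag = mdot * v                        # momentum-flux estimate of the drag

    print(f"R_acc = {R_acc:.2e} m, mdot = {mdot:.2e} kg/s, F_drag = {F_drag:.2e} N")

Comparing the measured drag against estimates of this kind is exactly the calibration the displaced-particle setup enables.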
Reiser, Vladimír; Smith, Ryan C; Xue, Jiyan; Kurtz, Marc M; Liu, Rong; Legrand, Cheryl; He, Xuanmin; Yu, Xiang; Wong, Peggy; Hinchcliffe, John S; Tanen, Michael R; Lazar, Gloria; Zieba, Renata; Ichetovkin, Marina; Chen, Zhu; O'Neill, Edward A; Tanaka, Wesley K; Marton, Matthew J; Liao, Jason; Morris, Mark; Hailman, Eric; Tokiwa, George Y; Plump, Andrew S
2011-11-01
With expanding biomarker discovery efforts and increasing costs of drug development, it is critical to maximize the value of mass-limited clinical samples. The main limitation of available methods is the inability to isolate and analyze, from a single sample, molecules requiring incompatible extraction methods. Thus, we developed a novel semiautomated method for tissue processing and tissue milling and division (TMAD). We used a SilverHawk atherectomy catheter to collect atherosclerotic plaques from patients requiring peripheral atherectomy. Tissue preservation by flash freezing was compared with immersion in RNAlater®, and tissue grinding by traditional mortar and pestle was compared with TMAD. Comparators were protein, RNA, and lipid yield and quality. Reproducibility of analyte yield from aliquots of the same tissue sample processed by TMAD was also measured. The quantity and quality of biomarkers extracted from tissue prepared by TMAD was at least as good as that extracted from tissue stored and prepared by traditional means. TMAD enabled parallel analysis of gene expression (quantitative reverse-transcription PCR, microarray), protein composition (ELISA), and lipid content (biochemical assay) from as little as 20 mg of tissue. The mean correlation was r = 0.97 in molecular composition (RNA, protein, or lipid) between aliquots of individual samples generated by TMAD. We also demonstrated that it is feasible to use TMAD in a large-scale clinical study setting. The TMAD methodology described here enables semiautomated, high-throughput sampling of small amounts of heterogeneous tissue specimens by multiple analytical techniques with generally improved quality of recovered biomolecules.
Buelow, Daelynn; Sun, Yilun; Tang, Li; Gu, Zhengming; Pounds, Stanley; Hayden, Randall
2016-07-01
Monitoring of Epstein-Barr virus (EBV) load in immunocompromised patients has become integral to their care. An increasing number of reagents are available for quantitative detection of EBV; however, there are few published comparative data. Four real-time PCR systems (one using laboratory-developed reagents and three using analyte-specific reagents) were compared with one another for detection of EBV from whole blood. Whole blood specimens seeded with EBV were used to determine quantitative linearity, analytical measurement range, lower limit of detection, and CV for each assay. Retrospective testing of 198 clinical samples was performed in parallel with all methods; results were compared to determine relative quantitative and qualitative performance. All assays showed similar performance. No significant difference was found in limit of detection (3.12-3.49 log10 copies/mL; P = 0.37). A strong qualitative correlation was seen with all assays that used clinical samples (positive detection rates of 89.5%-95.8%). Quantitative correlation of clinical samples across assays was also seen in pairwise regression analysis, with R(2) ranging from 0.83 to 0.95. Normalizing clinical sample results to IU/mL did not alter the quantitative correlation between assays. Quantitative EBV detection by real-time PCR can be performed over a wide linear dynamic range, using three different commercially available reagents and laboratory-developed methods. EBV was detected with comparable sensitivity and quantitative correlation for all assays.
Some fast elliptic solvers on parallel architectures and their complexities
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Y.
1989-01-01
The discretization of separable elliptic partial differential equations leads to linear systems with special block tridiagonal matrices. Several methods are known to solve these systems, the most general of which is the Block Cyclic Reduction (BCR) algorithm, which handles equations with nonconstant coefficients. A method was recently proposed to parallelize and vectorize BCR. In this paper, the mapping of BCR on distributed memory architectures is discussed, and its complexity is compared with that of other approaches, including the Alternating-Direction method. A fast parallel solver is also described, based on an explicit formula for the solution, which has parallel computational complexity lower than that of parallel BCR.
A Theoretical Study of Cold Air Damming.
NASA Astrophysics Data System (ADS)
Xu, Qin
1990-12-01
The dynamics of cold air damming are examined analytically with a two-layer steady state model. The upper layer is a warm and saturated cross-mountain (easterly or southeasterly onshore) flow. The lower layer is a cold mountain-parallel (northerly) jet trapped on the windward (eastern) side of the mountain. The interface between the two layers represents a coastal front: a sloping inversion layer coupling the trapped cold dome with the warm onshore flow above through pressure continuity. An analytical expression is obtained for the inviscid upper-layer flow with hydrostatic and moist adiabatic approximations. Blackadar's PBL parameterization of eddy viscosity is used in the lower-layer equations. Solutions for the mountain-parallel jet and its associated secondary transverse circulation are obtained by expanding asymptotically in a small parameter proportional to the square root of the inertial aspect ratio, the ratio between the mountain height and the radius of inertial oscillation. The geometric shape of the sloping interface is solved numerically from a differential-integral equation derived from the pressure continuity condition imposed at the interface. The observed flow structures and force balances of cold air damming events are reproduced qualitatively by the model. In the cold dome the mountain-parallel jet is controlled by the competition between the mountain-parallel pressure gradient and friction: the jet is stronger with smoother surfaces, higher mountains, and faster mountain-normal geostrophic winds. In the mountain-normal direction the vertically averaged force balance in the cold dome is nearly geostrophic and controls the geometric shape of the cold dome. The basic mountain-normal pressure gradient generated in the cold dome by the negative buoyancy distribution tends to flatten the sloping interface and expand the cold dome upstream against the mountain-normal pressure gradient (produced by the upper-layer onshore wind) and the Coriolis force (induced by the lower-layer mountain-parallel jet). It is found that the interface slope increases and the cold dome shrinks as the Froude number and/or the upstream mountain-parallel geostrophic wind increase, or as the Rossby number, upper-layer depth, and/or surface roughness length decrease, and vice versa. The cold dome will either vanish or not be in a steady state if the Froude number is large enough or the roughness length gets too small. The theoretical findings are explained physically based on detailed analyses of the force balance along the inversion interface.
Deployment of Analytics into the Healthcare Safety Net: Lessons Learned
Hartzband, David; Jacobs, Feygele
2016-01-01
Background As payment reforms shift healthcare reimbursement toward value-based payment programs, providers need the capability to work with data of greater complexity, scope and scale. This will in many instances necessitate a change in understanding of the value of data, and the types of data needed for analysis to support operations and clinical practice. It will also require the deployment of different infrastructure and analytic tools. Community health centers, which serve more than 25 million people and together form the nation’s largest single source of primary care for medically underserved communities and populations, are expanding and will need to optimize their capacity to leverage data as new payer and organizational models emerge. Methods To better understand existing capacity and help organizations plan for the strategic and expanded uses of data, a project was initiated that deployed contemporary, Hadoop-based, analytic technology into several multi-site community health centers (CHCs) and a primary care association (PCA) with an affiliated data warehouse supporting health centers across the state. An initial data quality exercise was carried out after deployment, in which a number of analytic queries were executed using both the existing electronic health record (EHR) applications and, in parallel, the analytic stack. Each organization carried out the EHR analysis using the definitions typically applied for routine reporting. The analysis deploying the analytic stack was carried out using the common definitions established for the Uniform Data System (UDS) by the Health Resources and Services Administration. In addition, interviews with health center leadership and staff were completed to understand the context for the findings. Results The analysis uncovered many challenges and inconsistencies with respect to the definition of core terms (patient, encounter, etc.), data formatting, and missing, incorrect and unavailable data. At a population level, apparent underreporting of a number of diagnoses, specifically obesity and heart disease, was also evident in the results of the data quality exercise, for both the EHR-derived and stack analytic results. Conclusion Data awareness, that is, an appreciation of the importance of data integrity, data hygiene, and the potential uses of data, needs to be prioritized and developed by health centers and other healthcare organizations if analytics are to be used in an effective manner to support strategic objectives. While this analysis was conducted exclusively with community health center organizations, its conclusions and recommendations may be more broadly applicable. PMID:28210424
Atkinson, Quentin D; Gray, Russell D
2005-08-01
In The Descent of Man (1871), Darwin observed "curious parallels" between the processes of biological and linguistic evolution. These parallels mean that evolutionary biologists and historical linguists seek answers to similar questions and face similar problems. As a result, the theory and methodology of the two disciplines have evolved in remarkably similar ways. In addition to Darwin's curious parallels of process, there are a number of equally curious parallels and connections between the development of methods in biology and historical linguistics. Here we briefly review the parallels between biological and linguistic evolution and contrast the historical development of phylogenetic methods in the two disciplines. We then look at a number of recent studies that have applied phylogenetic methods to language data and outline some current problems shared by the two fields.
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
SAM Radiochemical Methods Query
Laboratories measuring target radiochemical analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select radiochemical analytes.
Parallel manipulation of individual magnetic microbeads for lab-on-a-chip applications
NASA Astrophysics Data System (ADS)
Peng, Zhengchun
Many scientists and engineers are turning to lab-on-a-chip systems for faster and cheaper analysis of chemical reactions and biomolecular interactions. A common approach that facilitates the handling of reagents and biomolecules in these systems utilizes micro/nano beads as the solid carrier. Physical manipulation, such as assembly, transport, sorting, and tweezing, of beads on a chip represents an essential step for fully utilizing their potential in a wide spectrum of bead-based analyses. Previous work demonstrated manipulation of either an ensemble of beads without individual control, or single beads without the capability for parallel operation. Parallel manipulation of individual beads is required to meet the demand for high-throughput and location-specific analysis. In this work, we introduce two methods for parallel manipulation of individual magnetic microbeads, which can serve as effective lab-on-a-chip platforms and/or efficient analytic tools. The first method employs arrays of soft ferromagnetic patterns fabricated inside a microfluidic channel and subjected to an external magnetic field. We demonstrated that the system can be used to assemble individual beads (1-3 μm) from a flow of suspended beads into a regular array on the chip, hence improving the integrated electrochemical detection of biomolecules bound to the bead surface. By rotating the external field, the assembled microbeads can be remotely controlled with synchronized, high-speed circular motion around individual soft magnets on the chip. We employed this manipulation mode for efficient sample mixing in continuous microflow. Furthermore, we discovered a simple but effective way of transporting the microbeads on the chip by varying the strength of the local bias field within a revolution of the external field. In addition, selective transport of microbeads of different sizes was realized, providing a platform for effective on-chip sample separation and offering the potential for multiplexing capability. The second method integrates magnetic and dielectrophoretic manipulation of the same microbeads. The device combines tapered conducting wires and fingered electrodes to generate the desired magnetic and electric fields, respectively. By externally programming the magnetic attraction and dielectrophoretic repulsion forces, out-of-plane oscillation of the microbeads across the channel height was realized. This manipulation mode can facilitate the interaction between the beads and multiple layers of sample fluid inside the channel. We further demonstrated the tweezing of microbeads in liquid with high spatial resolution, from the submicrometer to the nanometer range, by fine-tuning the net force from magnetic attraction and dielectrophoretic repulsion of the beads. The high-resolution control of the out-of-plane motion of the microbeads led to the invention of massively parallel biomolecular tweezers. We believe the maturation of bead-based microtweezers will revolutionize the state-of-the-art tools currently used for single-cell and single-molecule studies.
Dollfus, Sonia; Lecardeur, Laurent; Morello, Rémy; Etard, Olivier
2016-01-01
Several meta-analyses have assessed the response of patients with schizophrenia with auditory verbal hallucinations (AVH) to treatment with repetitive transcranial magnetic stimulation (rTMS); however, the placebo response has never been explored. Typically observed in a therapeutic trial, the placebo effect may have a major influence on the effectiveness of rTMS. The purpose of this meta-analysis is to evaluate the magnitude of the placebo effect observed in controlled studies of rTMS treatment of AVH, and to determine factors that can impact the magnitude of this placebo effect, such as study design considerations and the type of sham used. The study included twenty-one articles concerning 303 patients treated by sham rTMS. A meta-analytic method was applied to obtain a combined, weighted effect size, Hedges's g. The mean weighted effect size of the placebo effect across these 21 studies was 0.29 (P < .001). Comparison of the parallel and crossover studies revealed distinct results for each study design; placebo has a significant effect size in the 13 parallel studies (g = 0.44, P < 10^-4), but not in the 8 crossover studies (g = 0.06, P = .52). In meta-analysis of the 13 parallel studies, the 45° position coil showed the highest effect size. Our results demonstrate that placebo effect should be considered a major source of bias in the assessment of rTMS efficacy. These results fundamentally inform the design of further controlled studies, particularly with respect to studies of rTMS treatment in psychiatry. PMID:26089351
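The effect size pooled here is the bias-corrected standardized mean difference; a minimal computation (hypothetical sham-arm scores, not data from the meta-analysis) is:

    import numpy as np

    def hedges_g(m1, sd1, n1, m2, sd2, n2):
        # Cohen's d with the small-sample correction factor J (Hedges' g)
        s_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        d = (m1 - m2) / s_pooled
        J = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)
        return J * d

    # e.g. pre- vs post-treatment AVH scores in a hypothetical sham arm
    print(round(hedges_g(20.0, 5.0, 15, 18.5, 5.2, 15), 3))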
Synergia: an accelerator modeling tool with 3-D space charge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amundson, James F.; Spentzouris, P.; /Fermilab
2004-07-01
High precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. We describe Synergia, a high-fidelity parallel beam dynamics simulation package with fully three dimensional space-charge capabilities and a higher order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles. We also perform code benchmarks comparing to semi-analytic results and other codes. Finally, we present initial results on particle tune spread, beam halo creation, and emittance growth in the Fermilab booster accelerator.
Albedo of an irradiated plane-parallel atmosphere with finite optical depth
NASA Astrophysics Data System (ADS)
Fukue, Jun
2018-03-01
We analytically derive the albedo for a plane-parallel atmosphere with finite optical depth, irradiated by an external source, under the local thermodynamic equilibrium approximation. The albedo is expressed as a function of the photon destruction probability ε and optical depth τ, with several parameters such as the dilution factors of the external source. In the particular case of infinite optical depth, the albedo A is expressed as A = [1 + (1 - W_J/W_H)√(3ε)/3] / (1 + √(3ε)), where W_J and W_H are the dilution factors for the mean intensity and Eddington flux, respectively. An example of a model atmosphere is also presented under a gray approximation.
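The closed form is easy to explore numerically; the sketch below evaluates it for example dilution factors (the W_J and W_H values are arbitrary choices, not the paper's) and confirms the conservative-scattering limit A -> 1 as ε -> 0:

    import numpy as np

    def albedo_infinite(eps, WJ=0.5, WH=0.25):
        # Albedo of the semi-infinite atmosphere from the expression above
        return (1 + (1 - WJ / WH) * np.sqrt(3 * eps) / 3) / (1 + np.sqrt(3 * eps))

    for eps in (1e-4, 1e-2, 0.1, 1.0):
        print(f"eps = {eps:6.4f}: A = {albedo_infinite(eps):.3f}")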
GraphReduce: Processing Large-Scale Graphs on Accelerator-Based Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Dipanjan; Song, Shuaiwen; Agarwal, Kapil
2015-11-15
Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device’s internal memory capacity. GraphReduce adopts a combination of edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs, with efficient graph data movement between the host and device.
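The Gather-Apply-Scatter pattern itself is easy to state; the sketch below is a CPU/NumPy stand-in (nothing GraphReduce-specific) that uses it for single-source shortest paths:

    import numpy as np

    def gas_sssp(edges, n, src):
        # edges: array of (u, v, w) rows for directed edges u -> v of weight w
        u = edges[:, 0].astype(int)
        v = edges[:, 1].astype(int)
        w = edges[:, 2]
        dist = np.full(n, np.inf)
        dist[src] = 0.0
        while True:
            gathered = dist[u] + w               # GATHER a value along every edge
            new = dist.copy()
            np.minimum.at(new, v, gathered)      # APPLY a per-vertex reduction
            if np.array_equal(new, dist):        # SCATTER: nothing changed, stop
                return dist
            dist = new

    edges = np.array([[0, 1, 2.0], [0, 2, 5.0], [1, 2, 1.0], [2, 3, 2.0]])
    print(gas_sssp(edges, 4, 0))   # [0. 2. 3. 5.]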
Parallel, adaptive finite element methods for conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.
1994-01-01
We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation
NASA Astrophysics Data System (ADS)
Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab
2015-05-01
3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation, and the resulting system of linear equations is solved iteratively by the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods; the two sets of results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods for different grid sizes, and to parallelize the Jacobi method to reduce that time. In general, the SGS method converges faster than the SJ method, but the natural data parallelism of the Jacobi method can yield a good speedup over SGS. In this study, the feasibility of a parallel Jacobi (PJ) method is assessed against the SGS method. The MATLAB Parallel/Distributed computing environment is used, and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods require prohibitively long processing times to converge; the PJ method reduces the computational time to some extent for large grid sizes.
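The trade-off described above is visible in a few lines of code: a Jacobi sweep reads only the previous iterate, so the whole grid updates as one data-parallel array operation, while a Gauss-Seidel sweep consumes freshly updated neighbors and therefore serializes. A minimal sketch in Python/NumPy (the study uses 3D grids and MATLAB; a 2D five-point stencil is shown here only for brevity):

    import numpy as np

    def jacobi_step(u, f, h):
        """One Jacobi sweep for the 2D Poisson equation -lap(u) = f.
        Every interior point is updated from the previous iterate, so the
        sweep is a single data-parallel (and hence GPU-friendly) update."""
        new = u.copy()
        new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:] +
                                  h * h * f[1:-1, 1:-1])
        return new

    def gauss_seidel_step(u, f, h):
        """One Gauss-Seidel sweep: each update uses already-updated
        neighbors, which converges in fewer sweeps but serializes the loop."""
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] +
                                  u[i, j-1] + u[i, j+1] + h * h * f[i, j])
        return u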
Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density
NASA Astrophysics Data System (ADS)
Hohl, A.; Delmelle, E. M.; Tang, W.
2015-07-01
Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity, and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. To avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers that include adjacent data points lying within the spatial and temporal kernel bandwidths. We then quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors owing to accurate quantification of computational intensity. Our approach is portable to other space-time analytical tests.
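For reference, the quantity being parallelized is a product-kernel density over space and time. A minimal NumPy sketch, assuming Epanechnikov kernels (a common choice for space-time kernel density; the abstract does not commit to a specific kernel):

    import numpy as np

    def stkde(x, y, t, pts, hs, ht):
        """Space-time kernel density at (x, y, t) with separable
        Epanechnikov kernels; pts is an (n, 3) array of event coordinates
        (x, y, t). A sketch of the estimator, not the authors' code."""
        dx = pts[:, 0] - x
        dy = pts[:, 1] - y
        ds2 = (dx * dx + dy * dy) / (hs * hs)       # scaled spatial distance^2
        dt2 = ((pts[:, 2] - t) / ht) ** 2           # scaled temporal distance^2
        ks = np.where(ds2 < 1.0, (2.0 / np.pi) * (1.0 - ds2), 0.0)  # 2D kernel
        kt = np.where(dt2 < 1.0, 0.75 * (1.0 - dt2), 0.0)           # 1D kernel
        return np.sum(ks * kt) / (len(pts) * hs * hs * ht)

The bandwidths hs and ht are also what fix the width of the subdomain buffers: any event within hs and ht of a boundary must be visible to the neighboring subdomain, which is exactly the buffering rule the abstract describes.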
Viscous, resistive MHD stability computed by spectral techniques
NASA Technical Reports Server (NTRS)
Dahlburg, R. B.; Zang, T. A.; Montgomery, D.; Hussaini, M. Y.
1983-01-01
Expansions in Chebyshev polynomials are used to study the linear stability of one-dimensional magnetohydrodynamic (MHD) quasi-equilibria in the presence of finite resistivity and viscosity. The method is modeled on the one used by Orszag for accurate computation of solutions of the Orr-Sommerfeld equation. Two Reynolds-like numbers, involving Alfven speeds, length scales, kinematic viscosity, and magnetic diffusivity, govern the stability boundaries, which are determined by the geometric mean of the two numbers. Marginal stability curves, growth rates versus Reynolds-like numbers, and growth rates versus parallel wave numbers are exhibited. A numerical result which appears to be general is that instability is associated with inflection points in the current profile, though no general analytical proof has emerged. It is possible that nonlinear subcritical three-dimensional instabilities may exist, similar to those in Poiseuille and Couette flow.
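The core numerical object in this class of methods is the Chebyshev differentiation matrix on Gauss-Lobatto points. Below is a NumPy port of the standard construction (Trefethen's cheb), applied to a simple diffusion eigenproblem as a stand-in for the full viscous, resistive MHD operator; the substitution of that toy operator is ours, not the paper's.

    import numpy as np

    def cheb(N):
        """Chebyshev differentiation matrix D and Gauss-Lobatto points x
        (Trefethen, Spectral Methods in MATLAB, ported to NumPy)."""
        if N == 0:
            return np.zeros((1, 1)), np.array([1.0])
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.ones(N + 1); c[0] = c[-1] = 2.0
        c *= (-1.0) ** np.arange(N + 1)
        X = np.tile(x, (N + 1, 1)).T
        D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))
        return D, x

    # Growth rates of a diffusion operator: solve u'' = lambda * u with
    # u(+-1) = 0; the eigenvalues should approach -(k*pi/2)^2.
    D, x = cheb(32)
    D2 = (D @ D)[1:-1, 1:-1]           # impose Dirichlet conditions
    lam = np.sort(np.linalg.eigvals(D2).real)[::-1]
    print(lam[:4] * (2.0 / np.pi) ** 2)  # approximately -1, -4, -9, -16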
Signal amplification by rolling circle amplification on DNA microarrays
Nallur, Girish; Luo, Chenghua; Fang, Linhua; Cooley, Stephanie; Dave, Varshal; Lambert, Jeremy; Kukanskis, Kari; Kingsmore, Stephen; Lasken, Roger; Schweitzer, Barry
2001-01-01
While microarrays hold considerable promise in large-scale biology on account of their massively parallel analytical nature, there is a need for compatible signal amplification procedures to increase sensitivity without loss of multiplexing. Rolling circle amplification (RCA) is a molecular amplification method with the unique property of product localization. This report describes the application of RCA signal amplification for multiplexed, direct detection and quantitation of nucleic acid targets on planar glass and gel-coated microarrays. As few as 150 molecules bound to the surface of microarrays can be detected using RCA. Because of the linear kinetics of RCA, nucleic acid target molecules may be measured with a dynamic range of four orders of magnitude. Consequently, RCA is a promising technology for the direct measurement of nucleic acids on microarrays without the need for a potentially biasing preamplification step. PMID:11726701
NASA Astrophysics Data System (ADS)
Ahmed, Naveed; Adnan; Khan, Umar; Tauseef Mohyud-Din, Syed; Waheed, Asif
2017-07-01
This paper explores the flow of water saturated with copper nanoparticles of different shapes between parallel Riga plates. The plates are placed horizontally in the coordinate system. The influence of linear thermal radiation is also taken into account. The governing flow equations are transformed into nondimensional form by a set of similarity transformations. The resulting system is solved analytically (by the variation-of-parameters method) and numerically (by a Runge-Kutta scheme). Under certain conditions, a special case of the model is also examined. Furthermore, the influences of the physical quantities on the velocity and thermal fields are discussed graphically over the domain of interest. The quantities of engineering and practical interest (skin friction coefficient and local rate of heat transfer) are also explored graphically.
Extinction-sedimentation inversion technique for measuring size distribution of artificial fogs
NASA Technical Reports Server (NTRS)
Deepak, A.; Vaughan, O. H.
1978-01-01
In measuring the size distribution of artificial fog particles, it is important that the natural state of the particles not be disturbed by the measuring device, as occurs when samples are drawn through tubes. This paper describes a method for carrying out such a measurement by allowing the fog particles to settle in quiet air inside an enclosure traversed by a parallel beam of light, which measures the optical depth as a function of time. An analytic function fitted to the optical-depth decay curve can be inverted directly to yield the size distribution. Results of one such experiment performed on artificial fogs are shown as an example. The forward-scattering corrections to the measured extinction coefficient are also discussed, with the aim of optimizing the experimental design so that the error due to forward scattering is minimized.
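For orientation, the inversion rests on Stokes settling, which maps elapsed time to the largest droplet radius still crossing the beam. The following is a hedged sketch in our own notation (uniform initial suspension, Stokes drag, beam a depth $h$ below the top of the fog; the paper's exact formulation may differ):

    v(r) = \frac{2(\rho_p - \rho_a)\, g\, r^2}{9\mu}, \qquad
    r(t) = \sqrt{\frac{9\mu h}{2(\rho_p - \rho_a)\, g\, t}},

    \tau(t) = \ell \int_0^{r(t)} \sigma_{\mathrm{ext}}(r)\, n(r)\, dr
    \quad\Longrightarrow\quad
    n\bigl(r(t)\bigr) = \frac{1}{\ell\, \sigma_{\mathrm{ext}}(r(t))}\,
    \frac{d\tau/dt}{dr/dt}.

Because $r(t)$ decreases with time, both derivatives are negative and $n$ remains positive; fitting an analytic form to $\tau(t)$ lets both derivatives be taken in closed form, which is the direct inversion the abstract describes.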