Sample records for solving large systems

  1. Finite difference and Runge-Kutta methods for solving vibration problems

    NASA Astrophysics Data System (ADS)

    Lintang Renganis Radityani, Scolastika; Mungkasi, Sudi

    2017-11-01

    The vibration of a storey building can be modelled as a system of second-order ordinary differential equations. If the number of floors is large, the result is a large-scale system of second-order ordinary differential equations. Such a large-scale system is difficult to solve, and even when it can be solved, the solution may not be accurate. Therefore, in this paper, we seek accurate methods for solving vibration problems. We compare the performance of numerical finite difference and Runge-Kutta methods for solving large-scale systems of second-order ordinary differential equations. The finite difference methods include the forward and central differences. The Runge-Kutta methods include the Euler and Heun methods. Our results show that the central finite difference and Heun methods produce more accurate solutions than the forward finite difference and Euler methods.
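
    As a concrete illustration, the following minimal sketch (not the authors' code) contrasts the Euler and Heun updates on an assumed single-storey mass-spring-damper model; the parameter values and step size are illustrative only.

        # Euler (RK1) vs. Heun (RK2) on x'' = -(c/m) x' - (k/m) x,
        # rewritten as a first-order system y' = f(y) with y = [x, v].
        import numpy as np

        m, c, k = 1.0, 0.1, 4.0              # assumed mass, damping, stiffness

        def f(y):
            x, v = y
            return np.array([v, -(c / m) * v - (k / m) * x])

        def euler_step(y, h):
            return y + h * f(y)

        def heun_step(y, h):
            # Predictor-corrector: average the slopes at both ends of the step.
            y_pred = y + h * f(y)
            return y + 0.5 * h * (f(y) + f(y_pred))

        h, steps = 0.01, 1000
        y_e = y_h = np.array([1.0, 0.0])     # unit displacement, at rest
        for _ in range(steps):
            y_e, y_h = euler_step(y_e, h), heun_step(y_h, h)
        print(y_e, y_h)  # Heun tracks the damped oscillation more closely

    An n-storey building simply enlarges y to 2n components with a banded stiffness matrix; the update formulas are unchanged.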

  2. Efficient ICCG on a shared memory multiprocessor

    NASA Technical Reports Server (NTRS)

    Hammond, Steven W.; Schreiber, Robert

    1989-01-01

    Different approaches are discussed for exploiting parallelism in the ICCG (Incomplete Cholesky Conjugate Gradient) method for solving large sparse symmetric positive definite systems of equations on a shared memory parallel computer. Techniques for efficiently solving triangular systems and computing sparse matrix-vector products are explored. Three methods for scheduling the tasks in solving triangular systems are implemented on the Sequent Balance 21000. Sample problems that are representative of a large class of problems solved using iterative methods are used. We show that a static analysis to determine data dependences in the triangular solve can greatly improve its parallel efficiency. We also show that ignoring symmetry and storing the whole matrix can reduce solution time substantially.
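
    The static dependence analysis mentioned above is often realized as level scheduling; the sketch below (our illustration, not the paper's code) groups the rows of a sparse lower-triangular factor into levels whose members can be solved concurrently.

        # Level scheduling: row i can be computed once all rows j < i with
        # L[i, j] != 0 are done, so rows sharing a level are independent.
        import numpy as np
        from scipy.sparse import csr_matrix

        def levels(L):
            n = L.shape[0]
            lev = np.zeros(n, dtype=int)
            for i in range(n):
                deps = [j for j in L.indices[L.indptr[i]:L.indptr[i + 1]] if j < i]
                lev[i] = 1 + max((lev[j] for j in deps), default=0)
            return lev

        L = csr_matrix(np.array([[2.0, 0, 0, 0],
                                 [1.0, 2, 0, 0],
                                 [0.0, 0, 2, 0],
                                 [0.0, 1, 1, 2]]))
        print(levels(L))  # [1 2 1 3]: rows 0 and 2 can be solved in parallel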

  3. The application of artificial intelligence techniques to large distributed networks

    NASA Technical Reports Server (NTRS)

    Dubyah, R.; Smith, T. R.; Star, J. L.

    1985-01-01

    Data accessibility and transfer of information, including the land resources information system pilot, are structured as large computer information networks. These pilot efforts aim to reduce the difficulty of finding and using data, to reduce processing costs, and to minimize incompatibility between data sources. Artificial Intelligence (AI) techniques have been suggested to achieve these goals. The applicability of certain AI techniques is explored in the context of distributed problem solving systems and the pilot land data system (PLDS). The topics discussed include: PLDS and its data processing requirements, expert systems and PLDS, distributed problem solving systems, AI problem solving paradigms, query processing, and distributed data bases.

  4. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging, because it is a best unbiased estimator. Unfortunately, this interpolant is very expensive to compute exactly: for n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n, which is infeasible for large n. In practice, kriging is solved approximately by local approaches that consider only a relatively small number of points lying close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky; it is usually settled by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero-order continuity along its borders: at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering; the underlying difficulty is that the covariance functions used in kriging have global support. Our implementations combine, utilize, and enhance a number of different approaches that have been introduced in the literature for solving large linear systems for interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical since they require O(n^3) time and O(n^2) storage. As Billings et al. suggested, we use an iterative approach: in particular, we use the SYMMLQ method for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue that needs to be overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite. The goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix, so Cholesky factorization could be used to solve their linear systems. They implemented an efficient sparse Cholesky decomposition method, and they showed that if these tapers are used for a limited class of covariance models, the solution of the system converges to the solution of the original system. The matrix A in the ordinary kriging system, while symmetric, is not positive definite, so their approach is not applicable to ordinary kriging. Therefore, we use tapering only to obtain a sparse linear system, and then use SYMMLQ to solve the ordinary kriging system.
    We show that solving large kriging systems becomes practical via tapering and iterative methods, and results in lower estimation errors compared to traditional local approaches, as well as significant memory savings compared to the original global system. We also developed a more efficient variant of the sparse SYMMLQ method for large ordinary kriging systems, which adaptively finds the correct local neighborhood for each query point in the interpolation process.
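
    The following sketch illustrates the tapered-system idea on a toy problem. SciPy does not ship SYMMLQ, so MINRES, a closely related Krylov method for symmetric indefinite systems, stands in for it; the Gaussian covariance model, the simple compactly supported taper, and all sizes are our assumptions, not the paper's choices.

        # Tapered ordinary kriging: sparsify the covariance, then solve the
        # symmetric indefinite saddle-point system iteratively.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import minres
        from scipy.spatial.distance import cdist

        rng = np.random.default_rng(0)
        pts = rng.random((500, 2))
        d = cdist(pts, pts)
        cov = np.exp(-(d / 0.2) ** 2)               # assumed covariance model
        taper = np.maximum(1 - d / 0.1, 0) ** 2     # compact support -> sparsity
        C = sp.csr_matrix(cov * taper)

        # Ordinary kriging system: [[C, 1], [1', 0]] [w; mu] = [c0; 1].
        n = C.shape[0]
        ones = np.ones((n, 1))
        A = sp.bmat([[C, ones], [ones.T, None]], format='csr')
        dq = cdist(pts, np.array([[0.5, 0.5]]))     # distances to a query point
        c0 = np.exp(-(dq / 0.2) ** 2) * np.maximum(1 - dq / 0.1, 0) ** 2
        b = np.concatenate([c0.ravel(), [1.0]])

        w, info = minres(A, b)
        print(info, w[:n].sum())  # weights sum to ~1 (unbiasedness constraint)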

  5. Algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations with the use of parallel computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moryakov, A. V., E-mail: sailor@orc.ru

    2016-12-15

    An algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations is presented. The algorithm for systems of first-order differential equations is implemented in the EDELWEISS code, with the possibility of parallel computations on supercomputers employing the MPI (Message Passing Interface) standard for data exchange between parallel processes. The solution is represented by a series of orthogonal polynomials on the interval [0, 1]. The algorithm is notable for its simplicity and for the possibility of solving nonlinear problems by correcting the operator according to the solution obtained in the previous iteration.
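
    As a small illustration of representing the solution by an orthogonal polynomial series on [0, 1] (a collocation sketch of ours, not the EDELWEISS algorithm itself), the snippet below expands the solution of y' = Ay in Legendre polynomials and recovers exp(A) y0.

        # Collocation with a Legendre series on [0, 1] for y' = A y, y(0) = y0.
        import numpy as np
        from numpy.polynomial.legendre import legvander, legval, legder
        from scipy.linalg import expm

        d, deg = 3, 12                          # system size, series degree
        rng = np.random.default_rng(1)
        A, y0 = rng.standard_normal((d, d)), np.ones(d)

        def basis(ts):
            s = 2 * ts - 1                      # map [0, 1] onto [-1, 1]
            V = legvander(s, deg)
            D = np.zeros_like(V)
            for j in range(deg + 1):
                e = np.zeros(deg + 1); e[j] = 1.0
                D[:, j] = 2 * legval(s, legder(e))   # chain rule: d/dt = 2 d/ds
            return V, D

        tc = np.linspace(0, 1, deg + 1)[1:]     # collocation nodes in (0, 1]
        V, D = basis(tc)
        V0, _ = basis(np.array([0.0]))

        # Enforce y'(t_k) = A y(t_k) at the nodes plus the initial condition.
        M = np.vstack([np.kron(D, np.eye(d)) - np.kron(V, A),
                       np.kron(V0, np.eye(d))])
        c = np.linalg.solve(M, np.concatenate([np.zeros(deg * d), y0]))

        V1, _ = basis(np.array([1.0]))
        y1 = np.kron(V1, np.eye(d)) @ c
        print(np.abs(y1 - expm(A) @ y0).max())  # small: series matches exp(A) y0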

  6. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high-technology and engineering problem solving has given rise to an emerging concept: reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long-range perspective. Long-range planning has great potential to improve productivity through a systematic and organized approach; efficiency and cost effectiveness are thus the driving forces in organizing engineering problems. Aspects of systems engineering that provide an understanding of the management of large-scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making) are not emphasized, but they are considered.

  7. Adapting iterative algorithms for solving large sparse linear systems for efficient use on the CDC CYBER 205

    NASA Technical Reports Server (NTRS)

    Kincaid, D. R.; Young, D. M.

    1984-01-01

    Adapting and designing mathematical software to achieve optimum performance on the CYBER 205 is discussed. Comments and observations are made in light of recent work done on modifying the ITPACK software package and on writing new software for vector supercomputers. The goal was to develop very efficient vector algorithms and software for solving large sparse linear systems using iterative methods.

  8. Parallel/distributed direct method for solving linear systems

    NASA Technical Reports Server (NTRS)

    Lin, Avi

    1990-01-01

    A new family of parallel schemes for directly solving linear systems is presented and analyzed. It is shown that these schemes exhibit near-optimal performance and enjoy several important features: (1) for large enough linear systems, the design of the appropriate parallel algorithm is insensitive to the number of processors, as its performance grows monotonically with them; (2) it is especially good for large matrices, with dimensions large relative to the number of processors in the system; (3) it can be used in both distributed parallel computing environments and tightly coupled parallel computing systems; and (4) this set of algorithms can be mapped onto any parallel architecture without any major programming difficulties or algorithmic changes.

  9. Combined fast multipole-QR compression technique for solving electrically small to large structures for broadband applications

    NASA Technical Reports Server (NTRS)

    Jandhyala, Vikram (Inventor); Chowdhury, Indranil (Inventor)

    2011-01-01

    An approach that efficiently solves for a desired parameter of a system or device that can include both electrically large fast multipole method (FMM) elements and electrically small QR elements. The system or device is set up as an oct-tree structure that can include regions of both the FMM type and the QR type. An iterative solver is then used to determine a first matrix vector product for any electrically large elements, and a second matrix vector product for any electrically small elements included in the structure. The matrix vector products for the electrically large and electrically small elements are combined, and a net delta for the combination is determined. The iteration continues until the net delta is within predefined limits, and the matrix vector products last obtained are used to solve for the desired parameter.

  10. Using the Multiplicative Schwarz Alternating Algorithm (MSAA) for Solving the Large Linear System of Equations Related to Global Gravity Field Recovery up to Degree and Order 120

    NASA Astrophysics Data System (ADS)

    Safari, A.; Sharifi, M. A.; Amjadiparvar, B.

    2010-05-01

    The GRACE mission has substantiated the low-low satellite-to-satellite tracking (LL-SST) concept. The LL-SST configuration can be combined with the previously realized high-low SST concept of the CHAMP mission to provide much higher accuracy. The line-of-sight (LOS) acceleration difference between the GRACE satellite pair is the most commonly used observable for mapping the global gravity field of the Earth in terms of spherical harmonic coefficients. In this paper, mathematical formulae for LOS acceleration difference observations are derived and the corresponding linear system of equations is set up for spherical harmonics up to degree and order 120, giving a total of 14641 unknowns. Such a linear system can be solved with iterative or direct solvers; however, the runtime of direct methods, or of iterative solvers without a suitable preconditioner, increases tremendously. This is why a more sophisticated method is needed to solve linear systems with a large number of unknowns. The multiplicative variant of the Schwarz alternating algorithm is a domain decomposition method that splits the normal matrix of the system into several smaller overlapping submatrices. In each iteration step, it successively solves the linear systems with the matrices obtained from the splitting, which reduces both runtime and memory requirements drastically. In this paper we propose the Multiplicative Schwarz Alternating Algorithm (MSAA) for solving the large linear system of gravity field recovery. The proposed algorithm has been tested on the International Association of Geodesy (IAG)-simulated data of the GRACE mission. The achieved results indicate the validity and efficiency of the proposed algorithm in solving the linear system of equations from both accuracy and runtime points of view. Keywords: Gravity field recovery, Multiplicative Schwarz Alternating Algorithm, Low-Low Satellite-to-Satellite Tracking
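
    The sketch below illustrates the multiplicative Schwarz alternating iteration on a stand-in dense SPD "normal matrix"; the block layout, overlap, and sizes are illustrative assumptions, not the paper's configuration.

        # Multiplicative Schwarz: sweep over overlapping index blocks, solving
        # each local subsystem in turn against the current global residual.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 200
        M = rng.standard_normal((n, n))
        A = M @ M.T + n * np.eye(n)          # stand-in SPD normal matrix
        b = rng.standard_normal(n)

        overlap = 10
        blocks = [np.arange(max(0, s - overlap), min(n, s + 50 + overlap))
                  for s in range(0, n, 50)]

        x = np.zeros(n)
        for sweep in range(30):
            for idx in blocks:               # successive local solves
                r = b - A @ x
                x[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
        print(np.linalg.norm(b - A @ x))     # residual shrinks every sweep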

  11. Scheduling language and algorithm development study. Volume 1, phase 2: Design considerations for a scheduling and resource allocation system

    NASA Technical Reports Server (NTRS)

    Morrell, R. A.; Odoherty, R. J.; Ramsey, H. R.; Reynolds, C. C.; Willoughby, J. K.; Working, R. D.

    1975-01-01

    Data and analyses related to a variety of algorithms for solving typical large-scale scheduling and resource allocation problems are presented. The capabilities and deficiencies of various alternative problem solving strategies are discussed from the viewpoint of computer system design.

  12. Recursive partitioned inversion of large (1500 x 1500) symmetric matrices

    NASA Technical Reports Server (NTRS)

    Putney, B. H.; Brownd, J. E.; Gomez, R. A.

    1976-01-01

    A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
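
    The recursion can be sketched as follows (our illustration of partitioned inversion via the Schur complement, not the SOLVE program itself); each level handles only blocks of roughly half the current size, which is what keeps the core requirement small.

        # Recursive partitioned inversion of a symmetric positive definite
        # matrix: invert A11, form the Schur complement S = A22 - A12' A11i A12,
        # and assemble the block inverse.
        import numpy as np

        def part_inv(A):
            n = A.shape[0]
            if n <= 64:                          # small blocks: invert directly
                return np.linalg.inv(A)
            k = n // 2
            A11, A12, A22 = A[:k, :k], A[:k, k:], A[k:, k:]
            A11i = part_inv(A11)
            Si = part_inv(A22 - A12.T @ A11i @ A12)
            B12 = -A11i @ A12 @ Si
            B11 = A11i + A11i @ A12 @ Si @ A12.T @ A11i
            return np.block([[B11, B12], [B12.T, Si]])

        rng = np.random.default_rng(3)
        M = rng.standard_normal((300, 300))
        A = M @ M.T + 300 * np.eye(300)          # symmetric positive definite
        print(np.linalg.norm(part_inv(A) @ A - np.eye(300)))  # ~ round-off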

  13. An efficient strongly coupled immersed boundary method for deforming bodies

    NASA Astrophysics Data System (ADS)

    Goza, Andres; Colonius, Tim

    2016-11-01

    Immersed boundary methods treat the fluid and the immersed solid on separate domains. As a result, a nonlinear interface constraint must be satisfied when these methods are applied to flow-structure interaction problems, which typically results in a large nonlinear system of equations that is difficult to solve efficiently. Often, this system is solved with a block Gauss-Seidel procedure, which is easy to implement but can require many iterations to converge for small solid-to-fluid mass ratios. Alternatively, a Newton-Raphson procedure can be used to solve the nonlinear system; this typically converges in a small number of iterations for arbitrary mass ratios, but involves the use of large Jacobian matrices. We present an immersed boundary formulation that, like the Newton-Raphson approach, uses a linearization of the system to perform iterations, and therefore inherits the same favorable convergence behavior. However, we avoid large Jacobian matrices by using a block LU factorization of the linearized system. We derive our method for general deforming surfaces and perform verification on 2D test problems of flow past beams, involving large-amplitude flapping and a wide range of mass ratios. This work was partially supported by the Jet Propulsion Laboratory and the Air Force Office of Scientific Research.

  14. Medical education and cognitive continuum theory: an alternative perspective on medical problem solving and clinical reasoning.

    PubMed

    Custers, Eugène J F M

    2013-08-01

    Recently, human reasoning, problem solving, and decision making have been viewed as products of two separate systems: "System 1," the unconscious, intuitive, or nonanalytic system, and "System 2," the conscious, analytic, or reflective system. This view has penetrated the medical education literature, yet the idea of two independent dichotomous cognitive systems is not entirely without problems. This article outlines the difficulties of this "two-system view" and presents an alternative, developed by K.R. Hammond and colleagues, called cognitive continuum theory (CCT). CCT rests on three key assumptions. First, human reasoning, problem solving, and decision making can be arranged on a cognitive continuum, with pure intuition at one end, pure analysis at the other, and a large middle ground called "quasirationality." Second, the nature and requirements of the cognitive task, as perceived by the person performing the task, determine to a large extent whether a task will be approached more intuitively or more analytically. Third, for optimal task performance, this approach needs to match the cognitive properties and requirements of the task. Finally, the author makes a case that CCT is better able than a two-system view to describe medical problem solving and clinical reasoning, and that it provides clear clues for how to organize training in clinical reasoning.

  15. A Game Based e-Learning System to Teach Artificial Intelligence in the Computer Sciences Degree

    ERIC Educational Resources Information Center

    de Castro-Santos, Amable; Fajardo, Waldo; Molina-Solana, Miguel

    2017-01-01

    Our students taking the Artificial Intelligence and Knowledge Engineering courses often encounter a large number of problems to solve that are not directly related to the subject to be learned. To solve this problem, we have developed a game-based e-learning system. The selected game, which has been implemented as an e-learning system, allows to…

  16. Lexical Problems in Large Distributed Information Systems.

    ERIC Educational Resources Information Center

    Berkovich, Simon Ya; Shneiderman, Ben

    1980-01-01

    Suggests a unified concept of a lexical subsystem as part of an information system to deal with lexical problems in local and network environments. The linguistic and control functions of the lexical subsystems in solving problems for large computer systems are described, and references are included. (Author/BK)

  17. Algorithms for solving large sparse systems of simultaneous linear equations on vector processors

    NASA Technical Reports Server (NTRS)

    David, R. E.

    1984-01-01

    Very efficient algorithms for solving large sparse systems of simultaneous linear equations have been developed for serial processing computers. These involve a reordering of matrix rows and columns in order to obtain a near triangular pattern of nonzero elements. Then an LU factorization is developed to represent the matrix inverse in terms of a sequence of elementary Gaussian eliminations, or pivots. In this paper it is shown how these algorithms are adapted for efficient implementation on vector processors. Results obtained on the CYBER 200 Model 205 are presented for a series of large test problems which show the comparative advantages of the triangularization and vector processing algorithms.
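
    In the same spirit, modern sparse LU libraries expose the reorder-then-factor idea directly; here is a brief sketch using SciPy (our illustration, unrelated to the paper's CYBER 200 codes).

        # Fill-reducing column reordering (COLAMD) before sparse LU.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import splu

        n = 1000
        A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
        A = (A + sp.random(n, n, density=0.001, format='csc', random_state=0)).tocsc()
        b = np.ones(n)

        lu = splu(A, permc_spec='COLAMD')   # reordering chosen before elimination
        x = lu.solve(b)
        print(np.linalg.norm(A @ x - b))    # small residual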

  18. The selection of the optimal baseline in the front-view monocular vision system

    NASA Astrophysics Data System (ADS)

    Xiong, Bincheng; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen

    2018-03-01

    In the front-view monocular vision system, the accuracy of solving the depth field depends on the length of the inter-frame baseline and the accuracy of the image matching result. In general, a longer baseline leads to a higher precision of the solved depth field. At the same time, however, the difference between the inter-frame images increases, which makes image matching more difficult, decreases matching accuracy, and may ultimately cause the depth-field solution to fail. A usual practice is to use tracking-and-matching methods to improve the matching accuracy between images, but these are prone to matching drift between images with large intervals, producing cumulative matching error, so the accuracy of the solved depth field remains low. In this paper, we propose a depth field fusion algorithm based on the optimal baseline length. First, we analyze the quantitative relationship between the accuracy of the depth field calculation and the inter-frame baseline length, and find the optimal baseline length through extensive experiments; second, we introduce the inverse depth filtering technique of sparse SLAM and solve the depth field under the constraint of the optimal baseline length. A large number of experiments show that our algorithm can effectively eliminate the mismatches caused by image changes and can still solve the depth field correctly in large-baseline scenes. Our algorithm is superior to the traditional SFM algorithm in time and space complexity, and the experimentally obtained optimal baseline provides guidance for depth field calculation in front-view monocular vision.

  19. The mathematical statement for the solving of the problem of N-version software system design

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.

    2015-10-01

    N-version programming, as a methodology for designing fault-tolerant software systems, allows such design tasks to be solved successfully. The approach is effective because the system is constructed out of several parallel executed versions of some software module, written to meet the same specification but by different programmers. Developing an optimal structure for an N-version software system is a very complex optimization problem, which makes deterministic optimization methods inappropriate; exploiting heuristic strategies is more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems of large dimensionality.

  20. An approach to solving large reliability models

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Veeraraghavan, Malathi; Dugan, Joanne Bechta; Trivedi, Kishor S.

    1988-01-01

    This paper describes a unified approach to the problem of solving large realistic reliability models. The methodology integrates behavioral decomposition, state truncation, and efficient sparse matrix-based numerical methods. The use of fault trees, together with ancillary information regarding dependencies, to automatically generate the underlying Markov model state space is proposed. The effectiveness of this approach is illustrated by modeling a state-of-the-art flight control system and a multiprocessor system; nonexponential distributions for times to failure of components are assumed in the latter example. The modeling tool used for most of this analysis is HARP (the Hybrid Automated Reliability Predictor).

  1. Integrated fringe projection 3D scanning system for large-scale metrology based on laser tracker

    NASA Astrophysics Data System (ADS)

    Du, Hui; Chen, Xiaobo; Zhou, Dan; Guo, Gen; Xi, Juntong

    2017-10-01

    Large-scale components are widespread in advanced manufacturing, and 3D profilometry plays a pivotal role in their quality control. This paper proposes a flexible, robust large-scale 3D scanning system that integrates a robot with a binocular structured-light scanner and a laser tracker. The measurement principle and construction of the integrated system are introduced, and a mathematical model is established for global data fusion. Subsequently, a flexible and robust method and mechanism are introduced for establishing the end coordinate system; based on this method, a virtual robot noumenon is constructed for hand-eye calibration, and the transformation matrix between the end coordinate system and the world coordinate system is solved. A validation experiment was implemented to verify the proposed algorithms: first, the hand-eye transformation matrix was solved; then a car-body rear was measured 16 times to verify the global data fusion algorithm, and its 3D shape was reconstructed successfully.

  2. A numerical scheme to solve unstable boundary value problems

    NASA Technical Reports Server (NTRS)

    Kalnay Derivas, E.

    1975-01-01

    A new iterative scheme for solving boundary value problems is presented. It consists of the introduction of an artificial time dependence into a modified version of the system of equations. Explicit forward integrations in time are then followed by explicit integrations backwards in time. The method converges under much more general conditions than schemes based on forward time integrations (false transient schemes). In particular, it can attain a steady-state solution of an elliptic system of equations even if the solution is unstable, in which case other iterative schemes fail to converge. The simplicity of its use makes it attractive for solving large systems of nonlinear equations.

  3. Reasoning by analogy as an aid to heuristic theorem proving.

    NASA Technical Reports Server (NTRS)

    Kling, R. E.

    1972-01-01

    When heuristic problem-solving programs are faced with large data bases that contain numbers of facts far in excess of those needed to solve any particular problem, their performance rapidly deteriorates. In this paper, the correspondence between a new unsolved problem and a previously solved analogous problem is computed and invoked to tailor large data bases to manageable sizes. This paper outlines the design of an algorithm for generating and exploiting analogies between theorems posed to a resolution-logic system. These algorithms are believed to be the first computationally feasible development of reasoning by analogy to be applied to heuristic theorem proving.

  4. Interaction Network Estimation: Predicting Problem-Solving Diversity in Interactive Environments

    ERIC Educational Resources Information Center

    Eagle, Michael; Hicks, Drew; Barnes, Tiffany

    2015-01-01

    Intelligent tutoring systems and computer aided learning environments aimed at developing problem solving produce large amounts of transactional data which make it a challenge for both researchers and educators to understand how students work within the environment. Researchers have modeled student-tutor interactions using complex networks in…

  5. A boundary value approach for solving three-dimensional elliptic and hyperbolic partial differential equations.

    PubMed

    Biala, T A; Jator, S N

    2015-01-01

    In this article, the boundary value method is applied to solve three dimensional elliptic and hyperbolic partial differential equations. The partial derivatives with respect to two of the spatial variables (y, z) are discretized using finite difference approximations to obtain a large system of ordinary differential equations (ODEs) in the third spatial variable (x). Using interpolation and collocation techniques, a continuous scheme is developed and used to obtain discrete methods which are applied via the Block unification approach to obtain approximations to the resulting large system of ODEs. Several test problems are investigated to elucidate the solution process.

  6. Technique for Solving Electrically Small to Large Structures for Broadband Applications

    NASA Technical Reports Server (NTRS)

    Jandhyala, Vikram; Chowdhury, Indranil

    2011-01-01

    Fast iterative algorithms are often used for solving Method of Moments (MoM) systems having a large number of unknowns, to determine current distribution and other parameters. The most commonly used fast methods include the fast multipole method (FMM), the precorrected fast Fourier transform (PFFT), and low-rank QR compression methods. These methods reduce the O(N^2) memory and time requirements to O(N log N) by compressing the dense MoM system so as to exploit the physics of Green's function interactions. FFT-based techniques for solving such problems are efficient for space-filling and uniform structures, but their performance degrades substantially for non-uniformly distributed structures due to the inherent need to employ a uniform global grid. FMM or QR techniques are better suited than FFT techniques; however, neither the FMM nor the QR technique can be used at all frequencies. This method has been developed to efficiently solve for a desired parameter of a system or device that can include both electrically large FMM elements and electrically small QR elements. The system or device is set up as an oct-tree structure that can include regions of both the FMM type and the QR type. The system is enclosed in a cube at the 0th level, which is split into eight child cubes forming the 1st level; the splitting process is repeated recursively at successive levels until a desired number of levels is created. For each cube thus formed, neighbor lists and interaction lists are maintained. An iterative solver is then used to determine a first matrix vector product for any electrically large elements as well as a second matrix vector product for any electrically small elements included in the structure. The matrix vector products for the electrically large and small elements are combined, and a net delta for the combination is determined. The iteration continues until the net delta is within the predefined limits, and the matrix vector products last obtained are used to solve for the desired parameter. The solution for the desired parameter is then presented to a user in a tangible form, for example, on a display.
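
    The oct-tree bookkeeping described above can be sketched compactly; in this illustration (ours, not the patented implementation) cubes are indexed by (level, ix, iy, iz), neighbors share a face, edge, or corner at the same level, and the interaction list holds children of the parent's neighbors that are not themselves neighbors.

        from itertools import product

        def children(cube):
            lvl, x, y, z = cube
            return [(lvl + 1, 2 * x + dx, 2 * y + dy, 2 * z + dz)
                    for dx, dy, dz in product((0, 1), repeat=3)]

        def neighbors(cube):
            lvl, x, y, z = cube
            return [(lvl, x + dx, y + dy, z + dz)
                    for dx, dy, dz in product((-1, 0, 1), repeat=3)
                    if (dx, dy, dz) != (0, 0, 0)
                    and all(0 <= c + d < 2 ** lvl
                            for c, d in zip((x, y, z), (dx, dy, dz)))]

        def interaction_list(cube):
            lvl, x, y, z = cube
            near = set(neighbors(cube))
            return [c for p in neighbors((lvl - 1, x // 2, y // 2, z // 2))
                    for c in children(p) if c not in near]

        root = (0, 0, 0, 0)
        print(len(neighbors(children(root)[0])))      # 7 sibling cubes at level 1
        print(len(interaction_list((2, 0, 0, 0))))    # well-separated level-2 cubes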

  7. An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation

    DOE PAGES

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-02-13

    The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium's fine-scale heterogeneity and the source's frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.

  9. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, because the observed data sets are often large and the model parameters numerous, conventional methods for inverse modeling can be computationally expensive for many practical problems. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply the method to invert for a random transmissivity field; our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Our new inverse modeling method is therefore a powerful tool for large-scale applications.
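
    The recycling idea can be sketched in a few lines: build one Krylov basis for the Gauss-Newton matrix J'J and the gradient, then solve only the small projected system for each damping parameter. This is our illustration of the concept, not the MADS/Julia implementation, and all sizes are arbitrary.

        # One Arnoldi basis, reused for every Levenberg-Marquardt damping value.
        import numpy as np

        def arnoldi(B, g, m):
            Q = np.zeros((g.size, m + 1))
            H = np.zeros((m + 1, m))
            Q[:, 0] = g / np.linalg.norm(g)
            for j in range(m):
                w = B @ Q[:, j]
                for i in range(j + 1):
                    H[i, j] = Q[:, i] @ w
                    w -= H[i, j] * Q[:, i]
                H[j + 1, j] = np.linalg.norm(w)
                Q[:, j + 1] = w / H[j + 1, j]
            return Q[:, :m], H[:m, :m]

        rng = np.random.default_rng(4)
        J = rng.standard_normal((2000, 400))        # stand-in Jacobian
        r = rng.standard_normal(2000)               # stand-in residual
        B, g = J.T @ J, J.T @ r

        m = 50
        Q, H = arnoldi(B, g, m)                     # built once ...
        e1 = np.eye(m)[:, 0]
        for lam in (1e-2, 1e-1, 1.0, 10.0):         # ... reused for each lambda
            y = np.linalg.solve(H + lam * np.eye(m), -np.linalg.norm(g) * e1)
            step = Q @ y                            # approximate LM step
            print(lam, np.linalg.norm((B + lam * np.eye(400)) @ step + g))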

  10. Summer Proceedings 2016: The Center for Computing Research at Sandia National Laboratories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carleton, James Brian; Parks, Michael L.

    Solving sparse linear systems from the discretization of elliptic partial differential equations (PDEs) is an important building block in many engineering applications. Sparse direct solvers can solve general linear systems, but are usually slower and use much more memory than effective iterative solvers. To overcome these two disadvantages, a hierarchical solver (LoRaSp) based on H2-matrices was introduced in [22]. Here, we have developed a parallel version of the algorithm in LoRaSp to solve large sparse matrices on distributed memory machines. On a single processor, the factorization time of our parallel solver scales almost linearly with the problem size for three-dimensional problems, as opposed to the quadratic scalability of many existing sparse direct solvers. Moreover, our solver leads to almost constant numbers of iterations, when used as a preconditioner for Poisson problems. On more than one processor, our algorithm has significant speedups compared to sequential runs. With this parallel algorithm, we are able to solve large problems much faster than many existing packages as demonstrated by the numerical experiments.

  11. Engineering Antifragile Systems: A Change In Design Philosophy

    NASA Technical Reports Server (NTRS)

    Jones, Kennie H.

    2014-01-01

    While technology has made astounding advances in the last century, the engineering community faces problems that must be solved. The cost and schedule of producing large systems are increasing at an unsustainable rate, and these systems often do not perform as intended. New systems are required that may not be achievable by current methods. To solve these problems, NASA is working to infuse concepts from Complexity Science into the engineering process. Some of these problems may be solved by a change in design philosophy: instead of designing systems to meet known requirements, which always leads to systems that are fragile to some degree, systems should wherever possible be designed to be antifragile: cognitive cyberphysical systems that can learn from their experience, adapt to unforeseen events they face in their environment, and grow stronger in the face of adversity. Several examples of ongoing research efforts that employ this philosophy are presented.

  12. Spectral collocation for multiparameter eigenvalue problems arising from separable boundary value problems

    NASA Astrophysics Data System (ADS)

    Plestenjak, Bor; Gheorghiu, Călin I.; Hochstenbach, Michiel E.

    2015-10-01

    In numerous science and engineering applications a partial differential equation has to be solved on some fairly regular domain that allows the use of the method of separation of variables. In several orthogonal coordinate systems separation of variables applied to the Helmholtz, Laplace, or Schrödinger equation leads to a multiparameter eigenvalue problem (MEP); important cases include Mathieu's system, Lamé's system, and a system of spheroidal wave functions. Although multiparameter approaches are exploited occasionally to solve such equations numerically, MEPs remain less well known, and the variety of available numerical methods is not wide. The classical approach of discretizing the equations using standard finite differences leads to algebraic MEPs with large matrices, which are difficult to solve efficiently. The aim of this paper is to change this perspective. We show that by combining spectral collocation methods and new efficient numerical methods for algebraic MEPs it is possible to solve such problems both very efficiently and accurately. We improve on several previous results available in the literature, and also present a MATLAB toolbox for solving a wide range of problems.

  13. Adaptive Fuzzy Output-Constrained Fault-Tolerant Control of Nonlinear Stochastic Large-Scale Systems With Actuator Faults.

    PubMed

    Li, Yongming; Ma, Zhiyao; Tong, Shaocheng

    2017-09-01

    The problem of adaptive fuzzy output-constrained tracking fault-tolerant control (FTC) is investigated for large-scale stochastic nonlinear systems of pure-feedback form. The nonlinear systems considered in this paper possess unstructured uncertainties, unknown interconnection terms, and unknown nonaffine nonlinear faults. Fuzzy logic systems are employed to identify the unknown lumped nonlinear functions so that the problems of structured uncertainties can be solved, and an adaptive fuzzy state observer is designed to solve the nonmeasurable state problem. By combining barrier Lyapunov function theory with adaptive decentralized and stochastic control principles, a novel fuzzy adaptive output-constrained FTC approach is constructed. All the signals in the closed-loop system are proved to be bounded in probability, and the system outputs are constrained to a given compact set. Finally, the applicability of the proposed controller is demonstrated by a simulation example.

  14. MOOSE: A PARALLEL COMPUTATIONAL FRAMEWORK FOR COUPLED SYSTEMS OF NONLINEAR EQUATIONS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    G. Hansen; C. Newman; D. Gaston

    Systems of coupled, nonlinear partial differential equations often arise in simulation of nuclear processes. MOOSE: Multiphysics Object Oriented Simulation Environment, a parallel computational framework targeted at solving these systems, is presented. As opposed to traditional data-flow oriented computational frameworks, MOOSE is instead founded on mathematics based on Jacobian-free Newton Krylov (JFNK). Utilizing the mathematical structure present in JFNK, physics are modularized into "Kernels," allowing for rapid production of new simulation tools. In addition, systems are solved fully coupled and fully implicit, employing physics-based preconditioning, which allows for a large amount of flexibility even with large variance in time scales. Background on the mathematics, an inspection of the structure of MOOSE, and several representative solutions from applications built on the framework are presented.
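
    Below is a toy sketch of the JFNK idea at MOOSE's core (ours, not MOOSE code): Newton's method in which GMRES sees the Jacobian only through a finite-difference directional derivative, so no Jacobian matrix is ever formed.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def F(u):                                   # toy nonlinear BVP residual
            n = u.size                              # u'' = e^u, u = 0 at ends
            lap = -2.0 * u
            lap[:-1] += u[1:]
            lap[1:] += u[:-1]
            return lap * (n + 1) ** 2 - np.exp(u)

        def jfnk(u, tol=1e-10, eps=1e-7):
            for _ in range(20):
                r = F(u)
                if np.linalg.norm(r) < tol:
                    break
                J = LinearOperator((u.size, u.size),    # matrix-free Jacobian
                                   matvec=lambda v: (F(u + eps * v) - r) / eps)
                du, _ = gmres(J, -r)
                u = u + du
            return u

        u = jfnk(np.zeros(63))
        print(np.abs(F(u)).max())                   # small nonlinear residual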

  15. Research on unit commitment with large-scale wind power connected power system

    NASA Astrophysics Data System (ADS)

    Jiao, Ran; Zhang, Baoqun; Chi, Zhongjun; Gong, Cheng; Ma, Longfei; Yang, Bing

    2017-01-01

    Large-scale integration of wind power generators into the power grid brings severe challenges to power system economic dispatch due to the stochastic volatility of wind. Unit commitment including wind farms is analyzed in two parts: modeling and solution methods. The structures and characteristics of the models are summarized after classifying them according to their objective functions and constraints. Finally, the issues still to be solved and possible directions of future research and development are discussed, with a view to the requirements of the electricity market, energy-saving generation dispatch, and the smart grid, providing a reference for researchers and practitioners in this field.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, Edmond

    Solving sparse problems is at the core of many DOE computational science applications. We focus on the challenge of developing sparse algorithms that can fully exploit the parallelism in extreme-scale computing systems, in particular systems with massive numbers of cores per node. Our approach is to express a sparse matrix factorization as a large number of bilinear constraint equations, and then to solve these equations via an asynchronous iterative method. The unknowns in these equations are the entries of the desired factorization.
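
    This is in the spirit of Chow and Patel's fine-grained iterative ILU. A dense toy sketch follows (ours; real implementations keep L and U sparse and update entries asynchronously in parallel): each nonzero of the factorization must satisfy the bilinear constraint (LU)_ij = a_ij, and all constraints are simply swept over in an arbitrary order.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 100
        A = 10.0 * np.eye(n) - np.triu(rng.random((n, n)) < 0.03, 1)
        A = A + np.tril(A.T, -1)                  # symmetric sparse-ish pattern
        pattern = [(i, j) for i in range(n) for j in range(n) if A[i, j] != 0]
        rng.shuffle(pattern)                      # order is immaterial in principle

        L, U = np.eye(n), np.triu(A).astype(float)   # initial guess, unit L
        for sweep in range(10):                   # fixed-point sweeps
            for i, j in pattern:
                k = min(i, j)
                s = A[i, j] - L[i, :k] @ U[:k, j]
                if i > j:
                    L[i, j] = s / U[j, j]         # lower-triangular unknown
                else:
                    U[i, j] = s                   # upper-triangular unknown
        rows, cols = zip(*pattern)
        print(np.abs((L @ U - A)[rows, cols]).max())  # residual on the pattern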

  17. Designing Interaction as a Learning Process: Supporting Users' Domain Knowledge Development in Interaction

    ERIC Educational Resources Information Center

    Choi, Jung-Min

    2010-01-01

    The primary concern in current interaction design is focused on how to help users solve problems and achieve goals more easily and efficiently. While users' sufficient knowledge acquisition of operating a product or system is considered important, their acquisition of problem-solving knowledge in the task domain has largely been disregarded. As a…

  18. Empirical results on scheduling and dynamic backtracking

    NASA Technical Reports Server (NTRS)

    Boddy, Mark S.; Goldman, Robert P.

    1994-01-01

    At the Honeywell Technology Center (HTC), we have been working on a scheduling problem related to commercial avionics. This application is large, complex, and hard to solve. To be a little more concrete: 'large' means almost 20,000 activities, 'complex' means several activity types, periodic behavior, and assorted types of temporal constraints, and 'hard to solve' means that we have been unable to eliminate backtracking through the use of search heuristics. At this point, we can generate solutions, where solutions exist, or report failure and sometimes why the system failed. To the best of our knowledge, this is among the largest and most complex scheduling problems to have been solved as a constraint satisfaction problem, at least that has appeared in the published literature. This abstract is a preliminary report on what we have done and how. In the next section, we present our approach to treating scheduling as a constraint satisfaction problem. The following sections present the application in more detail and describe how we solve scheduling problems in the application domain. The implemented system makes use of Ginsberg's Dynamic Backtracking algorithm, with some minor extensions to improve its utility for scheduling. We describe those extensions and the performance of the resulting system. The paper concludes with some general remarks, open questions and plans for future work.

  19. An approximately factored incremental strategy for calculating consistent discrete aerodynamic sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Korivi, V. M.; Taylor, A. C., III; Newman, P. A.; Hou, G. J.-W.; Jones, H. E.

    1992-01-01

    An incremental strategy is presented for iteratively solving very large systems of linear equations, which are associated with aerodynamic sensitivity derivatives for advanced CFD codes. It is shown that the left-hand side matrix operator and the well-known factorization algorithm used to solve the nonlinear flow equations can also be used to efficiently solve the linear sensitivity equations. Two airfoil problems are considered as an example: subsonic low Reynolds number laminar flow and transonic high Reynolds number turbulent flow.

  20. Improved numerical solutions for chaotic-cancer-model

    NASA Astrophysics Data System (ADS)

    Yasir, Muhammad; Ahmad, Salman; Ahmed, Faizan; Aqeel, Muhammad; Akbar, Muhammad Zubair

    2017-01-01

    In the biological sciences, the dynamical system of the cancer model is well known for its sensitivity and chaoticity. The present work provides a detailed computational study of the cancer model, counterbalancing its sensitive dependence on initial conditions and parameter values. The chaotic cancer model is discretized into a system of nonlinear equations that is solved using the well-known Successive Over-Relaxation (SOR) method with proven convergence. This technique makes it possible to solve large systems and provides a more accurate approximation, as illustrated through tables, time-history maps, and phase portraits with detailed analysis.
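
    For reference, here is a minimal SOR sketch on a toy linear system (ours, not the paper's discretization; the relaxation factor is illustrative).

        # SOR: each Gauss-Seidel update is extrapolated by a factor omega.
        import numpy as np

        def sor(A, b, omega=1.5, iters=200):
            x = np.zeros(len(b))
            for _ in range(iters):
                for i in range(len(b)):
                    sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                    x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
            return x

        n = 50
        A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        x = sor(A, b)
        print(np.linalg.norm(A @ x - b))   # converges: A is diagonally dominant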

  1. A numerical projection technique for large-scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Gamillscheg, Ralf; Haase, Gundolf; von der Linden, Wolfgang

    2011-10-01

    We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique used in strongly correlated quantum many-body systems, where an effective approximate model of smaller complexity is first constructed by projecting out high-energy degrees of freedom, and the resulting model is then solved by some standard eigenvalue solver. Here we introduce a generalization of this idea in which both steps are performed numerically and which, in contrast to the standard projection technique, converges in principle to the exact eigenvalues. This approach is applicable not only to eigenvalue problems encountered in many-body systems but also in other areas of research that give rise to large-scale eigenvalue problems for matrices that have, roughly speaking, a pronounced dominant diagonal part. We present detailed studies of the approach guided by two many-body models.

  2. Vectorial finite elements for solving the radiative transfer equation

    NASA Astrophysics Data System (ADS)

    Badri, M. A.; Jolivet, P.; Rousseau, B.; Le Corre, S.; Digonnet, H.; Favennec, Y.

    2018-06-01

    The discrete ordinate method coupled with the finite element method is often used for the spatio-angular discretization of the radiative transfer equation. In this paper we attempt to improve upon such a discretization technique. Instead of using standard finite elements, we reformulate the radiative transfer equation using vectorial finite elements. In comparison to standard finite elements, this reformulation yields faster timings for the linear system assemblies, as well as for the solution phase when using scattering media. The proposed vectorial finite element discretization for solving the radiative transfer equation is cross-validated against a benchmark problem available in the literature. In addition, we have used the method of manufactured solutions to verify the order of accuracy of our discretization technique within different absorbing, scattering, and emitting media. For solving large radiation problems on parallel computers, the vectorial finite element method is parallelized using domain decomposition. The proposed domain decomposition method scales to large numbers of processes, and its performance is unaffected by changes in the optical thickness of the medium. Our parallel solver is used to solve a large-scale radiative transfer problem of Kelvin-cell radiation.

  3. Amesos2 and Belos: Direct and Iterative Solvers for Large Sparse Linear Systems

    DOE PAGES

    Bavier, Eric; Hoemmen, Mark; Rajamanickam, Sivasankaran; ...

    2012-01-01

    Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary "Scalar" type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.

  4. Scalable software architectures for decision support.

    PubMed

    Musen, M A

    1999-12-01

    Interest in decision-support programs for clinical medicine soared in the 1970s. Since that time, workers in medical informatics have been particularly attracted to rule-based systems as a means of providing clinical decision support. Although developers have built many successful applications using production rules, they also have discovered that creation and maintenance of large rule bases is quite problematic. In the 1980s, several groups of investigators began to explore alternative programming abstractions that can be used to build decision-support systems. As a result, the notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) problem-solving methods--domain-independent algorithms for automating stereotypical tasks--and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper highlights how developers can construct large, maintainable decision-support systems using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community.

  5. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull); common-mode failure modeling is also a specialty.

  6. Inverse problems in the design, modeling and testing of engineering systems

    NASA Technical Reports Server (NTRS)

    Alifanov, Oleg M.

    1991-01-01

    Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.

  7. A fast solver for the Helmholtz equation based on the generalized multiscale finite-element method

    NASA Astrophysics Data System (ADS)

    Fu, Shubin; Gao, Kai

    2017-11-01

    Conventional finite-element methods for solving the acoustic-wave Helmholtz equation in highly heterogeneous media usually require a finely discretized mesh to represent the medium property variations with sufficient accuracy. Computational costs for solving the Helmholtz equation can therefore be considerable for complicated and large geological models. Based on the generalized multiscale finite-element theory, we develop a novel continuous Galerkin method to solve the Helmholtz equation in acoustic media with spatially variable velocity and mass density. Instead of using conventional polynomial basis functions, we use multiscale basis functions to form the approximation space on the coarse mesh. The multiscale basis functions are obtained by multiplying the eigenfunctions of a carefully designed local spectral problem with an appropriate multiscale partition of unity. These multiscale basis functions can effectively incorporate the characteristics of the heterogeneous medium's fine-scale variations, thus enabling us to obtain an accurate solution to the Helmholtz equation without directly solving the large discrete system formed on the fine mesh. Numerical results show that our new solver can significantly reduce the dimension of the discrete Helmholtz equation system, and can also substantially reduce the computational time.

  8. Weighted least squares phase unwrapping based on the wavelet transform

    NASA Astrophysics Data System (ADS)

    Chen, Jiafeng; Chen, Haiqin; Yang, Zhengang; Ren, Haixia

    2007-01-01

    The weighted least squares phase unwrapping algorithm is a robust and accurate method for solving the phase unwrapping problem. This method usually leads to a large sparse linear equation system. The Gauss-Seidel relaxation method is usually used to solve this large linear system. However, this method is not practical due to its extremely slow convergence. The multigrid method is an efficient algorithm for improving the convergence rate, but it needs an additional weight restriction operator which is very complicated. For this reason, a multiresolution analysis method based on the wavelet transform is proposed. By applying the wavelet transform, the original system is decomposed into its coarse and fine resolution levels, and an equivalent equation system with a better convergence condition is obtained. Fast convergence in the separate coarse resolution levels speeds up the overall convergence rate. The simulated experiment shows that the proposed method converges faster and provides better results than the multigrid method.
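
    The baseline relaxation that both the multigrid and the wavelet approaches try to accelerate looks roughly like the sketch below, shown on a small dense system for clarity; a real unwrapping solver would instead sweep over the sparse weighted normal equations (the test matrix here is an illustrative stand-in).

        import numpy as np

        def gauss_seidel(A, b, tol=1e-8, max_iter=10_000):
            # Plain Gauss-Seidel relaxation for Ax = b (dense, illustration only)
            n = len(b)
            x = np.zeros(n)
            for it in range(1, max_iter + 1):
                x_old = x.copy()
                for i in range(n):
                    sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                    x[i] = (b[i] - sigma) / A[i, i]
                if np.linalg.norm(x - x_old) <= tol * (np.linalg.norm(x) + 1e-30):
                    return x, it
            return x, max_iter

        A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
        b = np.array([2.0, 4.0, 10.0])
        x, iters = gauss_seidel(A, b)
        print(x, iters)

    On the large, poorly conditioned systems arising from real interferograms, the sweep count of this iteration grows rapidly with problem size, which is the slow convergence the wavelet decomposition is designed to cure.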

  9. Solving large-scale dynamic systems using band Lanczos method in Rockwell NASTRAN on CRAY X-MP

    NASA Technical Reports Server (NTRS)

    Gupta, V. K.; Zillmer, S. D.; Allison, R. E.

    1986-01-01

    Better models, more accurate and faster algorithms, and large-scale computing improve cost effectiveness and permit more representative dynamic analyses. The band Lanczos eigen-solution method was implemented in Rockwell's version of the 1984 COSMIC-released NASTRAN finite element structural analysis computer program to effectively solve for structural vibration modes, including those of large complex systems exceeding 10,000 degrees of freedom. The Lanczos vectors were re-orthogonalized locally using the Lanczos method and globally using the modified Gram-Schmidt method for sweeping out rigid-body modes and previously generated modes and Lanczos vectors. The truncated band matrix was solved for vibration frequencies and mode shapes using Givens rotations. Numerical examples are included to demonstrate the cost effectiveness and accuracy of the method as implemented in Rockwell NASTRAN. The CRAY version is based on RPK's COSMIC/NASTRAN. The band Lanczos method was more reliable and accurate and converged faster than the single-vector Lanczos method. It was comparable to the subspace iteration method, a block version of the inverse power method; however, the subspace matrix tended to be fully populated in subspace iteration, rather than sparse like a band matrix.

  10. Internet computer coaches for introductory physics problem solving

    NASA Astrophysics Data System (ADS)

    Xu Ryan, Qing

    The ability to solve problems in a variety of contexts is becoming increasingly important in our rapidly changing technological society. Problem-solving is a complex process that is important for everyday life and crucial for learning physics. Although there is a great deal of effort to improve student problem solving skills throughout the educational system, national studies have shown that the majority of students emerge from such courses having made little progress toward developing good problem-solving skills. The Physics Education Research Group at the University of Minnesota has been developing Internet computer coaches to help students become more expert-like problem solvers. During the Fall 2011 and Spring 2013 semesters, the coaches were introduced into large sections (200+ students) of the calculus-based introductory mechanics course at the University of Minnesota. This dissertation addresses the research background of the project, including the pedagogical design of the coaches and the assessment of problem solving. The methodological framework for conducting the experiments is explained. The data collected from the large-scale experimental studies are discussed from the following aspects: the usage and usability of these coaches; the usefulness perceived by students; and the usefulness measured by final exam and problem solving rubric. The dissertation also addresses the implications drawn from this study, including using the data to direct future coach design, and difficulties in conducting authentic assessment of problem-solving.

  11. The Use of Iterative Linear-Equation Solvers in Codes for Large Systems of Stiff IVPs (Initial-Value Problems) for ODEs (Ordinary Differential Equations).

    DTIC Science & Technology

    1984-04-01

    ...numerical solution of systems of stiff IVPs for ODEs. Frequently, a substantial portion of the total computational work and storage required to solve stiff systems is spent on algebraic equations: except, possibly, for special classes of problems, a system of linear or nonlinear algebraic equations must be solved at each step of the numerical integration. Iterative methods, of which the conjugate gradient method [43] is a well-known example, have proven to be particularly effective for solving the linear systems that arise in the numerical solution.

  12. A two-qubit photonic quantum processor and its application to solving systems of linear equations

    PubMed Central

    Barz, Stefanie; Kassal, Ivan; Ringbauer, Martin; Lipp, Yannick Ole; Dakić, Borivoje; Aspuru-Guzik, Alán; Walther, Philip

    2014-01-01

    Large-scale quantum computers will require the ability to apply long sequences of entangling gates to many qubits. In a photonic architecture, where single-qubit gates can be performed easily and precisely, the application of consecutive two-qubit entangling gates has been a significant obstacle. Here, we demonstrate a two-qubit photonic quantum processor that implements two consecutive CNOT gates on the same pair of polarisation-encoded qubits. To demonstrate the flexibility of our system, we implement various instances of the quantum algorithm for solving systems of linear equations. PMID:25135432

  13. AZTEC. Parallel Iterative method Software for Solving Linear Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, S.; Shadid, J.; Tuminaro, R.

    1995-07-01

    AZTEC is an iterative solver library that greatly simplifies the parallelization process when solving the linear system of equations Ax = b, where A is a user-supplied n × n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. AZTEC is intended as a software tool for users who want to avoid cumbersome parallel programming details but who have large sparse linear systems that require an efficiently utilized parallel processing system. A collection of data transformation tools is provided that allows easy creation of distributed sparse unstructured matrices for parallel solution.

  14. A multilevel correction adaptive finite element method for Kohn-Sham equation

    NASA Astrophysics Data System (ADS)

    Hu, Guanghui; Xie, Hehu; Xu, Fei

    2018-02-01

    In this paper, an adaptive finite element method with a multilevel correction technique is proposed for solving the Kohn-Sham equation. In the method, the Kohn-Sham equation is solved on a fixed and appropriately coarse mesh with the finite element method, in which the finite element space is successively improved by solving derived boundary value problems on a series of adaptively refined meshes. A main feature of the method is that solving the large-scale Kohn-Sham system is avoided effectively, and the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, significant acceleration is obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.

  15. Biomimetics: its practice and theory.

    PubMed

    Vincent, Julian F V; Bogatyreva, Olga A; Bogatyrev, Nikolaj R; Bowyer, Adrian; Pahl, Anja-Karina

    2006-08-22

    Biomimetics, a name coined by Otto Schmitt in the 1950s for the transfer of ideas and analogues from biology to technology, has produced some significant and successful devices and concepts in the past 50 years, but is still empirical. We show that TRIZ, the Russian system of problem solving, can be adapted to illuminate and manipulate this process of transfer. Analysis using TRIZ shows that there is only 12% similarity between biology and technology in the principles which solutions to problems illustrate, and while technology solves problems largely by manipulating usage of energy, biology uses information and structure, two factors largely ignored by technology.

  16. Adaptive Neural Networks Decentralized FTC Design for Nonstrict-Feedback Nonlinear Interconnected Large-Scale Systems Against Actuator Faults.

    PubMed

    Li, Yongming; Tong, Shaocheng

    The problem of active fault-tolerant control (FTC) is investigated for the large-scale nonlinear systems in nonstrict-feedback form. The nonstrict-feedback nonlinear systems considered in this paper consist of unstructured uncertainties, unmeasured states, unknown interconnected terms, and actuator faults (e.g., bias fault and gain fault). A state observer is designed to solve the unmeasurable state problem. Neural networks (NNs) are used to identify the unknown lumped nonlinear functions so that the problems of unstructured uncertainties and unknown interconnected terms can be solved. By combining the adaptive backstepping design principle with the combination Nussbaum gain function property, a novel NN adaptive output-feedback FTC approach is developed. The proposed FTC controller can guarantee that all signals in all subsystems are bounded, and the tracking errors for each subsystem converge to a small neighborhood of zero. Finally, numerical results of practical examples are presented to further demonstrate the effectiveness of the proposed control strategy.

  17. Application of Nearly Linear Solvers to Electric Power System Computation

    NASA Astrophysics Data System (ADS)

    Grant, Lisa L.

    To meet the future needs of the electric power system, improvements need to be made in power system algorithms, simulation, and modeling, specifically to achieve run times that are useful to industry. If power system time-domain simulations could run in real time, system operators would have the situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, to produce power-system-specific methods with nearly-linear run times. The work explores a new theoretical method based on ideas from graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low-stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.

  18. Parallel Optimization of Polynomials for Large-scale Problems in Stability and Control

    NASA Astrophysics Data System (ADS)

    Kamyar, Reza

    In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to the converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems --- in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) --- whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence - meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. The existing optimization algorithms and software are only designed to use desktop computers or small cluster computers --- machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This in fact is the reason we seek parallel algorithms for setting-up and solving large SDPs on large cluster- and/or super-computers. We propose parallel algorithms for stability analysis of two classes of systems: 1) Linear systems with a large number of uncertain parameters; 2) Nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment. Numerical tests on a supercomputer demonstrate the ability of the algorithm to efficiently utilize hundreds and potentially thousands of processors, and analyze systems with 100+ dimensional state-space. Furthermore, we extend our algorithms to analyze robust stability over more complicated geometries such as hypercubes and arbitrary convex polytopes. Our algorithms can be readily extended to address a wide variety of problems in control such as Hinfinity synthesis for systems with parametric uncertainty and computing control Lyapunov functions.

  19. Improved Quasi-Newton method via PSB update for solving systems of nonlinear equations

    NASA Astrophysics Data System (ADS)

    Mamat, Mustafa; Dauda, M. K.; Waziri, M. Y.; Ahmad, Fadhilah; Mohamad, Fatma Susilawati

    2016-10-01

    The Newton method has some shortcomings, including computation of the Jacobian matrix, which may be difficult or even impossible, and the need to solve the Newton system at every iteration. A common setback with some quasi-Newton methods is that they must compute and store an n × n matrix at each iteration, which is computationally costly for large-scale problems. To overcome such drawbacks, an improved method for solving systems of nonlinear equations via the PSB (Powell-Symmetric-Broyden) update is proposed. In the proposed method, the approximate inverse Jacobian Hk of the PSB update is maintained and updated directly, improving efficiency and requiring low memory storage, which is the main aim of this paper. Preliminary numerical results show that the proposed method is practically efficient when applied to some benchmark problems.
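
    For reference, the classical PSB correction of a Jacobian approximation B, given step s and residual change y, is the symmetric rank-two update sketched below. The damping-free iteration and the test system are illustrative assumptions, and the paper itself maintains the approximate inverse Hk rather than B.

        import numpy as np

        def psb_update(B, s, y):
            # Powell-Symmetric-Broyden: symmetric rank-two correction of B
            r = y - B @ s
            ss = s @ s
            return (B + (np.outer(r, s) + np.outer(s, r)) / ss
                    - ((r @ s) / ss**2) * np.outer(s, s))

        def quasi_newton_psb(F, x0, tol=1e-10, max_iter=100):
            x = np.asarray(x0, dtype=float)
            B = np.eye(len(x))               # initial Jacobian approximation
            fx = F(x)
            for _ in range(max_iter):
                if np.linalg.norm(fx) < tol:
                    break
                s = np.linalg.solve(B, -fx)  # Newton-like step, approximate Jacobian
                x_new = x + s
                fx_new = F(x_new)
                B = psb_update(B, s, fx_new - fx)
                x, fx = x_new, fx_new
            return x

        # Mildly nonlinear test system (illustrative only)
        F = lambda x: np.array([2 * x[0] + 0.1 * x[1] ** 2 - 1.0,
                                0.1 * x[0] ** 2 + 2 * x[1] - 1.0])
        print(quasi_newton_psb(F, [0.0, 0.0]))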

  20. The pseudo-Boolean optimization approach to form the N-version software structure

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.

    2015-10-01

    The problem of developing an optimal structure for an N-version software system is a very complex optimization problem, which makes deterministic optimization methods inappropriate for solving it. In this view, exploiting heuristic strategies looks more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems of large dimensionality. Some additional modifications of MVP have been made to solve the problem of N-version system design; those algorithms take into account the discovered specific features of the objective function. Practical experiments have shown the advantage of these algorithm modifications, which comes from reducing the search space.

  1. Modern architectures for intelligent systems: reusable ontologies and problem-solving methods.

    PubMed Central

    Musen, M. A.

    1998-01-01

    When interest in intelligent systems for clinical medicine soared in the 1970s, workers in medical informatics became particularly attracted to rule-based systems. Although many successful rule-based applications were constructed, development and maintenance of large rule bases remained quite problematic. In the 1980s, an entire industry dedicated to the marketing of tools for creating rule-based systems rose and fell, as workers in medical informatics began to appreciate deeply why knowledge acquisition and maintenance for such systems are difficult problems. During this time period, investigators began to explore alternative programming abstractions that could be used to develop intelligent systems. The notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) domain-independent problem-solving methods-standard algorithms for automating stereotypical tasks--and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper will highlight how intelligent systems for diverse tasks can be efficiently automated using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community. PMID:9929181

  2. Modern architectures for intelligent systems: reusable ontologies and problem-solving methods.

    PubMed

    Musen, M A

    1998-01-01

    When interest in intelligent systems for clinical medicine soared in the 1970s, workers in medical informatics became particularly attracted to rule-based systems. Although many successful rule-based applications were constructed, development and maintenance of large rule bases remained quite problematic. In the 1980s, an entire industry dedicated to the marketing of tools for creating rule-based systems rose and fell, as workers in medical informatics began to appreciate deeply why knowledge acquisition and maintenance for such systems are difficult problems. During this time period, investigators began to explore alternative programming abstractions that could be used to develop intelligent systems. The notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) domain-independent problem-solving methods-standard algorithms for automating stereotypical tasks--and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper will highlight how intelligent systems for diverse tasks can be efficiently automated using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community.

  3. The study on servo-control system in the large aperture telescope

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Zhenchao, Zhang; Daxing, Wang

    2008-08-01

    Servo tracking is one of the crucial technologies that must be mastered in the research and manufacture of large and extremely large astronomical telescopes. Addressing the control requirements of such telescopes, this paper designs a servo tracking control system. The system is organized as a master-slave distributed control system: the host computer sends steering instructions and receives the slave computer's operating status, while the slave computer runs the control algorithm and executes real-time control. The servo control uses a direct-drive motor and adopts DSP technology to implement the direct torque control algorithm. This design not only increases control system performance but also greatly reduces the volume and cost of the control system, which is significant. The design scheme is shown to be reasonable by calculation and simulation, and the system can be applied to large astronomical telescopes.

  4. Coordinated interaction of two hydraulic cylinders when moving large-sized objects

    NASA Astrophysics Data System (ADS)

    Kreinin, G. V.; Misyurin, S. Yu; Lunev, A. V.

    2017-12-01

    The problem of choosing the parameters and control scheme of a dynamic system for the coordinated displacement of a large-mass object by two piston-type hydraulic engines is considered. As a first stage, the problem is solved for a system in which a heavy load of relatively large geometric dimensions is lifted or lowered in translational motion by two unidirectional hydraulic cylinders while keeping the plane of the lifted object strictly horizontal.

  5. Automation of multi-agent control for complex dynamic systems in heterogeneous computational network

    NASA Astrophysics Data System (ADS)

    Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan

    2017-01-01

    The rapid progress of high-performance computing entails new challenges related to solving large scientific problems in various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in parallel and distributed computing pay special attention to the scalability of problem-solving applications, yet effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue; control systems that operate in networks are especially affected. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automating problem solving. Advantages of the proposed approach are demonstrated on the example of parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automated multi-agent control of the systems in parallel mode with various degrees of detail.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moryakov, A. V., E-mail: sailor@yauza.ru; Pylyov, S. S.

    This paper presents the formulation of the problem and the methodical approach for solving large systems of linear differential equations describing nonstationary processes with the use of CUDA technology; this approach is implemented in the ANGEL program. Results for a test problem on transport of radioactive products over loops of a nuclear power plant are given. The possibilities for the use of the ANGEL program for solving various problems that simulate arbitrary nonstationary processes are discussed.

  7. Perspectives on the Use of the Problem-Solving Model from the Viewpoint of a School Psychologist, Administrator, and Teacher from a Large Midwest Urban School District

    ERIC Educational Resources Information Center

    Lau, Matthew Y.; Sieler, Jay D.; Muyskens, Paul; Canter, Andrea; VanKeuren, Barbara; Marston, Doug

    2006-01-01

    The Minneapolis Public School System has been implementing an intervention-based approach to special education placement. This Problem-Solving Model (PSM) was designed to de-emphasize the role of norm-referenced tests and to provide early instructional interventions. The basic outline of the PSM is to define the problem, determine the best…

  8. Conducting Automated Test Assembly Using the Premium Solver Platform Version 7.0 with Microsoft Excel and the Large-Scale LP/QP Solver Engine Add-In

    ERIC Educational Resources Information Center

    Cor, Ken; Alves, Cecilia; Gierl, Mark J.

    2008-01-01

    This review describes and evaluates a software add-in created by Frontline Systems, Inc., that can be used with Microsoft Excel 2007 to solve large, complex test assembly problems. The combination of Microsoft Excel 2007 with the Frontline Systems Premium Solver Platform is significant because Microsoft Excel is the most commonly used spreadsheet…

  9. Micro-bias and macro-performance.

    PubMed

    Seaver, S M D; Moreira, A A; Sales-Pardo, M; Malmgren, R D; Diermeier, D; Amaral, L A N

    2009-02-01

    We use agent-based modeling to investigate the effect of conservatism and partisanship on the efficiency with which large populations solve the density classification task - a paradigmatic problem for information aggregation and consensus building. We find that conservative agents enhance the populations' ability to efficiently solve the density classification task despite large levels of noise in the system. In contrast, we find that the presence of even a small fraction of partisans holding the minority position will result in deadlock or a consensus on an incorrect answer. Our results provide a possible explanation for the emergence of conservatism and suggest that even low levels of partisanship can lead to significant social costs.
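
    A toy version of the density classification task makes the setup concrete; the ring topology, neighborhood size, update rule, and thresholds below are illustrative assumptions, not the authors' model.

        import random

        def simulate(n=201, frac_ones=0.6, conservatism=0.5, n_partisans=0,
                     steps=20_000):
            # Ring of binary agents copying their local majority (toy model);
            # raise `conservatism` toward 1.0 to model conservative agents
            state = [1 if random.random() < frac_ones else 0 for _ in range(n)]
            for i in range(n_partisans):
                state[i] = 0                 # partisans permanently hold state 0
            for _ in range(steps):
                i = random.randrange(n)
                if i < n_partisans:
                    continue                 # partisans never update
                window = [state[(i + d) % n] for d in (-2, -1, 0, 1, 2)]
                frac = sum(window) / len(window)
                if frac > conservatism:      # enough neighbors say 1: switch to 1
                    state[i] = 1
                elif frac < 1 - conservatism:
                    state[i] = 0
            return sum(state) / n

        print(simulate())                    # tends toward 1, the true majority
        print(simulate(n_partisans=30))      # partisans can block correct consensus

    Even this crude sketch shows the qualitative effect: a handful of immovable partisans can pin the population away from the correct majority answer, loosely mirroring the deadlock the authors report.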

  10. Biomimetics: its practice and theory

    PubMed Central

    Vincent, Julian F.V; Bogatyreva, Olga A; Bogatyrev, Nikolaj R; Bowyer, Adrian; Pahl, Anja-Karina

    2006-01-01

    Biomimetics, a name coined by Otto Schmitt in the 1950s for the transfer of ideas and analogues from biology to technology, has produced some significant and successful devices and concepts in the past 50 years, but is still empirical. We show that TRIZ, the Russian system of problem solving, can be adapted to illuminate and manipulate this process of transfer. Analysis using TRIZ shows that there is only 12% similarity between biology and technology in the principles which solutions to problems illustrate, and while technology solves problems largely by manipulating usage of energy, biology uses information and structure, two factors largely ignored by technology. PMID:16849244

  11. Software environment for implementing engineering applications on MIMD computers

    NASA Technical Reports Server (NTRS)

    Lopez, L. A.; Valimohamed, K. A.; Schiff, S.

    1990-01-01

    In this paper the concept for a software environment for developing engineering application systems for multiprocessor hardware (MIMD) is presented. The philosophy employed is to solve the largest problems possible in a reasonable amount of time, rather than solve existing problems faster. In the proposed environment most of the problems concerning parallel computation and handling of large distributed data spaces are hidden from the application program developer, thereby facilitating the development of large-scale software applications. Applications developed under the environment can be executed on a variety of MIMD hardware; it protects the application software from the effects of a rapidly changing MIMD hardware technology.

  12. Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Balas, Gary J.

    1995-01-01

    This paper considers an algorithm for synthesis of optimal controllers for full information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated. It is demonstrated that the problem dimension and corresponding matrices can become large for practical engineering problems, making the computation impractical on standard workstations for large-order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be solved in a reasonable amount of time using a Cray supercomputer.

  13. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Interpolating scattered data points is a problem of wide ranging interest. A number of approaches for interpolation have been proposed both from theoretical domains such as computational geometry and in applications' fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive. This is because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is solved by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. This process achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique has been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems. We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.

  14. The two-phase method for finding a great number of eigenpairs of the symmetric or weakly non-symmetric large eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dul, F.A.; Arczewski, K.

    1994-03-01

    Although it has been stated that "an attempt to solve (very large problems) by subspace iterations seems futile," we will show that the statement is not true, especially for extremely large eigenproblems. In this paper a new two-phase subspace iteration/Rayleigh quotient/conjugate gradient method for generalized, large, symmetric eigenproblems Ax = λBx is presented. It is able to solve extremely large eigenproblems, N = 216,000, for example, and to find a large number of leftmost or rightmost eigenpairs, up to 1000 or more. Multiple eigenpairs, even those with multiplicity 100, can be easily found. The use of the proposed method for solving big full eigenproblems (N ≈ 10^3), as well as for large weakly non-symmetric eigenproblems, is also considered. The proposed method is fully iterative; thus the factorization of matrices is avoided. The key idea consists in joining two methods: subspace and Rayleigh quotient iterations. The indefinite and almost singular linear systems (A - σB)x = By are solved by various iterative methods; the conjugate gradient method can be used without danger of breaking down due to a property that may be called "self-correction towards the eigenvector," discovered recently by us. The use of various preconditioners (SSOR and IC) has also been considered. The main features of the proposed method are analyzed in detail. Comparisons with other methods, such as accelerated subspace iteration, Lanczos, Davidson, TLIME, TRACMN, and SRQMCG, are presented. The results of numerical tests for various physical problems (acoustics, vibrations of structures, quantum chemistry) are presented as well. 40 refs., 12 figs., 2 tabs.
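
    The Rayleigh quotient half of the two-phase idea can be sketched compactly for the standard symmetric case (B = I); the direct shifted solve below stands in for the paper's iterative inner solves, and the random test matrix is illustrative.

        import numpy as np

        def rayleigh_quotient_iteration(A, x0, tol=1e-12, max_iter=50):
            # Rayleigh quotient iteration for the symmetric problem Ax = lambda*x
            x = x0 / np.linalg.norm(x0)
            for _ in range(max_iter):
                rho = x @ A @ x                          # Rayleigh quotient
                try:
                    y = np.linalg.solve(A - rho * np.eye(len(x)), x)
                except np.linalg.LinAlgError:
                    break                                # shift hit an eigenvalue exactly
                x = y / np.linalg.norm(y)
                if np.linalg.norm(A @ x - rho * x) < tol:
                    break
            return x @ A @ x, x

        rng = np.random.default_rng(0)
        M = rng.standard_normal((6, 6))
        A = (M + M.T) / 2                                # symmetric test matrix
        lam, v = rayleigh_quotient_iteration(A, rng.standard_normal(6))
        print(lam, np.linalg.norm(A @ v - lam * v))

    The shifted systems become nearly singular as the shift approaches an eigenvalue, which is exactly why the "self-correction towards the eigenvector" behavior of conjugate gradient inner iterations claimed in the abstract matters.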

  15. Systems of Inhomogeneous Linear Equations

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many problems in physics, and especially computational physics, involve systems of linear equations which arise, e.g., from linearization of a general nonlinear problem or from discretization of differential equations. If the dimension of the system is not too large, standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives; they can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of the Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
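
    The specialized elimination for tridiagonal systems mentioned above is the Thomas algorithm: one O(n) forward sweep plus back substitution. The sketch below is a minimal version, with spline-like test coefficients chosen only for illustration.

        import numpy as np

        def thomas(a, b, c, d):
            # Solve a tridiagonal system: a = sub-diagonal (a[0] unused),
            # b = diagonal, c = super-diagonal (c[-1] unused), d = right-hand side
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        # Spline-like test system with constant bands (illustrative)
        n = 5
        a, b, c = np.full(n, 1.0), np.full(n, 4.0), np.full(n, 1.0)
        d = np.arange(1.0, n + 1)
        print(thomas(a, b, c, d))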

  16. A new modified conjugate gradient coefficient for solving system of linear equations

    NASA Astrophysics Data System (ADS)

    Hajar, N.; ‘Aini, N.; Shapiee, N.; Abidin, Z. Z.; Khadijah, W.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is an evolution of computational methods for solving unconstrained optimization problems. This approach is easy to implement due to its simplicity and has been proven effective in solving real-life applications. Although this field has received a copious amount of attention in recent years, some of the new variants of the CG algorithm cannot surpass the efficiency of the previous versions. Therefore, in this paper, a new CG coefficient which retains the sufficient descent and global convergence properties of the original CG methods is proposed. This new CG method is tested on a set of test functions under exact line search. Its performance is then compared to that of some well-known previous CG methods based on number of iterations and CPU time. The results show that the new CG algorithm has the best efficiency amongst all the methods tested. This paper also includes an application of the new CG algorithm to solving large systems of linear equations.
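
    The paper's new coefficient is not reproduced here; as a generic reference point, the sketch below runs nonlinear CG with the classical Fletcher-Reeves coefficient and a backtracking line search (the paper uses exact line search), applied to a convex quadratic whose minimizer solves the linear system Ax = b.

        import numpy as np

        def nonlinear_cg(f, grad, x0, tol=1e-8, max_iter=500):
            # Nonlinear CG with the classical Fletcher-Reeves coefficient
            x = np.asarray(x0, dtype=float)
            g = grad(x)
            d = -g
            for _ in range(max_iter):
                if np.linalg.norm(g) < tol:
                    break
                if g @ d >= 0:
                    d = -g                   # safeguard: restart with steepest descent
                t, fx = 1.0, f(x)            # backtracking line search
                while f(x + t * d) > fx + 1e-4 * t * (g @ d):
                    t *= 0.5
                x_new = x + t * d
                g_new = grad(x_new)
                beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
                d = -g_new + beta * d
                x, g = x_new, g_new
            return x

        # Minimizing 0.5 x^T A x - b^T x solves the linear system Ax = b
        A = np.array([[3.0, 1.0], [1.0, 2.0]])
        b = np.array([1.0, 1.0])
        print(nonlinear_cg(lambda x: 0.5 * x @ A @ x - b @ x,
                           lambda x: A @ x - b, np.zeros(2)))   # -> [0.2, 0.4]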

  17. Jump phenomena. [large amplitude responses of nonlinear systems

    NASA Technical Reports Server (NTRS)

    Reiss, E. L.

    1980-01-01

    The paper considers jump phenomena: large amplitude responses of nonlinear systems caused by small amplitude disturbances. Physical problems where large jumps in the solution amplitude are important features of the response are described, including snap buckling of elastic shells, chemical reactions leading to combustion and explosion, and long-term climatic changes of the earth's atmosphere. A new method of rational functions is then developed, which represents the solutions of jump problems as rational functions of the small disturbance parameter; this method can solve jump problems explicitly.

  18. Inversion of very large matrices encountered in large scale problems of photogrammetry and photographic astrometry

    NASA Technical Reports Server (NTRS)

    Brown, D. C.

    1971-01-01

    The simultaneous adjustment of very large nets of overlapping plates covering the celestial sphere becomes computationally feasible by virtue of a twofold process that generates a system of normal equations having a bordered-banded coefficient matrix, and solves such a system in a highly efficient manner. Numerical results suggest that when a well constructed spherical net is subjected to a rigorous, simultaneous adjustment, the exercise of independently established control points is required neither for determinacy nor for production of accurate results.

  19. Solving large scale unit dilemma in electricity system by applying commutative law

    NASA Astrophysics Data System (ADS)

    Legino, Supriadi; Arianto, Rakhmat

    2018-03-01

    The conventional system pools resources, with large centralized power plants interconnected as a network. It provides many advantages compared to isolated systems, including optimized efficiency and reliability. However, such large plants need huge capital, and growing problems hinder the construction of big power plants as well as their associated transmission lines. By applying the commutative law of mathematics, ab = ba for all a, b ∈ R, the problems associated with the conventional system described above can be reduced. The idea of having many small power plants, namely "Listrik Kerakyatan" (LK), provides both social and environmental benefits that can be capitalized under proper assumptions. This study compares the cost and benefit of LK to those of the conventional system, using a simulation method to show that LK offers an alternative solution to many problems associated with the large system. The commutative law of algebra can be used as a simple mathematical model to analyze whether the LK system, as an eco-friendly form of distributed generation, can solve various problems associated with a large-scale conventional system. The simulation results show that LK provides more value if its plants operate for fewer than 11 hours as peaker or load-follower power plants to improve the load curve balance of the power system, and they indicate that the investment cost of LK plants should be optimized. This study suggests that the benefit of the economies-of-scale principle does not always apply, particularly if the portion of intangible costs and benefits is relatively high.

  20. What's the Payoff?: Assessing the Efficacy of Student Response Systems

    ERIC Educational Resources Information Center

    Baumann, Zachary D.; Marchetti, Kathleen; Soltoff, Benjamin

    2015-01-01

    Student response systems, or "clickers," have been presented as a way of solving student engagement problems, particularly in large-enrollment classes. These devices provide real-time feedback to instructors, allowing them to understand what students are thinking and how well they comprehend material. As clickers become more common, it…

  1. Online Self-Organizing Social Systems: The Decentralized Future of Online Learning.

    ERIC Educational Resources Information Center

    Wiley, David A.; Edwards, Erin K.

    2002-01-01

    Describes an online self-organizing social system (OSOSS) which allows large numbers of individuals to self-organize in a highly decentralized manner to solve problems and accomplish other goals. Topics include scalability and bandwidth in online learning; self-organization; learning objects; instructional design underlying OSOSS, including…

  2. Creating Shared Instructional Products: An Alternative Approach to Improving Teaching

    ERIC Educational Resources Information Center

    Morris, Anne K.; Hiebert, James

    2011-01-01

    To solve two enduring problems in education--unacceptably large variation in learning opportunities for students across classrooms and little continuing improvement in the quality of instruction--the authors propose a system that centers on the creation of shared instructional products that guide classroom teaching. By examining systems outside…

  3. Comparison of an algebraic multigrid algorithm to two iterative solvers used for modeling ground water flow and transport

    USGS Publications Warehouse

    Detwiler, R.L.; Mehl, S.; Rajaram, H.; Cheung, W.W.

    2002-01-01

    Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. CPU times required for convergence with AMG were up to 140 times faster than those for PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded increased speeds of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling.
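
    The flavor of the comparison can be reproduced in a few lines with the pyamg package (assumed available) against a preconditioned Krylov baseline from SciPy; the model Poisson matrix stands in for a ground water flow discretization, and the ILU-preconditioned CG is only an analogue of MODFLOW's PCG2, which uses modified incomplete Cholesky.

        import numpy as np
        import scipy.sparse.linalg as spla
        import pyamg  # assumed available: pip install pyamg

        # Model 2-D Poisson matrix standing in for a ground water flow system
        A = pyamg.gallery.poisson((200, 200), format='csr')
        b = np.ones(A.shape[0])

        # Algebraic multigrid (classical Ruge-Stuben coarsening)
        ml = pyamg.ruge_stuben_solver(A)
        x_amg = ml.solve(b, tol=1e-8)

        # Preconditioned conjugate gradient baseline (ILU preconditioner here)
        ilu = spla.spilu(A.tocsc())
        M = spla.LinearOperator(A.shape, ilu.solve)
        x_cg, info = spla.cg(A, b, M=M)

        print(np.linalg.norm(b - A @ x_amg), np.linalg.norm(b - A @ x_cg))

    As in the paper, the interesting measurement is wall-clock time and memory as the grid grows, not the residuals themselves.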

  4. Stop Thief!

    ERIC Educational Resources Information Center

    American School and University, 1984

    1984-01-01

    A large urban school board has solved the security problem for electronically stored student records by refusing to file them on a dialup system and by requiring a notarized affidavit of custody before release to a parent or guardian. (TE)

  5. Solution of matrix equations using sparse techniques

    NASA Technical Reports Server (NTRS)

    Baddourah, Majdi

    1994-01-01

    The solution of large systems of matrix equations is key to the solution of a large number of scientific and engineering problems. This talk describes the sparse matrix solver developed at Langley which can routinely solve in excess of 263,000 equations in 40 seconds on one Cray C-90 processor. It appears that for large scale structural analysis applications, sparse matrix methods have a significant performance advantage over other methods.

  6. Algorithms for Performance, Dependability, and Performability Evaluation using Stochastic Activity Networks

    NASA Technical Reports Server (NTRS)

    Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.

    1997-01-01

    Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling by Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large and solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that tolerate such large state spaces and do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on-the-fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally-efficient technique for Gauss-Seidel based solvers that avoids the need for generating rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
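
    The on-the-fly idea can be sketched with a row generator standing in for a stored matrix; the diagonally dominant test rows below are an illustrative substitute for rows derived from a stochastic activity network.

        def row_on_the_fly(i, n):
            # Row i of a sparse, diagonally dominant test matrix, generated on
            # demand (a stand-in for rows derived from a high-level model)
            row = {i: 4.0}
            if i > 0:
                row[i - 1] = -1.0
            if i + 1 < n:
                row[i + 1] = -1.0
            return row

        def gauss_seidel_matrix_free(n, b, tol=1e-10, max_sweeps=10_000):
            # Gauss-Seidel sweeps in which each row is regenerated at every use,
            # so the matrix itself is never stored -- the essence of on-the-fly
            x = [0.0] * n
            for _ in range(max_sweeps):
                delta = 0.0
                for i in range(n):
                    row = row_on_the_fly(i, n)
                    diag = row.pop(i)
                    s = sum(a_ij * x[j] for j, a_ij in row.items())
                    x_new = (b[i] - s) / diag
                    delta = max(delta, abs(x_new - x[i]))
                    x[i] = x_new
                if delta < tol:
                    break
            return x

        print(gauss_seidel_matrix_free(5, [1.0] * 5))

    The trade is memory for repeated row generation; the modified adaptive Gauss-Seidel method described in the abstract additionally caches the portions of the matrix it touches most, recovering much of the lost speed.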

  7. AI tools in computer based problem solving

    NASA Technical Reports Server (NTRS)

    Beane, Arthur J.

    1988-01-01

    The use of computers to solve value-oriented, deterministic, algorithmic problems has evolved a structured life cycle model of the software process. The symbolic processing techniques used, primarily in research, for solving nondeterministic problems, and those for which an algorithmic solution is unknown, have evolved a different, much less structured model. Traditionally, the two approaches have been used completely independently. With the advent of low-cost, high-performance 32-bit workstations executing the same software as large minicomputers and mainframes, it became possible to begin to merge both models into a single extended model of computer problem solving. The implementation of such an extended model on a VAX family of micro/mini/mainframe systems is described. Examples in both development and deployment of applications involving a blending of AI and traditional techniques are given.

  8. Distributed intrusion detection system based on grid security model

    NASA Astrophysics Data System (ADS)

    Su, Jie; Liu, Yahui

    2008-03-01

    Grid computing has developed rapidly with the development of network technology, and it can solve the problem of large-scale complex computing by sharing large-scale computing resources. In a grid environment, we can realize a distributed and load-balanced intrusion detection system. This paper first discusses the security mechanism in grid computing and the function of PKI/CA in the grid security system, then describes the application of grid computing characteristics in a distributed intrusion detection system (IDS) based on an artificial immune system. Finally, it presents a distributed intrusion detection system based on the grid security model that can reduce processing delay and assure detection rates.

  9. Price schedules coordination for electricity pool markets

    NASA Astrophysics Data System (ADS)

    Legbedji, Alexis Motto

    2002-04-01

    We consider the optimal coordination of a class of mathematical programs with equilibrium constraints, which is formally interpreted as a resource-allocation problem. Many decomposition techniques were proposed to circumvent the difficulty of solving large systems with limited computer resources. The considerable improvement in computer architecture has allowed the solution of large-scale problems with increasing speed. Consequently, interest in decomposition techniques has waned. Nonetheless, there is an important class of applications for which decomposition techniques will still be relevant, among others, distributed systems---the Internet, perhaps, being the most conspicuous example---and competitive economic systems. Conceptually, a competitive economic system is a collection of agents that have similar or different objectives while sharing the same system resources. In theory, constructing a large-scale mathematical program and solving it centrally, using currently available computing power can optimize such systems of agents. In practice, however, because agents are self-interested and not willing to reveal some sensitive corporate data, one cannot solve these kinds of coordination problems by simply maximizing the sum of agent's objective functions with respect to their constraints. An iterative price decomposition or Lagrangian dual method is considered best suited because it can operate with limited information. A price-directed strategy, however, can only work successfully when coordinating or equilibrium prices exist, which is not generally the case when a weak duality is unavoidable. Showing when such prices exist and how to compute them is the main subject of this thesis. Among our results, we show that, if the Lagrangian function of a primal program is additively separable, price schedules coordination may be attained. The prices are Lagrange multipliers, and are also the decision variables of a dual program. In addition, we propose a new form of augmented or nonlinear pricing, which is an example of the use of penalty functions in mathematical programming. Applications are drawn from mathematical programming problems of the form arising in electric power system scheduling under competition.

  10. Solving modal equations of motion with initial conditions using MSC/NASTRAN DMAP. Part 1: Implementing exact mode superposition

    NASA Technical Reports Server (NTRS)

    Abdallah, Ayman A.; Barnett, Alan R.; Ibrahim, Omar M.; Manella, Richard T.

    1993-01-01

    Within the MSC/NASTRAN DMAP (Direct Matrix Abstraction Program) module TRD1, solving physical (coupled) or modal (uncoupled) transient equations of motion is performed using the Newmark-Beta or mode superposition algorithms, respectively. For equations of motion with initial conditions, only the Newmark-Beta integration routine has been available in MSC/NASTRAN solution sequences for solving physical systems and in custom DMAP sequences or alters for solving modal systems. In some cases, one difficulty with using the Newmark-Beta method is that the process of selecting suitable integration time steps for obtaining acceptable results is lengthy. In addition, when very small step sizes are required, a large amount of time can be spent integrating the equations of motion. For certain aerospace applications, a significant time savings can be realized when the equations of motion are solved using an exact integration routine instead of the Newmark-Beta numerical algorithm. In order to solve modal equations of motion with initial conditions and take advantage of efficiencies gained when using uncoupled solution algorithms (like that within TRD1), an exact mode superposition method using MSC/NASTRAN DMAP has been developed and successfully implemented as an enhancement to an existing coupled loads methodology at the NASA Lewis Research Center.
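
    The efficiency argument rests on the fact that a decoupled modal equation q'' + 2ζωq' + ω²q = f(t) can be advanced exactly when f is treated as piecewise constant over each step. The state-transition sketch below illustrates that idea generically; it is an assumption-laden analogue, not the DMAP implementation.

        import numpy as np
        from scipy.linalg import expm

        def exact_modal_step(omega, zeta, dt):
            # One-step propagators for q'' + 2*zeta*omega*q' + omega**2*q = f,
            # exact when f is constant over the step
            A = np.array([[0.0, 1.0], [-omega**2, -2.0 * zeta * omega]])
            Phi = expm(A * dt)                               # homogeneous part
            Gamma = np.linalg.solve(A, Phi - np.eye(2)) @ np.array([0.0, 1.0])
            return Phi, Gamma

        def integrate_mode(omega, zeta, f, q0, qdot0, dt, n_steps):
            Phi, Gamma = exact_modal_step(omega, zeta, dt)   # once per mode
            x = np.array([q0, qdot0], dtype=float)
            for k in range(n_steps):
                x = Phi @ x + Gamma * f(k * dt)              # exact update
            return x

        # Free vibration check: 1 Hz mode, 2% damping, unit initial displacement
        print(integrate_mode(2 * np.pi, 0.02, lambda t: 0.0, 1.0, 0.0, 0.01, 100))

    Because the propagator pair (Phi, Gamma) is computed once per mode, the per-step cost is a tiny matrix-vector multiply independent of step size, which is where the savings over step-size-limited Newmark-Beta integration come from.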

  11. Design and Analysis Techniques for Concurrent Blackboard Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Mcmanus, John William

    1992-01-01

    Blackboard systems are a natural progression of knowledge-based systems into a more powerful problem solving technique. They provide a way for several highly specialized knowledge sources to cooperate to solve large, complex problems. Blackboard systems incorporate the concepts developed by rule-based and expert systems programmers and include the ability to add conventionally coded knowledge sources. The small and specialized knowledge sources are easier to develop and test, and can be hosted on hardware specifically suited to the task that they are solving. The Formal Model for Blackboard Systems was developed to provide a consistent method for describing a blackboard system. A set of blackboard system design tools has been developed and validated for implementing systems that are expressed using the Formal Model. The tools are used to test and refine a proposed blackboard system design before the design is implemented. My research has shown that the level of independence and specialization of the knowledge sources directly affects the performance of blackboard systems. Using the design, simulation, and analysis tools, I developed a concurrent object-oriented blackboard system that is faster, more efficient, and more powerful than existing systems. The use of the design and analysis tools provided the highly specialized and independent knowledge sources required for my concurrent blackboard system to achieve its design goals.

  12. Hierarchical optimal control of large-scale nonlinear chemical processes.

    PubMed

    Ramezani, Mohammad Hossein; Sadati, Nasser

    2009-01-01

    In this paper, a new approach is presented for optimal control of large-scale chemical processes. In this approach, the chemical process is decomposed into smaller sub-systems at the first level, and a coordinator at the second level, for which a two-level hierarchical control strategy is designed. For this purpose, each sub-system in the first level can be solved separately, by using any conventional optimization algorithm. In the second level, the solutions obtained from the first level are coordinated using a new gradient-type strategy, which is updated by the error of the coordination vector. The proposed algorithm is used to solve the optimal control problem of a complex nonlinear chemical stirred tank reactor (CSTR), where its solution is also compared with the ones obtained using the centralized approach. The simulation results show the efficiency and the capability of the proposed hierarchical approach, in finding the optimal solution, over the centralized method.

  13. Solving very large, sparse linear systems on mesh-connected parallel computers

    NASA Technical Reports Server (NTRS)

    Opsahl, Torstein; Reif, John

    1987-01-01

    The implementation of Pan and Reif's Parallel Nested Dissection (PND) algorithm on mesh-connected parallel computers is described. This is the first known algorithm that allows very large, sparse linear systems of equations to be solved efficiently in polylog time using a small number of processors. How the processor bound of PND can be matched to the number of processors available on a given parallel computer, by slowing down the algorithm by constant factors, is described. Also, for the important class of problems where G(A) is a grid graph, a unique memory mapping that reduces the inter-processor communication requirements of PND to those that can be executed on mesh-connected parallel machines is detailed. A description of an implementation on the Goodyear Massively Parallel Processor (MPP), located at Goddard, is given, along with a detailed discussion of data mappings and performance issues.

  14. Progress in protein crystallography.

    PubMed

    Dauter, Zbigniew; Wlodawer, Alexander

    2016-01-01

    Macromolecular crystallography has evolved enormously from the pioneering days, when structures were solved by "wizards" performing all complicated procedures almost by hand. Today, crystal structures of large systems can often be solved very effectively by various powerful automatic programs in days or hours, or even minutes. Such progress is to a large extent coupled to advances in many other fields, such as genetic engineering, computer technology, and the availability of synchrotron beam lines, creating the highly interdisciplinary science of macromolecular crystallography. Due to this unprecedented success, crystallography is often treated as just another analytical method and practiced by researchers interested in the structures of macromolecules but not highly competent in the procedures involved in structure determination. One should therefore take into account that contemporary, highly automatic systems can produce results almost without human intervention, but the resulting structures must be carefully checked and validated before their release into the public domain.

  15. Development of distortion measurement system for large deployable antenna via photogrammetry in vacuum and cryogenic environment

    NASA Astrophysics Data System (ADS)

    Zhang, Pengsong; Jiang, Shanping; Yang, Linhua; Zhang, Bolun

    2018-01-01

    In order to meet the requirement of high-precision thermal distortion measurement for a Φ4.2 m deployable mesh antenna of a satellite in a vacuum and cryogenic environment, a large-scale antenna distortion measurement system for vacuum and cryogenic environments was developed, based on digital close-range photogrammetry and spacecraft space-environment test technology. The Antenna Distortion Measurement System (ADMS) is the first domestically and independently developed thermal distortion measurement system for large antennas, and it solves the problem of non-contact, high-precision distortion measurement of large spacecraft structures in vacuum and cryogenic environments. The measurement accuracy of the ADMS is better than 50 μm/5 m, reaching an internationally advanced level. The experimental results show that the measurement system has great advantages for large structural measurements of spacecraft and broad application prospects in space and other related fields.

  16. Modeling flow at the nozzle of a solid rocket motor

    NASA Technical Reports Server (NTRS)

    Chow, Alan S.; Jin, Kang-Ren

    1991-01-01

    The mechanical behavior of a rocket motor's internal flow field is governed by a system of nonlinear partial differential equations which can be solved numerically. The accuracy and convergence of the solution depend largely on how precisely the sharp gradients can be resolved. An adaptive grid generation scheme is incorporated into the computer algorithm to enhance the capability of the numerical model: the grid is refined as the solution evolves. This scheme significantly improves the methodology for solving flow problems in rocket nozzles by building the grid-refinement step into the computer algorithm.
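
    The refine-as-the-solution-evolves idea can be illustrated in one dimension: insert grid points wherever the numerical gradient of the current solution exceeds a threshold. This toy sketch is only an illustration of the principle; a real nozzle solver couples the refinement to the evolving flow field.

```python
# Toy solution-adaptive gridding: insert points where the numerical gradient
# of the current solution is steep. A tanh profile stands in for a sharp
# flow gradient; real solvers couple refinement to the flow equations.
import numpy as np

def refine(x, u, threshold):
    """Insert a midpoint in every interval whose slope exceeds threshold."""
    new_x = [x[0]]
    for i in range(len(x) - 1):
        slope = abs(u[i + 1] - u[i]) / (x[i + 1] - x[i])
        if slope > threshold:
            new_x.append(0.5 * (x[i] + x[i + 1]))
        new_x.append(x[i + 1])
    return np.array(new_x)

f = lambda x: np.tanh(20 * (x - 0.5))   # stand-in for a sharp flow gradient
x = np.linspace(0.0, 1.0, 11)
for _ in range(3):                       # a few refinement passes
    x = refine(x, f(x), threshold=2.0)
print(f"{x.size} points, smallest cell {np.diff(x).min():.4f}")
```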

  17. Engineering large-scale agent-based systems with consensus

    NASA Technical Reports Server (NTRS)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge based agents (KBA) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  18. Side effects of problem-solving strategies in large-scale nutrition science: towards a diversification of health.

    PubMed

    Penders, Bart; Vos, Rein; Horstman, Klasien

    2009-11-01

    Solving complex problems in large-scale research programmes requires cooperation and division of labour. Simultaneously, large-scale problem solving also gives rise to unintended side effects. Based upon 5 years of researching two large-scale nutrigenomic research programmes, we argue that problems are fragmented in order to be solved. These sub-problems are given priority for practical reasons and in the process of solving them, various changes are introduced in each sub-problem. Combined with additional diversity as a result of interdisciplinarity, this makes reassembling the original and overall goal of the research programme less likely. In the case of nutrigenomics and health, this produces a diversification of health. As a result, the public health goal of contemporary nutrition science is not reached in the large-scale research programmes we studied. Large-scale research programmes are very successful in producing scientific publications and new knowledge; however, in reaching their political goals they often are less successful.

  19. Fostering Solutions: Bringing Brief-Therapy Principles and Practices to the Child Welfare System

    ERIC Educational Resources Information Center

    Flemons, Douglas; Liscio, Michele; Gordon, Arlene Brett; Hibel, James; Gutierrez-Hersh, Annette; Rebholz, Cynthia L.

    2010-01-01

    This article describes a 15-month university-community collaboration that was designed to fast-track children out of foster care. The developers of the project initiated resource-oriented "systems facilitations," allowing wraparound professionals and families to come together in large meetings to solve problems and find solutions. Families also…

  20. Extended capture range for focus-diverse phase retrieval in segmented aperture systems using geometrical optics.

    PubMed

    Jurling, Alden S; Fienup, James R

    2014-03-01

    Extending previous work by Thurman on wavefront sensing for segmented-aperture systems, we developed an algorithm for estimating segment tips and tilts from multiple point spread functions in different defocused planes. We also developed methods for overcoming two common modes for stagnation in nonlinear optimization-based phase retrieval algorithms for segmented systems. We showed that when used together, these methods largely solve the capture range problem in focus-diverse phase retrieval for segmented systems with large tips and tilts. Monte Carlo simulations produced a rate of success better than 98% for the combined approach.

  1. A Chess-Like Game for Teaching Engineering Students to Solve Large System of Simultaneous Linear Equations

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Mohammed, Ahmed Ali; Kadiam, Subhash

    2010-01-01

    Solving large (and sparse) systems of simultaneous linear equations has been (and continues to be) a major challenge for many real-world engineering/science applications [1-2]. For many practical/large-scale problems, the sparse, Symmetric and Positive Definite (SPD) system of linear equations can be conveniently represented in matrix notation as [A] {x} = {b}, where the square coefficient matrix [A] and the Right-Hand-Side (RHS) vector {b} are known. The unknown solution vector {x} can be efficiently solved by the following step-by-step procedure [1-2]: reordering phase, matrix factorization phase, forward solution phase, and backward solution phase. In this research work, a Game-Based Learning (GBL) approach has been developed to help engineering students understand crucial details about the matrix reordering and factorization phases. A "chess-like" game has been developed that can be played by either a single player or two players. Through this "chess-like" open-ended game, the players/learners will not only understand the key concepts involved in reordering algorithms (based on existing algorithms), but also have the opportunity to "discover new algorithms" which are better than existing ones. Implementation of the proposed "chess-like" game for the matrix reordering and factorization phases can be enhanced by FLASH [3] computer environments, where computer simulation with animated human voice, sound effects, visual/graphical/colorful displays of matrix tables, score (or monetary) awards for the best game players, etc. can all be exploited. Preliminary demonstrations of the developed GBL approach can be viewed by anyone who has access to the internet web-site [4]!
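
    The two phases the game teaches can be demonstrated with standard sparse tools. The sketch below, which is illustrative rather than part of the cited work, reorders a sparse SPD-patterned matrix with reverse Cuthill-McKee and compares the fill-in of the lower-triangular factor with and without the reordering (SuperLU's own column ordering is disabled so the effect of the reordering is visible).

```python
# Sketch of the reordering and factorization phases: reorder a sparse SPD
# matrix with reverse Cuthill-McKee, then factorize and compare fill-in.
# The random test matrix and its size are illustrative.
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee
from scipy.sparse.linalg import splu

n = 200
# Random sparse SPD matrix: A = B B^T + n I guarantees positive definiteness.
B = sp.random(n, n, density=0.02, random_state=0, format="csr")
A = (B @ B.T + n * sp.identity(n)).tocsr()

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_rcm = A[perm][:, perm]                 # apply the reordering symmetrically

# Disable SuperLU's own column ordering so the effect of RCM is visible.
fill = lambda M: splu(M.tocsc(), permc_spec="NATURAL").L.nnz
print("fill-in (nnz of L), original ordering:", fill(A))
print("fill-in (nnz of L), RCM ordering:     ", fill(A_rcm))
```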

  2. The Effect of Incorporating Good Learners' Ratings in e-Learning Content-Based Recommender System

    ERIC Educational Resources Information Center

    Ghauth, Khairil Imran; Abdullah, Nor Aniza

    2011-01-01

    One of the anticipated challenges of today's e-learning is to solve the problem of recommending from a large number of learning materials. In this study, we introduce a novel architecture for an e-learning recommender system. More specifically, this paper comprises the following phases i) to propose an e-learning recommender system based on…

  3. Knowledge-based control for robot self-localization

    NASA Technical Reports Server (NTRS)

    Bennett, Bonnie Kathleen Holte

    1993-01-01

    Autonomous robot systems are being proposed for a variety of missions including the Mars rover/sample return mission. Prior to any other mission objectives being met, an autonomous robot must be able to determine its own location. This will be especially challenging because location sensors like GPS, which are available on Earth, will not be useful, nor will INS sensors because their drift is too large. Another approach to self-localization is required. In this paper, we describe a novel approach to localization by applying a problem solving methodology. The term 'problem solving' implies a computational technique based on logical representational and control steps. In this research, these steps are derived from observing experts solving localization problems. The objective is not specifically to simulate human expertise but rather to apply its techniques where appropriate for computational systems. In doing this, we describe a model for solving the problem and a system built on that model, called localization control and logic expert (LOCALE), which is a demonstration of concept for the approach and the model. The results of this work represent the first successful solution to high-level control aspects of the localization problem.

  4. A computational study of the use of an optimization-based method for simulating large multibody systems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petra, C.; Gavrea, B.; Anitescu, M.

    2009-01-01

    The present work aims at comparing the performance of several quadratic programming (QP) solvers for simulating large-scale frictional rigid-body systems. Traditional time-stepping schemes for the simulation of multibody systems are formulated as linear complementarity problems (LCPs) with copositive matrices. Such LCPs are generally solved by means of Lemke-type algorithms, and solvers such as the PATH solver have proved robust. However, for large systems, the PATH solver or any other pivotal algorithm becomes impractical from a computational point of view. The convex relaxation proposed by one of the authors allows the formulation of the integration step as a QP, for which a wide variety of state-of-the-art solvers are available. In what follows we report the results obtained solving that subproblem with the QP solvers MOSEK, OOQP, TRON, and BLMVM. OOQP is presented with both the symmetric indefinite solver MA27 and our Cholesky reformulation using the CHOLMOD package. We investigate computational performance and address the correctness of the results from a modeling point of view. We conclude that the OOQP solver, particularly with the CHOLMOD linear algebra solver, has predictable performance and memory use patterns and is far more competitive for these problems than the other solvers.

  5. KIPSE1: A Knowledge-based Interactive Problem Solving Environment for data estimation and pattern classification

    NASA Technical Reports Server (NTRS)

    Han, Chia Yung; Wan, Liqun; Wee, William G.

    1990-01-01

    A knowledge-based interactive problem solving environment called KIPSE1 is presented. KIPSE1 is built on a commercial expert system shell, the KEE system. This environment gives the user the capability to carry out exploratory data analysis and pattern classification tasks. A good solution often consists of a sequence of steps, with a set of methods used at each step. In KIPSE1, a solution is represented in the form of a decision tree, and each node of the solution tree represents a partial solution to the problem. Many methodologies are provided at each node so that the user can interactively select the method and data sets to test and subsequently examine the results. Users can also make decisions at various stages of problem solving to subdivide the problem into smaller subproblems, so that a large problem can be handled and a better solution found.

  6. An Assessment of the Effect of Collaborative Groups on Students' Problem-Solving Strategies and Abilities

    ERIC Educational Resources Information Center

    Cooper, Melanie M.; Cox, Charles T., Jr.; Nammouz, Minory; Case, Edward; Stevens, Ronald

    2008-01-01

    Improving students' problem-solving skills is a major goal for most science educators. While a large body of research on problem solving exists, assessment of meaningful problem solving is very difficult, particularly for courses with large numbers of students in which one-on-one interactions are not feasible. We have used a suite of software…

  7. CoCoNUT: an efficient system for the comparison and analysis of genomes

    PubMed Central

    2008-01-01

    Background Comparative genomics is the analysis and comparison of genomes from different species. This area of research is driven by the large number of sequenced genomes and heavily relies on efficient algorithms and software to perform pairwise and multiple genome comparisons. Results Most of the software tools available are tailored for one specific task. In contrast, we have developed a novel system CoCoNUT (Computational Comparative geNomics Utility Toolkit) that allows solving several different tasks in a unified framework: (1) finding regions of high similarity among multiple genomic sequences and aligning them, (2) comparing two draft or multi-chromosomal genomes, (3) locating large segmental duplications in large genomic sequences, and (4) mapping cDNA/EST to genomic sequences. Conclusion CoCoNUT is competitive with other software tools w.r.t. the quality of the results. The use of state of the art algorithms and data structures allows CoCoNUT to solve comparative genomics tasks more efficiently than previous tools. With the improved user interface (including an interactive visualization component), CoCoNUT provides a unified, versatile, and easy-to-use software tool for large scale studies in comparative genomics. PMID:19014477

  8. A preconditioner for the finite element computation of incompressible, nonlinear elastic deformations

    NASA Astrophysics Data System (ADS)

    Whiteley, J. P.

    2017-10-01

    Large, incompressible elastic deformations are governed by a system of nonlinear partial differential equations. The finite element discretisation of these partial differential equations yields a system of nonlinear algebraic equations that are usually solved using Newton's method. On each iteration of Newton's method, a linear system must be solved. We exploit the structure of the Jacobian matrix to propose a preconditioner, comprising two steps. The first step is the solution of a relatively small, symmetric, positive definite linear system using the preconditioned conjugate gradient method. This is followed by a small number of multigrid V-cycles for a larger linear system. Through the use of exemplar elastic deformations, the preconditioner is demonstrated to facilitate the iterative solution of the linear systems arising. The number of GMRES iterations required has only a very weak dependence on the number of degrees of freedom of the linear systems.
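
    The mechanics of a preconditioned Newton-Krylov iteration of this kind are sketched below on a toy 1D nonlinear problem. The paper's two-step preconditioner (a PCG sub-solve followed by multigrid V-cycles) is tied to its block structure, so an incomplete-LU factor stands in here purely to show where a preconditioner plugs into GMRES inside Newton's method.

```python
# Newton's method with preconditioned GMRES on each iteration. An ILU factor
# is an illustrative stand-in for the paper's two-step preconditioner; the
# toy residual F(u) = -u'' + u^3 - 1 is not the elasticity system.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

n = 100
lap = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")

def F(u):            # toy nonlinear residual
    return lap @ u + u**3 - 1.0

def jacobian(u):     # analytic Jacobian of F
    return (lap + sp.diags(3.0 * u**2)).tocsc()

u = np.zeros(n)
for k in range(20):
    r = F(u)
    if np.linalg.norm(r) < 1e-10:
        break
    J = jacobian(u)
    ilu = spilu(J)                               # stand-in preconditioner
    M = LinearOperator(J.shape, ilu.solve)       # wrap it for GMRES
    du, info = gmres(J, -r, M=M)                 # preconditioned linear solve
    u += du
print(f"stopped after {k} Newton steps, ||F(u)|| = {np.linalg.norm(F(u)):.2e}")
```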

  9. Parallel Numerical Simulations of Water Reservoirs

    NASA Astrophysics Data System (ADS)

    Torres, Pedro; Mangiavacchi, Norberto

    2010-11-01

    The study of water flow and scalar transport in water reservoirs is important for determining water quality during the initial stages of reservoir filling and during the life of the reservoir. For this purpose, a parallel 2D finite element code for solving the incompressible Navier-Stokes equations coupled with scalar transport was implemented using the message-passing programming model, in order to perform simulations of hydropower water reservoirs in a computer cluster environment. The spatial discretization is based on the MINI element, which satisfies the Babuska-Brezzi (BB) condition and thus provides sufficient conditions for a stable mixed formulation. All the distributed data structures needed in the different stages of the code, such as preprocessing, solving and post processing, were implemented using the PETSc library. The resulting linear systems for the velocity and pressure fields were solved using the projection method, implemented by an approximate block LU factorization. In order to increase the parallel performance in the solution of the linear systems, we employ the static condensation method, solving the intermediate velocity at vertex and centroid nodes separately. We compare performance results of the static condensation method with the approach of solving the complete system. In our tests the static condensation method shows better performance for large problems, at the cost of increased memory usage. Performance results for other intensive parts of the code in a computer cluster are also presented.

  10. Six-Degree-of-Freedom Trajectory Optimization Utilizing a Two-Timescale Collocation Architecture

    NASA Technical Reports Server (NTRS)

    Desai, Prasun N.; Conway, Bruce A.

    2005-01-01

    Six-degree-of-freedom (6DOF) trajectory optimization of a reentry vehicle is solved using a two-timescale collocation methodology. This class of 6DOF trajectory problems is characterized by two distinct timescales in the governing equations, where a subset of the states have high-frequency dynamics (the rotational equations of motion) while the remaining states (the translational equations of motion) vary comparatively slowly. With conventional collocation methods, the 6DOF problem size becomes extraordinarily large and difficult to solve. Utilizing the two-timescale collocation architecture, the problem size is reduced significantly. The converged solution shows a realistic landing profile and captures the appropriate high-frequency rotational dynamics. A large reduction in the overall problem size (by 55%) is attained with the two-timescale architecture as compared to the conventional single-timescale collocation method. Consequently, optimal 6DOF trajectory problems can now be solved efficiently using collocation, which was not previously possible for systems with two distinct timescales in the governing states.

  11. Linear solver performance in elastoplastic problem solution on GPU cluster

    NASA Astrophysics Data System (ADS)

    Khalevitsky, Yu. V.; Konovalov, A. V.; Burmasheva, N. V.; Partin, A. S.

    2017-12-01

    Applying the finite element method to severe plastic deformation problems involves solving linear equation systems. While the solution procedure is relatively hard to parallelize and computationally intensive by itself, a long series of large-scale systems needs to be solved for each problem. When dealing with fine computational meshes, such as in simulations of three-dimensional metal matrix composite microvolume deformation, tens or hundreds of hours may be needed to complete the whole solution procedure, even using modern supercomputers. In general, one of the preconditioned Krylov subspace methods is used in a linear solver for such problems. The convergence of these methods depends strongly on the spectrum of the problem's stiffness matrix. In order to choose the appropriate method, a series of computational experiments is used, and different methods may be preferable on different computational systems for the same problem. In this paper we present experimental data obtained by solving linear equation systems from an elastoplastic problem on a GPU cluster. The data can be used to substantiate the choice of the appropriate method for a linear solver to use in severe plastic deformation simulations.

  12. Implementing Parquet equations using HPX

    NASA Astrophysics Data System (ADS)

    Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark

    A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for larger simulations of the parquet approximation for a system with impurities. The relevance of the parquet equations depends upon the ability to solve systems which require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from the stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and the limitations they impose vanish, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.

  13. Quantum algorithm for energy matching in hard optimization problems

    NASA Astrophysics Data System (ADS)

    Baldwin, C. L.; Laumann, C. R.

    2018-06-01

    We consider the ability of local quantum dynamics to solve the "energy-matching" problem: given an instance of a classical optimization problem and a low-energy state, find another macroscopically distinct low-energy state. Energy matching is difficult in rugged optimization landscapes, as the given state provides little information about the distant topography. Here, we show that the introduction of quantum dynamics can provide a speedup over classical algorithms in a large class of hard optimization problems. Tunneling allows the system to explore the optimization landscape while approximately conserving the classical energy, even in the presence of large barriers. Specifically, we study energy matching in the random p -spin model of spin-glass theory. Using perturbation theory and exact diagonalization, we show that introducing a transverse field leads to three sharp dynamical phases, only one of which solves the matching problem: (1) a small-field "trapped" phase, in which tunneling is too weak for the system to escape the vicinity of the initial state; (2) a large-field "excited" phase, in which the field excites the system into high-energy states, effectively forgetting the initial energy; and (3) the intermediate "tunneling" phase, in which the system succeeds at energy matching. The rate at which distant states are found in the tunneling phase, although exponentially slow in system size, is exponentially faster than classical search algorithms.

  14. Numerical Optimization Algorithms and Software for Systems Biology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saunders, Michael

    2013-02-02

    The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.

  15. Robust scalable stabilisability conditions for large-scale heterogeneous multi-agent systems with uncertain nonlinear interactions: towards a distributed computing architecture

    NASA Astrophysics Data System (ADS)

    Manfredi, Sabato

    2016-06-01

    Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environment monitoring, and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and they require increasingly computationally demanding methods for analysis and control design as the network size and the complexity of the node dynamics and interactions grow. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem for large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with MATLAB toolboxes. Stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves on some of the existing LMI-based results on MASs by overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the network nodes, increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Toomey, Bridget

    Evolving power systems with increasing levels of stochasticity call for a need to solve optimal power flow problems with large quantities of random variables. Weather forecasts, electricity prices, and shifting load patterns introduce higher levels of uncertainty and can yield optimization problems that are difficult to solve in an efficient manner. Solution methods for single chance constraints in optimal power flow problems have been considered in the literature, ensuring single constraints are satisfied with a prescribed probability; however, joint chance constraints, ensuring multiple constraints are simultaneously satisfied, have predominantly been solved via scenario-based approaches or by utilizing Boole's inequality as an upper bound. In this paper, joint chance constraints are used to solve an AC optimal power flow problem while preventing overvoltages in distribution grids under high penetrations of photovoltaic systems. A tighter version of Boole's inequality is derived and used to provide a new upper bound on the joint chance constraint, and simulation results are shown demonstrating the benefit of the proposed upper bound. The new framework allows for a less conservative and more computationally efficient solution to considering joint chance constraints, specifically regarding preventing overvoltages.
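
    Why Boole's inequality is conservative for joint chance constraints, and hence worth tightening, can be seen with a small Monte Carlo experiment: the union bound sums the individual violation probabilities and over-counts their overlap. The correlated toy "voltages" below stand in for the paper's distribution-grid quantities.

```python
# Boole's inequality as a bound on a joint chance constraint: the union
# bound sums per-constraint violation probabilities and over-counts their
# overlap. Correlated toy voltages stand in for the paper's grid model.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_buses, limit = 100_000, 5, 1.05

# Correlated "voltage" samples: the shared component models common PV output.
shared = rng.normal(1.0, 0.02, size=(n_samples, 1))
local = rng.normal(0.0, 0.01, size=(n_samples, n_buses))
v = shared + local

violations = v > limit                    # per-bus constraint violations
joint = violations.any(axis=1).mean()     # P(at least one overvoltage)
boole = violations.mean(axis=0).sum()     # Boole's upper bound

print(f"joint violation probability: {joint:.4f}")
print(f"Boole's inequality bound:    {boole:.4f}")
```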

  17. A parallel solver for huge dense linear systems

    NASA Astrophysics Data System (ADS)

    Badia, J. M.; Movilla, J. L.; Climente, J. I.; Castillo, M.; Marqués, M.; Mayo, R.; Quintana-Ortí, E. S.; Planelles, J.

    2011-11-01

    HDSS (Huge Dense Linear System Solver) is a Fortran Application Programming Interface (API) to facilitate the parallel solution of very large dense systems to scientists and engineers. The API makes use of parallelism to yield an efficient solution of the systems on a wide range of parallel platforms, from clusters of processors to massively parallel multiprocessors. It exploits out-of-core strategies to leverage the secondary memory in order to solve huge linear systems of order 100,000. The API is based on the parallel linear algebra library PLAPACK, and on its Out-Of-Core (OOC) extension POOCLAPACK. Both PLAPACK and POOCLAPACK use the Message Passing Interface (MPI) as the communication layer and BLAS to perform the local matrix operations. The API provides a friendly interface to the users, hiding almost all the technical aspects related to the parallel execution of the code and the use of the secondary memory to solve the systems. In particular, the API can automatically select the best way to store and solve the systems, depending on the dimension of the system, the number of processes and the main memory of the platform. Experimental results on several parallel platforms report high performance, reaching more than 1 TFLOP with 64 cores to solve a system with more than 200,000 equations and more than 10,000 right-hand side vectors.
    New version program summary:
    Program title: Huge Dense System Solver (HDSS)
    Catalogue identifier: AEHU_v1_1
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHU_v1_1.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 87 062
    No. of bytes in distributed program, including test data, etc.: 1 069 110
    Distribution format: tar.gz
    Programming language: Fortran90, C
    Computer: Parallel architectures: multiprocessors, computer clusters
    Operating system: Linux/Unix
    Has the code been vectorized or parallelized?: Yes, includes MPI primitives
    RAM: Tested for up to 190 GB
    Classification: 6.5
    External routines: MPI (http://www.mpi-forum.org/), BLAS (http://www.netlib.org/blas/), PLAPACK (http://www.cs.utexas.edu/~plapack/), POOCLAPACK (ftp://ftp.cs.utexas.edu/pub/rvdg/PLAPACK/pooclapack.ps) (code for PLAPACK and POOCLAPACK is included in the distribution)
    Catalogue identifier of previous version: AEHU_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 533
    Does the new version supersede the previous version?: Yes
    Nature of problem: Huge scale dense systems of linear equations, Ax = B, beyond standard LAPACK capabilities.
    Solution method: The linear systems are solved by means of parallelized routines based on the LU factorization, using efficient secondary storage algorithms when the available main memory is insufficient.
    Reasons for new version: In many applications we need to guarantee a high accuracy in the solution of very large linear systems, and we can do so by using double-precision arithmetic.
    Summary of revisions: Version 1.1 can be used to solve linear systems using double-precision arithmetic. New version of the initialization routine: the user can choose the kind of arithmetic and the values of several parameters of the environment.
    Running time: About 5 hours to solve a system with more than 200,000 equations and more than 10,000 right-hand side vectors using double-precision arithmetic on an eight-node commodity cluster with a total of 64 Intel cores.

  18. Robust large-scale parallel nonlinear solvers for simulations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step from a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes twice as long to solve as Newton-GMRES on general problems because it solves two linear systems at each iteration. In this paper, we discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.
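
    A miniature version of the report's comparison can be run with standard solvers: the same discretized nonlinear system is solved with a Jacobian-free Newton-Krylov method and with Broyden's method, which never forms a Jacobian. The test problem is illustrative, not one of the Sandia applications.

```python
# Newton-Krylov vs. Broyden's method on the same nonlinear system. The
# 1D reaction-diffusion test problem is illustrative only.
import numpy as np
from scipy.optimize import newton_krylov, broyden1

def residual(u):
    # Discrete -u'' = exp(u) on a uniform grid with Dirichlet ends u = 0.
    h = 1.0 / (len(u) - 1)
    r = np.empty_like(u)
    r[1:-1] = u[:-2] - 2 * u[1:-1] + u[2:] + h**2 * np.exp(u[1:-1])
    r[0], r[-1] = u[0], u[-1]
    return r

u0 = np.zeros(50)
u_nk = newton_krylov(residual, u0, f_tol=1e-8)   # Jacobian-free Newton-Krylov
u_br = broyden1(residual, u0, f_tol=1e-8)        # Broyden: approximate Jacobian
print("max difference between the two solutions:", np.abs(u_nk - u_br).max())
```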

  19. Large-N -approximated field theory for multipartite entanglement

    NASA Astrophysics Data System (ADS)

    Facchi, P.; Florio, G.; Parisi, G.; Pascazio, S.; Scardicchio, A.

    2015-12-01

    We try to characterize the statistics of multipartite entanglement of the random states of an n-qubit system. Unable to solve the problem exactly, we generalize it, replacing complex numbers with real vectors with N_c components (the original problem is recovered for N_c = 2). Studying the leading diagrams in the large-N_c approximation, we unearth the presence of a phase transition and, in an explicit example, show that the so-called entanglement frustration disappears in the large-N_c limit.

  20. Terra II--A Spaceship Earth Simulation for the Middle Grades

    ERIC Educational Resources Information Center

    Mastrude, Peggy

    1972-01-01

    The unit of study consists of four lessons based on the concept that the earth is a large system made up of many small systems (air, food, water, man, etc.). Complete procedures are included to study the environment, examine developing countries, determine interaction between peoples and nations. The problem solving excercise is an inquiry…

  1. LANZ: Software solving the large sparse symmetric generalized eigenproblem

    NASA Technical Reports Server (NTRS)

    Jones, Mark T.; Patrick, Merrell L.

    1990-01-01

    A package, LANZ, for solving the large, sparse, symmetric generalized eigenproblem is described. The package was tested on four different architectures: Convex 200, CRAY Y-MP, Sun-3, and Sun-4. The package uses Lanczos' method and is based on recent research into solving the generalized eigenproblem.
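
    A present-day analogue of what LANZ computes is sketched below: a few eigenpairs of a large, sparse, symmetric generalized problem K x = λ M x, obtained with a shift-invert Lanczos-type iteration. The stiffness- and mass-like matrices are illustrative.

```python
# A few eigenpairs of the sparse symmetric generalized eigenproblem
# K x = lambda M x via a shift-invert Lanczos-type iteration (ARPACK).
# The 1D stiffness and mass matrices are illustrative stand-ins.
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 2000
K = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")        # stiffness-like
M = sp.diags([1, 4, 1], [-1, 0, 1], shape=(n, n), format="csc") / 6.0    # mass-like, SPD

# Smallest eigenvalues via shift-invert about sigma = 0.
vals, vecs = eigsh(K, k=4, M=M, sigma=0.0)
print("four smallest generalized eigenvalues:", vals)
```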

  2. Improved Monkey-King Genetic Algorithm for Solving Large Winner Determination in Combinatorial Auction

    NASA Astrophysics Data System (ADS)

    Li, Yuzhong

    Using a genetic algorithm (GA) to solve the winner determination problem (WDP) with large numbers of bids and items, run under different distributions, is difficult: the search space is large, the constraints are complex, and infeasible solutions are easily produced, all of which affect the efficiency and quality of the algorithm. This paper presents an improved Monkey-King Genetic Algorithm (MKGA), including three operators, preprocessing, bid insertion, and exchange recombination, together with a Monkey-King elite preservation strategy. Experimental results show that the improved MKGA is better than the SGA in population size and computation. Problems that the traditional branch-and-bound algorithm can hardly solve, the improved MKGA can solve with better results.

  3. High-resolution combined global gravity field modelling: Solving large kite systems using distributed computational algorithms

    NASA Astrophysics Data System (ADS)

    Zingerle, Philipp; Fecher, Thomas; Pail, Roland; Gruber, Thomas

    2016-04-01

    One of the major obstacles in modern global gravity field modelling is the seamless combination of lower-degree inhomogeneous gravity field observations (e.g. data from satellite missions) with (very) high-degree homogeneous information (e.g. gridded and reduced gravity anomalies, beyond d/o 1000). Current approaches mostly combine such data only on the basis of the coefficients: a spherical harmonic analysis is first done independently for each observation class (resp. model), solving dense normal equations (NEQs) for the inhomogeneous model and block-diagonal NEQs for the homogeneous one. Such methods are clearly unable to identify or eliminate effects such as spectral leakage, which arises from the band limitation of the models and the non-orthogonality of the spherical harmonic base functions. To counter such problems, a combination of both models at the NEQ level is desirable, and theoretically this can be achieved using NEQ stacking. Because of the higher maximum degree of the homogeneous model, a reordering of the coefficients is needed, which inevitably destroys the block-diagonal structure of the corresponding NEQ matrix and therefore also its simple sparsity. Hence, a special coefficient ordering is needed to create a new, favourable sparsity pattern that admits an efficient solution method. Such a pattern is found in the so-called kite structure (Bosch, 1993), obtained when applying the kite ordering to the stacked NEQ matrix. In a first step it is shown what is needed to attain the kite (NEQ) system, how to solve it efficiently, and how to calculate the corresponding variance information from it. Further, because of the massive computational workload when operating on large kite systems (theoretically possible up to about max. d/o 100,000), the main emphasis is placed on the presentation of special distributed algorithms that can solve those systems in parallel on an arbitrary number of processes and are therefore suitable for supercomputers (such as SuperMUC). Finally, some detailed problems are shown that occur when dealing with high-degree spherical harmonic base functions (mostly due to instabilities of the Legendre polynomials), together with an appropriate solution for each.

  4. Iterative-method performance evaluation for multiple vectors associated with a large-scale sparse matrix

    NASA Astrophysics Data System (ADS)

    Imamura, Seigo; Ono, Kenji; Yokokawa, Mitsuo

    2016-07-01

    Ensemble computing, which is an instance of capacity computing, is an effective computing scenario for exascale parallel supercomputers. In ensemble computing, there are multiple linear systems associated with a common coefficient matrix. We improve the performance of iterative solvers for multiple vectors by solving them at the same time, that is, by solving for the product of the matrices. We implemented several iterative methods and compared their performance. The maximum performance on Sparc VIIIfx was 7.6 times higher than that of a naïve implementation. Finally, to deal with the different convergence processes of linear systems, we introduced a control method to eliminate the calculation of already converged vectors.
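
    The core idea, iterating on all right-hand sides at once and eliminating already-converged vectors from the active set, can be sketched with a simple stationary iteration (the paper uses Krylov methods; Jacobi on a diagonally dominant matrix keeps the sketch short). Each sweep then costs one matrix-times-block product regardless of how many vectors remain active.

```python
# Iterate on many right-hand sides at once and drop columns from the active
# set once they converge. Jacobi iteration on a diagonally dominant matrix
# is an illustrative stand-in for the paper's Krylov solvers.
import numpy as np

rng = np.random.default_rng(2)
n, m = 500, 32
A = rng.standard_normal((n, n)) * 0.1
np.fill_diagonal(A, n * 0.1)               # make A strictly diagonally dominant
B = rng.standard_normal((n, m))

D = np.diag(A).copy()
X = np.zeros((n, m))
active = np.arange(m)                      # indices of unconverged columns
for it in range(200):
    R = B[:, active] - A @ X[:, active]    # block residual: one GEMM per sweep
    X[:, active] += R / D[:, None]         # Jacobi update for all active RHS
    done = np.linalg.norm(R, axis=0) < 1e-10
    active = active[~done]                 # eliminate already-converged vectors
    if active.size == 0:
        break
print(f"{m - active.size} of {m} right-hand sides converged after {it + 1} sweeps")
```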

  5. Two-Level Hierarchical FEM Method for Modeling Passive Microwave Devices

    NASA Astrophysics Data System (ADS)

    Polstyanko, Sergey V.; Lee, Jin-Fa

    1998-03-01

    In recent years multigrid methods have been proven to be very efficient for solving large systems of linear equations resulting from the discretization of positive definite differential equations by either the finite difference method or the h-version of the finite element method. In this paper an iterative method of the multiple level type is proposed for solving systems of algebraic equations which arise from the p-version of the finite element analysis applied to indefinite problems. A two-level V-cycle algorithm has been implemented and studied with a Gauss-Seidel iterative scheme used as a smoother. The convergence of the method has been investigated, and numerical results for a number of numerical examples are presented.
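
    A two-grid cycle of the kind studied in the paper is sketched below for a 1D Poisson model problem: Gauss-Seidel pre-smoothing, full-weighting restriction of the residual, an exact coarse solve, linear interpolation of the correction, and post-smoothing. The model problem is illustrative; the paper applies the idea to the p-version FEM for indefinite problems.

```python
# Two-grid cycle for 1D Poisson: Gauss-Seidel smoothing, full-weighting
# restriction, exact coarse solve, linear interpolation, post-smoothing.
# Illustrative model problem; not the paper's p-version FEM system.
import numpy as np

def gauss_seidel(u, f, sweeps):
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + f[i])
    return u

def residual(u, f):
    r = f - (2 * u - np.roll(u, 1) - np.roll(u, -1))
    r[0] = r[-1] = 0.0                                 # Dirichlet boundaries
    return r

def two_grid(u, f):
    u = gauss_seidel(u, f, sweeps=3)                   # pre-smooth
    r = residual(u, f)
    m = (len(u) + 1) // 2
    rc = np.zeros(m)                                   # full weighting; the
    rc[1:-1] = r[1:-2:2] + 2 * r[2:-1:2] + r[3::2]     # factor 4 rescales to 2h
    T = 2 * np.eye(m - 2) - np.eye(m - 2, k=1) - np.eye(m - 2, k=-1)
    ec = np.zeros(m)
    ec[1:-1] = np.linalg.solve(T, rc[1:-1])            # exact coarse solve
    u += np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)
    return gauss_seidel(u, f, sweeps=3)                # post-smooth

n = 129
f = np.full(n, 1.0 / (n - 1) ** 2); f[0] = f[-1] = 0.0  # -u'' = 1, u(0)=u(1)=0
u = np.zeros(n)
for cycle in range(8):
    u = two_grid(u, f)
print("residual norm after 8 two-grid cycles:", np.linalg.norm(residual(u, f)))
```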

  6. Engineering neural systems for high-level problem solving.

    PubMed

    Sylvester, Jared; Reggia, James

    2016-07-01

    There is a long-standing, sometimes contentious debate in AI concerning the relative merits of a symbolic, top-down approach vs. a neural, bottom-up approach to engineering intelligent machine behaviors. While neurocomputational methods excel at lower-level cognitive tasks (incremental learning for pattern classification, low-level sensorimotor control, fault tolerance and processing of noisy data, etc.), they are largely non-competitive with top-down symbolic methods for tasks involving high-level cognitive problem solving (goal-directed reasoning, metacognition, planning, etc.). Here we take a step towards addressing this limitation by developing a purely neural framework named galis. Our goal in this work is to integrate top-down (non-symbolic) control of a neural network system with more traditional bottom-up neural computations. galis is based on attractor networks that can be "programmed" with temporal sequences of hand-crafted instructions that control problem solving by gating the activity retention of, communication between, and learning done by other neural networks. We demonstrate the effectiveness of this approach by showing that it can be applied successfully to solve sequential card matching problems, using both human performance and a top-down symbolic algorithm as experimental controls. Solving this kind of problem makes use of top-down attention control and the binding together of visual features in ways that are easy for symbolic AI systems but not for neural networks to achieve. Our model can not only be instructed on how to solve card matching problems successfully, but its performance also qualitatively (and sometimes quantitatively) matches the performance of both human subjects that we had perform the same task and the top-down symbolic algorithm that we used as an experimental control. We conclude that the core principles underlying the galis framework provide a promising approach to engineering purely neurocomputational systems for problem-solving tasks that in people require higher-level cognitive functions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. A Navier-Stokes Chimera Code on the Connection Machine CM-5: Design and Performance

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1994-01-01

    We have implemented a three-dimensional compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the 'chimera' approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. A parallel machine like the CM-5 is well-suited for finite-difference methods on structured grids. The regular pattern of connections of a structured mesh maps well onto the architecture of the machine. So the first design choice, finite differences on a structured mesh, is natural. We use centered differences in space, with added artificial dissipation terms. When numerically solving the Navier-Stokes equations, there are liable to be some mesh cells near a solid body that are small in at least one direction. This mesh cell geometry can impose a very severe CFL (Courant-Friedrichs-Lewy) condition on the time step for explicit time-stepping methods. Thus, though explicit time-stepping is well-suited to the architecture of the machine, we have adopted implicit time-stepping. We have further taken the approximate factorization approach. This creates the need to solve large banded linear systems and creates the first possible barrier to an efficient algorithm. To overcome this first possible barrier we have considered two options. The first is just to solve the banded linear systems with data spread over the whole machine, using whatever fast method is available. This option is adequate for solving scalar tridiagonal systems, but for scalar pentadiagonal or block tridiagonal systems it is somewhat slower than desired. The second option is to 'transpose' the flow and geometry variables as part of the time-stepping process: Start with x-lines of data in-processor. Form explicit terms in x, then transpose so y-lines of data are in-processor. Form explicit terms in y, then transpose so z-lines are in processor. Form explicit terms in z, then solve linear systems in the z-direction. Transpose to the y-direction, then solve linear systems in the y-direction. Finally transpose to the x direction and solve linear systems in the x-direction. This strategy avoids inter-processor communication when differencing and solving linear systems, but requires a large amount of communication when doing the transposes. The transpose method is more efficient than the non-transpose strategy when dealing with scalar pentadiagonal or block tridiagonal systems. For handling geometrically complex problems the chimera strategy was adopted. For multiple zone cases we compute on each zone sequentially (using the whole parallel machine), then send the chimera interpolation data to a distributed data structure (array) laid out over the whole machine. This information transfer implies an irregular communication pattern, and is the second possible barrier to an efficient algorithm. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran. We make use of the Connection Machine Scientific Software Library (CMSSL) for the linear solver and array transpose operations.
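
    The transpose strategy can be sketched in miniature with a single NumPy array standing in for the distributed field: before each directional sweep, the solve axis is moved to the front (the "transpose"), so every line system is solved with data contiguous along that axis. The tridiagonal operator below is an illustrative stand-in for the factored implicit operators; on the CM-5 the transposes were the inter-processor communication steps.

```python
# The transpose strategy in miniature: move the solve axis to the front
# before each directional sweep, then solve all lines in that direction as
# one batched banded solve. The tridiagonal operator is illustrative.
import numpy as np
from scipy.linalg import solve_banded

nx, ny, nz = 32, 24, 16
rhs = np.random.default_rng(3).standard_normal((nx, ny, nz))

def make_ab(n):
    # Diagonally dominant tridiagonal operator in solve_banded's band format.
    ab = np.zeros((3, n))
    ab[0, 1:] = -1.0        # superdiagonal
    ab[1, :] = 4.0          # main diagonal
    ab[2, :-1] = -1.0       # subdiagonal
    return ab

def line_solve(field, axis, ab):
    """Solve the banded system along `axis` for every line of `field`."""
    moved = np.moveaxis(field, axis, 0)      # the "transpose": solve axis first
    n = moved.shape[0]
    flat = moved.reshape(n, -1)              # each column is one line
    out = solve_banded((1, 1), ab, flat)     # batched tridiagonal solve
    return np.moveaxis(out.reshape(moved.shape), 0, axis)

u = rhs
for axis in (0, 1, 2):                       # x, y, then z sweeps
    u = line_solve(u, axis, make_ab(u.shape[axis]))
print("swept all three directions; result shape:", u.shape)
```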

  8. A knowledge-based system with learning for computer communication network design

    NASA Technical Reports Server (NTRS)

    Pierre, Samuel; Hoang, Hai Hoc; Tropper-Hausen, Evelyne

    1990-01-01

    Computer communication network design is well known to be complex and hard. For that reason, the most effective methods used to solve it are heuristic. Weaknesses of these techniques are listed and a new approach based on artificial intelligence for solving this problem is presented. This approach is particularly recommended for large packet-switched communication networks, in the sense that it permits a high degree of reliability and offers a very flexible environment dealing with many relevant design parameters such as link cost, link capacity, and message delay.

  9. A Block-LU Update for Large-Scale Linear Programming

    DTIC Science & Technology

    1990-01-01

    linear programming problems. Results are given from runs on the Cray Y-MP. 1. Introduction. We wish to use the simplex method [Dan63] to solve the... standard linear program: minimize c^T x subject to Ax = b, l <= x <= u, where A is an m by n matrix and c, x, l, u, and b are of appropriate dimension. The simplex... the identity matrix. The basis is used to solve for the search direction y and the dual variables pi in the following linear systems: B_k y = a_q (1.2) and

  10. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.

  11. Incomplete Gröbner basis as a preconditioner for polynomial systems

    NASA Astrophysics Data System (ADS)

    Sun, Yang; Tao, Yu-Hui; Bai, Feng-Shan

    2009-04-01

    Preconditioning plays a critical role in the numerical methods for large and sparse linear systems; the same is true for nonlinear algebraic systems. In this paper an incomplete Gröbner basis (IGB) is proposed as a preconditioner of homotopy methods for polynomial systems of equations, which transforms a deficient system into a system with the same finite solutions but smaller degree. The reduced system can thus be solved faster. Numerical results show the efficiency of the preconditioner.

  12. The Daily Operational Brief: Fostering Daily Readiness, Care Coordination, and Problem-Solving Accountability in a Large Pediatric Health Care System.

    PubMed

    Donnelly, Lane F; Basta, Kathryne C; Dykes, Anne M; Zhang, Wei; Shook, Joan E

    2018-01-01

    At a pediatric health system, the Daily Operational Brief (DOB) was updated in 2015 after three years of operation. Quality and safety metrics, the patient volume and staffing assessment, and the readiness assessment are all presented. In addition, in the problem-solving accountability system, problematic issues are categorized as Quick Hits or Complex Issues. Walk-the-Wall, a biweekly meeting attended by hospital senior administrative leadership and quality and safety leaders, is conducted to chart current progress on Complex Issues. The DOB provides a daily standardized approach to evaluate readiness to provide care to current patients and improvement in the care to be provided for future patients. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.

  13. [Dual process in large number estimation under uncertainty].

    PubMed

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such deliberative process by System 2 on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  14. A Study of Novice Systems Analysis Problem Solving Behaviors Using Protocol Analysis

    DTIC Science & Technology

    1992-09-01

    conducted. Each subject was given the same task to perform. The task involved a case study (Appendix B) of a utility company’s customer order processing system...behavior (Ramesh, 1989). The task was to design a customer order processing system that utilized a centralized telephone answering service center...of the utility company’s customer order processing system that was developed based on information obtained by a large systems consulting firm during

  15. Large-scale computation of incompressible viscous flow by least-squares finite element method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, T. L.; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to large-scale/three-dimensional steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive definite algebraic system which can be solved effectively by simple iterative methods. The first-order velocity-Bernoulli function-vorticity formulation for incompressible viscous flows is also tested. For three-dimensional cases, an additional compatibility equation, i.e., that the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. A simple substitution of Newton's method is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi preconditioned conjugate gradient method, which avoids formation of either element or global matrices (matrix-free) to achieve high efficiency. To show the validity of this scheme for large-scale computation, we give numerical results for the 2D driven cavity problem at Re = 10000 with 408 x 400 bilinear elements. The flow in a 3D cavity is calculated at Re = 100, 400, and 1,000 with 50 x 50 x 50 trilinear elements. The Taylor-Goertler-like vortices are observed for Re = 1,000.

  16. Determination of aerodynamic sensitivity coefficients based on the three-dimensional full potential equation

    NASA Technical Reports Server (NTRS)

    Elbanna, Hesham M.; Carlson, Leland A.

    1992-01-01

    The quasi-analytical approach is applied to the three-dimensional full potential equation to compute wing aerodynamic sensitivity coefficients in the transonic regime. Symbolic manipulation is used to reduce the effort associated with obtaining the sensitivity equations, and the large sensitivity system is solved using 'state of the art' routines. Results are compared to those obtained by the direct finite difference approach and both methods are evaluated to determine their computational accuracy and efficiency. The quasi-analytical approach is shown to be accurate and efficient for large aerodynamic systems.

  17. Use of thermal sieve to allow optical testing of cryogenic optical systems.

    PubMed

    Kim, Dae Wook; Cai, Wenrui; Burge, James H

    2012-05-21

    Full-aperture testing of large cryogenic optical systems has been impractical due to the difficulty of operating a large collimator at cryogenic temperatures. The Thermal Sieve solves this problem by acting as a thermal barrier between an ambient-temperature collimator and the cryogenic system under test. The Thermal Sieve uses a set of thermally controlled baffles with an array of holes that are lined up to pass the light from the collimator without degrading the wavefront, while attenuating the thermal background by nearly 4 orders of magnitude. This paper provides the theory behind the Thermal Sieve system, evaluates the optimization of its optical and thermal performance, and presents the design and analysis for a specific system.

  18. Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1991-01-01

    We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric respectively shifted Hermitian linear systems. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
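
    The paper's remark about the "obvious approach", solving an equivalent real system for the real and imaginary parts of x, is easy to demonstrate. The sketch below builds a toy complex symmetric (hence non-Hermitian) shifted operator, solves it once in complex arithmetic and once via the doubled real formulation, and checks agreement. The operator is illustrative, not the paper's Helmholtz discretization; direct sparse solves stand in for the Krylov iterations to keep the sketch short.

```python
# The "equivalent real system" approach: a complex symmetric (non-Hermitian)
# system Az = b is rewritten as a real system of twice the size in the real
# and imaginary parts of z. The shifted operator is illustrative only.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 200
lap = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (lap - (0.5 + 0.1j) * sp.identity(n)).tocsc()   # complex symmetric, non-Hermitian
b = np.ones(n) + 1j * np.linspace(0, 1, n)

# Equivalent real form: [[Re A, -Im A], [Im A, Re A]] [x; y] = [Re b; Im b]
A_real = sp.bmat([[A.real, -A.imag], [A.imag, A.real]]).tocsc()
xy = spsolve(A_real, np.concatenate([b.real, b.imag]))
z_real_form = xy[:n] + 1j * xy[n:]

z_direct = spsolve(A, b)                            # solve in complex arithmetic
print("max difference between the two solutions:", np.abs(z_real_form - z_direct).max())
```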

  19. Model and Data Reduction for Control, Identification and Compressed Sensing

    NASA Astrophysics Data System (ADS)

    Kramer, Boris

    This dissertation focuses on problems in design, optimization and control of complex, large-scale dynamical systems from different viewpoints. The goal is to develop new algorithms and methods that solve real problems more efficiently, while providing mathematical insight into the success of those methods. There are three main contributions in this dissertation. In Chapter 3, we provide a new method to solve large-scale algebraic Riccati equations, which arise in optimal control, filtering and model reduction. We present a projection-based algorithm utilizing proper orthogonal decomposition, which is demonstrated to produce highly accurate solutions at low rank. The method is parallelizable, easy to implement for practitioners, and is a first step towards a matrix-free approach to solve AREs. Numerical examples for n ≥ 10^6 unknowns are presented. In Chapter 4, we develop a system identification method which is motivated by tangential interpolation. This addresses the challenge of fitting linear time-invariant systems to input-output responses of complex dynamics, where the number of inputs and outputs is relatively large. The method reduces the computational burden imposed by a full singular value decomposition by carefully choosing directions on which to project the impulse response prior to assembly of the Hankel matrix. The identification and model reduction step follows from the eigensystem realization algorithm. We present three numerical examples: a mass-spring-damper system, a heat transfer problem, and a fluid dynamics system. We obtain error bounds and stability results for this method. Chapter 5 deals with control and observation design for parameter-dependent dynamical systems. We address this by using local parametric reduced-order models, which can be used online. Data available from simulations of the system at various configurations (parameters, boundary conditions) is used to extract a sparse basis to represent the dynamics (via dynamic mode decomposition). Subsequently, a new, compressed sensing based classification algorithm is developed which incorporates the extracted dynamic information into the sensing basis. We show that this augmented classification basis makes the method more robust to noise, and results in superior identification of the correct parameter. Numerical examples consist of a Navier-Stokes as well as a Boussinesq flow application.
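    The projection idea in Chapter 3 can be sketched in a few lines. This is a minimal sketch under assumptions, not the dissertation's algorithm: given an orthonormal basis V (for example from proper orthogonal decomposition, whose construction is not reproduced here), the large ARE is projected to a small one, solved with SciPy's dense solver, and lifted back as a low-rank approximation. The function name pod_projected_are is hypothetical.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        def pod_projected_are(A, B, Q, R, V):
            # Galerkin projection onto the k columns of V (assumed
            # orthonormal): the n x n Riccati equation becomes k x k.
            Ak = V.T @ A @ V
            Bk = V.T @ B
            Qk = V.T @ Q @ V
            Pk = solve_continuous_are(Ak, Bk, Qk, R)
            # Lift back: P is approximated by the low-rank product V Pk V^T.
            return V @ Pk @ V.T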

  20. Rapid prototyping strategy for a surgical data warehouse.

    PubMed

    Tang, S-T; Huang, Y-F; Hsiao, M-L; Yang, S-H; Young, S-T

    2003-01-01

    Healthcare processes typically generate an enormous volume of patient information. This information largely represents unexploited knowledge, since current hospital operational systems (e.g., HIS, RIS) are not suitable for knowledge exploitation. Data warehousing provides an attractive method for solving these problems, but the process is very complicated. This study presents a novel strategy for effectively implementing a healthcare data warehouse. This study adopted the rapid prototyping (RP) method, which involves intensive interactions. System developers and users were closely linked throughout the life cycle of the system development. The presence of iterative RP loops meant that the system requirements were increasingly integrated and problems were gradually solved, such that the prototype system evolved into the final operational system. The results were analyzed by monitoring the series of iterative RP loops. First a definite workflow for ensuring data completeness was established, taking a patient-oriented viewpoint when collecting the data. Subsequently the system architecture was determined for data retrieval, storage, and manipulation. This architecture also clarifies the relationships among the novel system and legacy systems. Finally, a graphic user interface for data presentation was implemented. Our results clearly demonstrate the potential for adopting an RP strategy in the successful establishment of a healthcare data warehouse. The strategy can be modified and expanded to provide new services or support new application domains. The design patterns and modular architecture used in the framework will be useful in solving problems in different healthcare domains.

  1. NASTRAN computer system level 12.1

    NASA Technical Reports Server (NTRS)

    Butler, T. G.

    1971-01-01

    Program uses finite element displacement method for solving linear response of large, three-dimensional structures subject to static, dynamic, thermal, and random loadings. Program adapts to computers of different manufacture, permits updating and extension, allows interchange of output and input information between users, and is extensively documented.

  2. Functional reasoning in diagnostic problem solving

    NASA Technical Reports Server (NTRS)

    Sticklen, Jon; Bond, W. E.; Stclair, D. C.

    1988-01-01

    This work is one facet of an integrated approach to diagnostic problem solving for aircraft and space systems currently under development. The authors are applying a method of modeling and reasoning about deep knowledge based on a functional viewpoint. The approach recognizes a level of device understanding which is intermediate between the compiled level of typical expert systems and a deep level at which large-scale device behavior is derived from known properties of device structure and component behavior. At this intermediate functional level, a device is modeled in three steps. First, a component decomposition of the device is defined. Second, the functionality of each device/subdevice is abstractly identified. Third, the state sequences which implement each function are specified. Given a functional representation and a set of initial conditions, the functional reasoner acts as a consequence finder. The output of the consequence finder can be utilized in diagnostic problem solving. The paper also discusses ways in which this functional approach may find application in the aerospace field.

  3. A subgradient approach for constrained binary optimization via quantum adiabatic evolution

    NASA Astrophysics Data System (ADS)

    Karimi, Sahar; Ronagh, Pooya

    2017-08-01

    An outer approximation method has been proposed in the literature for solving the Lagrangian dual of a constrained binary quadratic programming problem via quantum adiabatic evolution. This should be an efficient prescription for solving the Lagrangian dual problem in the presence of an ideally noise-free quantum adiabatic system. However, current implementations of quantum annealing systems demand methods that are efficient at handling possible sources of noise. In this paper, we consider a subgradient method for finding an optimal primal-dual pair for the Lagrangian dual of a constrained binary polynomial programming problem. We then study the quadratic stable set (QSS) problem as a case study. We see that this method applied to the QSS problem can be viewed as an instance-dependent penalty-term approach that avoids large penalty coefficients. Finally, we report our experimental results of using the D-Wave 2X quantum annealer and conclude that our approach helps this quantum processor succeed more often in solving these problems compared to the usual penalty-term approaches.

  4. Computational complexities and storage requirements of some Riccati equation solvers

    NASA Technical Reports Server (NTRS)

    Utku, Senol; Garba, John A.; Ramesh, A. V.

    1989-01-01

    The linear optimal control problem of an nth-order time-invariant dynamic system with a quadratic performance functional is usually solved by the Hamilton-Jacobi approach. This leads to the solution of the differential matrix Riccati equation with a terminal condition. The bulk of the computation for the optimal control problem is related to the solution of this equation. There are various algorithms in the literature for solving the matrix Riccati equation. However, computational complexities and storage requirements as a function of numbers of state variables, control variables, and sensors are not available for all these algorithms. In this work, the computational complexities and storage requirements for some of these algorithms are given. These expressions show the immensity of the computational requirements of the algorithms in solving the Riccati equation for large-order systems such as the control of highly flexible space structures. The expressions are also needed to compute the speedup and efficiency of any implementation of these algorithms on concurrent machines.

  5. Composite solvers for linear saddle point problems arising from the incompressible Stokes equations with highly heterogeneous viscosity structure

    NASA Astrophysics Data System (ADS)

    Sanan, P.; Schnepp, S. M.; May, D.; Schenk, O.

    2014-12-01

    Geophysical applications require efficient forward models for non-linear Stokes flow on high resolution spatio-temporal domains. The bottleneck in applying the forward model is solving the linearized, discretized Stokes problem, which takes the form of a large, indefinite (saddle point) linear system. Due to the heterogeneity of the effective viscosity in the elliptic operator, devising effective preconditioners for saddle point problems has proven challenging and highly problem-dependent. Nevertheless, at least three approaches show promise for preconditioning these difficult systems in an algorithmically scalable way using multigrid and/or domain decomposition techniques. The first is to work with a hierarchy of coarser or smaller saddle point problems. The second is to use the Schur complement method to decouple and sequentially solve for the pressure and velocity. The third is to use the Schur decomposition to devise preconditioners for the full operator. These involve sub-solves resembling inexact versions of the sequential solve. The choice of approach and sub-methods depends crucially on the motivating physics, the discretization, and available computational resources. Here we examine the performance trade-offs for preconditioning strategies applied to idealized models of mantle convection and lithospheric dynamics, characterized by large viscosity gradients. Due to the arbitrary topological structure of the viscosity field in geodynamical simulations, we utilize low order, inf-sup stable mixed finite element spatial discretizations which are suitable when sharp viscosity variations occur in element interiors. Particular attention is paid to possibilities within the decoupled and approximate Schur complement factorization-based monolithic approaches to leverage recently-developed flexible, communication-avoiding, and communication-hiding Krylov subspace methods in combination with 'heavy' smoothers, which require solutions of large per-node sub-problems, well-suited to solution on hybrid computational clusters. To manage the combinatorial explosion of solver options (which include hybridizations of all the approaches mentioned above), we leverage the modularity of the PETSc library.

  6. A Portable Computer System for Auditing Quality of Ambulatory Care

    PubMed Central

    McCoy, J. Michael; Dunn, Earl V.; Borgiel, Alexander E.

    1987-01-01

    Prior efforts to effectively and efficiently audit quality of ambulatory care based on comprehensive process criteria have been limited largely by the complexity and cost of data abstraction and management. Over the years, several demonstration projects have generated large sets of process criteria and mapping systems for evaluating quality of care, but these paper-based approaches have been impractical to implement on a routine basis. Recognizing that portable microcomputers could solve many of the technical problems in abstracting data from medical records, we built upon previously described criteria and developed a microcomputer-based abstracting system that facilitates reliable and cost-effective data abstraction.

  7. Self-Scheduling Parallel Methods for Multiple Serial Codes with Application to WOPWOP

    NASA Technical Reports Server (NTRS)

    Long, Lyle N.; Brentner, Kenneth S.

    2000-01-01

    This paper presents a scheme for efficiently running a large number of serial jobs on parallel computers. Two examples are given of computer programs that run relatively quickly, but often they must be run numerous times to obtain all the results needed. It is very common in science and engineering to have codes that are not massive computing challenges in themselves, but due to the number of instances that must be run, they do become large-scale computing problems. The two examples given here represent common problems in aerospace engineering: aerodynamic panel methods and aeroacoustic integral methods. The first example simply solves many systems of linear equations. This is representative of an aerodynamic panel code where someone would like to solve for numerous angles of attack. The complete code for this first example is included in the appendix so that it can be readily used by others as a template. The second example is an aeroacoustics code (WOPWOP) that solves the Ffowcs Williams Hawkings equation to predict the far-field sound due to rotating blades. In this example, one quite often needs to compute the sound at numerous observer locations, hence parallelization is utilized to automate the noise computation for a large number of observers.
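    The pattern described, many independent serial solves distributed across processors, is easy to reproduce with present-day tooling. The sketch below is a hypothetical stand-in for the panel-method example (the paper's own template appears in its appendix and is not reproduced): each job solves the same system for a different right-hand side, and a process pool self-schedules the job queue.

        import numpy as np
        from multiprocessing import Pool

        def solve_case(args):
            # One serial job: e.g., one angle of attack in a panel code,
            # reduced here to a single dense linear solve.
            A, b = args
            return np.linalg.solve(A, b)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            A = rng.standard_normal((200, 200)) + 200.0 * np.eye(200)
            cases = [(A, rng.standard_normal(200)) for _ in range(32)]
            # chunksize=1 lets idle workers pull the next job off the
            # queue as they finish (self-scheduling).
            with Pool() as pool:
                solutions = pool.map(solve_case, cases, chunksize=1)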

  8. Engineering data management: Experience and projections

    NASA Technical Reports Server (NTRS)

    Jefferson, D. K.; Thomson, B.

    1978-01-01

    Experiences in developing a large engineering data management system are described. Problems which were encountered are presented and projected to future systems. Business applications involving similar types of data bases are described. A data base management system architecture proposed by the business community is described and its applicability to engineering data management is discussed. It is concluded that the most difficult problems faced in engineering and business data management can best be solved by cooperative efforts.

  9. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1991-01-01

    The main contribution of the effort over the last two years is the introduction of the MOPPS system. After an extensive literature search, we introduced the system, which is described next. MOPPS employs a new solution to the problem of managing programs which solve scientific and engineering applications in a distributed processing environment. Autonomous computers cooperate efficiently in solving large scientific problems with this solution. MOPPS has the advantage of not assuming the presence of any particular network topology or configuration, computer architecture, or operating system. It imposes little overhead on network and processor resources while efficiently managing programs concurrently. The core of MOPPS is an intelligent program manager that builds a knowledge base of the execution performance of the parallel programs it is managing under various conditions. The manager applies this knowledge to improve the performance of future runs. The program manager learns from experience.

  10. Explicit integration with GPU acceleration for large kinetic networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brock, Benjamin; Belt, Andrew

    2015-12-01

    We demonstrate the first implementation of recently-developed fast explicit kinetic integration algorithms on modern graphics processing unit (GPU) accelerators. Taking as a generic test case a Type Ia supernova explosion with an extremely stiff thermonuclear network having 150 isotopic species and 1604 reactions coupled to hydrodynamics using operator splitting, we demonstrate the capability to solve of order 100 realistic kinetic networks in parallel in the same time that standard implicit methods can solve a single such network on a CPU. This orders-of-magnitude decrease in computation time for solving systems of realistic kinetic networks implies that important coupled, multiphysics problems in various scientific and technical fields that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible.
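    As a schematic of the data flow only (not the paper's algebraically stabilized explicit algorithms), the sketch below advances a toy three-species chain A -> B -> C with a plain explicit update; on a GPU, one thread block per network would perform exactly this loop for each of the ~100 networks in parallel.

        import numpy as np

        def explicit_step(y, k1, k2, dt):
            # Right-hand side of the toy reaction chain A -> B -> C.
            f = np.array([-k1 * y[0],
                           k1 * y[0] - k2 * y[1],
                           k2 * y[1]])
            return y + dt * f

        y = np.array([1.0, 0.0, 0.0])
        for _ in range(10_000):
            y = explicit_step(y, k1=1.0, k2=0.5, dt=1e-3)
        print(y, y.sum())  # total abundance conserved up to roundoff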

  11. An iterative method for systems of nonlinear hyperbolic equations

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.

    1989-01-01

    An iterative algorithm for the efficient solution of systems of nonlinear hyperbolic equations is presented. Parallelism is evident at several levels. In the formulation of the iteration, the equations are decoupled, thereby providing large-grain parallelism. Parallelism may also be exploited within the solves for each equation. Convergence of the iteration is established via a bounding function argument. Experimental results in two dimensions are presented.

  12. Methods for High-Order Multi-Scale and Stochastic Problems Analysis, Algorithms, and Applications

    DTIC Science & Technology

    2016-10-17

    finite volume schemes, discontinuous Galerkin finite element method, and related methods, for solving computational fluid dynamics (CFD) problems and... approximation for finite element methods. (3) The development of methods of simulation and analysis for the study of large scale stochastic systems of... laws, finite element method, Bernstein-Bezier finite elements, weakly interacting particle systems, accelerated Monte Carlo, stochastic networks

  13. Network Polymers Formed Under Nonideal Conditions.

    DTIC Science & Technology

    1986-12-01

    the system or the limited ability of the statistical model to account for stochastic correlations. The viscosity of the reacting system was measured as... based on competing reactions (ring, chain) and employs equilibrium chain statistics. The work thus far has been limited to single cycle growth on an... polymerizations, because a large number of differential equations must be solved. The Markovian approach (sometimes referred to as the statistical or

  14. Cauldrons: An Abstraction for Concurrent Problems Solving. Revision.

    DTIC Science & Technology

    1986-09-01

    abstractions are the cauldron, a mechanism for organizing inference into distinct reasoning contexts; the frame, a way of modularly describing the components... reasoning systems. 1. Main Points. This paper develops three mechanisms for organizing large distributed reasoning systems: Cauldrons -- a chunk of... introduction. The goal-node/frame mechanism supports a useful abstraction over the bare cauldrons implementation, providing a protocol for organizing

  15. Seeking Space Aliens and the Strong Approximation Property: A (disjoint) Study in Dust Plumes on Planetary Satellites and Nonsymmetric Algebraic Multigrid

    NASA Astrophysics Data System (ADS)

    Southworth, Benjamin Scott

    PART I: One of the most fascinating questions to humans has long been whether life exists outside of our planet. To our knowledge, water is a fundamental building block of life, which makes liquid water on other bodies in the universe a topic of great interest. In fact, there are large bodies of water right here in our solar system, underneath the icy crust of moons around Saturn and Jupiter. The NASA-ESA Cassini Mission spent two decades studying the Saturnian system. One of the many exciting discoveries was a "plume" on the south pole of Enceladus, emitting hundreds of kg/s of water vapor and frozen water-ice particles from Enceladus' subsurface ocean. It has since been determined that Enceladus likely has a global liquid water ocean separating its rocky core from its icy surface, with conditions that are relatively favorable to support life. The plume is of particular interest because it gives direct access to ocean particles from space, by flying through the plume. Recently, evidence has been found for similar geological activity occurring on Jupiter's moon Europa, long considered one of the most likely candidate bodies to support life in our solar system. Here, a model for plume-particle dynamics is developed based on studies of the Enceladus plume and data from the Cassini Cosmic Dust Analyzer. A C++, OpenMP/MPI parallel software package is then built to run large scale simulations of dust plumes on planetary satellites. In the case of Enceladus, data from simulations and the Cassini mission provide insight into the structure of emissions on the surface, the total mass production of the plume, and the distribution of particles being emitted. Each of these is fundamental to understanding the plume and, for Europa and Enceladus, simulation data provide important results for the planning of future missions to these icy moons. In particular, this work has contributed to the Europa Clipper mission and proposed Enceladus Life Finder. PART II: Solving large, sparse linear systems arises often in the modeling of biological and physical phenomena, data analysis through graphs and networks, and other scientific applications. This work focuses primarily on linear systems resulting from the discretization of partial differential equations (PDEs). Because solving linear systems is the bottleneck of many large simulation codes, there is a rich field of research in developing "fast" solvers, with the ultimate goal being a method that solves an n x n linear system in O(n) operations. One of the most effective classes of solvers is algebraic multigrid (AMG), which is a multilevel iterative method based on projecting the problem into progressively smaller spaces, and scales like O(n) or O(n log n) for certain classes of problems. The field of AMG is well-developed for symmetric positive definite matrices, and is typically most effective on linear systems resulting from the discretization of scalar elliptic PDEs, such as the heat equation. Systems of PDEs can add additional difficulties, but the underlying linear algebraic theory is consistent and, in many cases, an elliptic system of PDEs can be handled well by AMG with appropriate modifications of the solver. Solving general, nonsymmetric linear systems remains the wild west of AMG (and other fast solvers), lacking significant results in convergence theory as well as robust methods.
Here, we develop new theoretical motivation and practical variations of AMG to solve nonsymmetric linear systems, often resulting from the discretization of hyperbolic PDEs. In particular, multilevel convergence of AMG for nonsymmetric systems is proven for the first time. A new nonsymmetric AMG solver is also developed based on an approximate ideal restriction, referred to as AIR, which is able to solve advection-dominated, hyperbolic-type problems that are outside the scope of existing AMG solvers and other fast iterative methods. AIR demonstrates scalable convergence on unstructured meshes, in multiple dimensions, and with high-order finite elements, expanding the applicability of AMG to a new class of problems.
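    For readers who want to experiment with classical AMG behavior (the dissertation's AIR variant for nonsymmetric systems is not sketched here), the open-source pyamg package, assumed installed, reproduces the near-O(n) behavior on a model elliptic problem:

        import numpy as np
        import pyamg  # assumed available: pip install pyamg

        # Model SPD problem: 5-point Poisson on a 500 x 500 grid.
        A = pyamg.gallery.poisson((500, 500), format='csr')
        b = np.ones(A.shape[0])

        ml = pyamg.ruge_stuben_solver(A)  # classical (Ruge-Stuben) hierarchy
        x = ml.solve(b, tol=1e-8)         # a handful of V-cycles
        print(np.linalg.norm(b - A @ x))  # small residual at O(n)-like cost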

  16. Estimating the Local Size and Coverage of Interaction Network Regions

    ERIC Educational Resources Information Center

    Eagle, Michael; Barnes, Tiffany

    2015-01-01

    Interactive problem solving environments, such as intelligent tutoring systems and educational video games, produce large amounts of transactional data which make it a challenge for both researchers and educators to understand how students work within the environment. Researchers have modeled the student-tutor interactions using complex network…

  17. Dueling Co-Authors: How Collaborators Create and Sometimes Solve Contributorship Conflicts

    ERIC Educational Resources Information Center

    Youtie, Jan; Bozeman, Barry

    2016-01-01

    Publishing is central to the academic reward system. Contributorship issues loom large in this context. The need for fairness in authorship decisions is upheld in most collaborations, yet some collaborations are plagued by "nightmare" issues ranging from inappropriate authorship credit to author order issues to exploitation of students…

  18. Element-by-element Solution Procedures for Nonlinear Structural Analysis

    NASA Technical Reports Server (NTRS)

    Hughes, T. J. R.; Winget, J. M.; Levit, I.

    1984-01-01

    Element-by-element approximate factorization procedures are proposed for solving the large finite element equation systems which arise in nonlinear structural mechanics. Architectural and data base advantages of the present algorithms over traditional direct elimination schemes are noted. Results of calculations suggest considerable potential for the methods described.

  19. Self-consistent-field perturbation theory for the Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Goodson, David Z.

    1997-06-01

    A method is developed for using large-order perturbation theory to solve the systems of coupled differential equations that result from the variational solution of the Schrödinger equation with wave functions of product form. This is a noniterative, computationally efficient way to solve self-consistent-field (SCF) equations. Possible applications include electronic structure calculations using products of functions of collective coordinates that include electron correlation, vibrational SCF calculations for coupled anharmonic oscillators with selective coupling of normal modes, and ab initio calculations of molecular vibration spectra without the Born-Oppenheimer approximation.

  20. GPELab, a Matlab toolbox to solve Gross-Pitaevskii equations II: Dynamics and stochastic simulations

    NASA Astrophysics Data System (ADS)

    Antoine, Xavier; Duboscq, Romain

    2015-08-01

    GPELab is a free Matlab toolbox for modeling and numerically solving large classes of systems of Gross-Pitaevskii equations that arise in the physics of Bose-Einstein condensates. The aim of this second paper, which follows (Antoine and Duboscq, 2014), is to first present the various pseudospectral schemes available in GPELab for computing the deterministic and stochastic nonlinear dynamics of Gross-Pitaevskii equations (Antoine, et al., 2013). Next, the corresponding GPELab functions are explained in detail. Finally, some numerical examples are provided to show how the code works for the complex dynamics of BEC problems.

  1. Information Power Grid Posters

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi

    2003-01-01

    This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provide seamless and uniform access to the geographically dispersed, computational, data storage, networking, instruments, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of large-scale computing node into distributed environment, remote access to high data rate instruments, and exploratory grid environment.

  2. NMESys: An expert system for network fault detection

    NASA Technical Reports Server (NTRS)

    Nelson, Peter C.; Warpinski, Janet

    1991-01-01

    The problem of network management is becoming an increasingly difficult and challenging task. It is very common today to find heterogeneous networks consisting of many different types of computers, operating systems, and protocols. Implementing a network with this many components is difficult enough, while maintaining such a network is an even larger problem. A prototype network management expert system, NMESys, was implemented in the C Language Integrated Production System (CLIPS). NMESys concentrates on solving some of the critical problems encountered in managing a large network. The major goal of NMESys is to provide a network operator with an expert system tool to quickly and accurately detect hard failures, detect potential failures, and minimize or eliminate user down time in a large network.

  3. Large-N kinetic theory for highly occupied systems

    NASA Astrophysics Data System (ADS)

    Walz, R.; Boguslavski, K.; Berges, J.

    2018-06-01

    We consider an effective kinetic description for quantum many-body systems, which is not based on a weak-coupling or diluteness expansion. Instead, it employs an expansion in the number of field components N of the underlying scalar quantum field theory. Extending previous studies, we demonstrate that the large-N kinetic theory at next-to-leading order is able to describe important aspects of highly occupied systems, which are beyond standard perturbative kinetic approaches. We analyze the underlying quasiparticle dynamics by computing the effective scattering matrix elements analytically and solve numerically the large-N kinetic equation for a highly occupied system far from equilibrium. This allows us to compute the universal scaling form of the distribution function at an infrared nonthermal fixed point within a kinetic description, and we compare to existing lattice field theory simulation results.

  4. High-order Path Integral Monte Carlo methods for solving strongly correlated fermion problems

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.

    2015-03-01

    In solving for the ground state of a strongly correlated many-fermion system, the conventional second-order Path Integral Monte Carlo method is plagued by the sign problem. This is due to the large number of anti-symmetric free fermion propagators that are needed to extract the square of the ground state wave function at large imaginary time. In this work, I show that optimized fourth-order Path Integral Monte Carlo methods, which use no more than five free-fermion propagators, in conjunction with the use of the Hamiltonian energy estimator, can yield accurate ground state energies for quantum dots with up to 20 polarized electrons. The correlations are directly built in and no explicit wave functions are needed. This work is supported by the Qatar National Research Fund NPRP GRANT #5-674-1-114.

  5. Difficult macromolecular structures determined using X-ray diffraction techniques.

    PubMed

    Hernández-Santoyo, Alejandra

    2012-07-01

    Macromolecular crystallography has been, for the last few decades, the main source of structural information of biological macromolecular systems and it is one of the most powerful techniques for the analysis of enzyme mechanisms and macromolecular interactions at the atomic level. In addition, it is also an extremely powerful tool for drug design. Recent technological and methodological developments in macromolecular X-ray crystallography have allowed solving structures that until recently were considered difficult or even impossible, such as structures at atomic or subatomic resolution or large macromolecular complexes and assemblies at low resolution. These developments have also helped to solve the 3D-structure of macromolecules from twin crystals. Recently, this technique complemented with cryo-electron microscopy and neutron crystallography has provided the structure of large macromolecular machines with great precision allowing understanding of the mechanisms of their function.

  6. Adaptive parametric model order reduction technique for optimization of vibro-acoustic models: Application to hearing aid design

    NASA Astrophysics Data System (ADS)

    Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin

    2018-06-01

    Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.
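    The core reduction step that pMOR techniques share can be illustrated with a hypothetical undamped frequency sweep (the paper's adaptive basis enrichment and error indicator are omitted, and all names below are invented for illustration): project the system matrices once onto a basis V, then solve only small systems per frequency.

        import numpy as np

        def reduced_frequency_sweep(K, M, f, V, omegas):
            # One-time Galerkin projection onto the k columns of V.
            Kr, Mr, fr = V.T @ K @ V, V.T @ M @ V, V.T @ f
            # Per frequency, solve a k x k system instead of n x n,
            # then lift the reduced solution back to the full space.
            return [V @ np.linalg.solve(Kr - w**2 * Mr, fr) for w in omegas]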

  7. Research on the adaptive optical control technology based on DSP

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolu; Xue, Qiao; Zeng, Fa; Zhao, Junpu; Zheng, Kuixing; Su, Jingqin; Dai, Wanjun

    2018-02-01

    Adaptive optics is a real-time compensation technique, using a high-speed support system, for wavefront errors caused by atmospheric turbulence. However, the randomness and instantaneity of atmospheric change introduce great difficulties into the design of adaptive optical systems. A large number of complex real-time operations lead to large delay, which is an insurmountable problem. To solve this problem, hardware operation and a parallel processing strategy are proposed, and a high-speed adaptive optical control system based on DSP is developed. A hardware counter is used to check the system. The results show that the system can complete a closed-loop control cycle in 7.1 ms, improving the control bandwidth of the adaptive optical system. Using this system, wavefront measurement and closed-loop experiments are carried out, with good results obtained.

  8. Hierarchical Medical System Based on Big Data and Mobile Internet: A New Strategic Choice in Health Care

    PubMed Central

    2017-01-01

    China is setting up a hierarchical medical system to solve the problems of biased resource allocation and high patient flows to large hospitals. The development of big data and mobile Internet technology provides a new perspective for the establishment of hierarchical medical system. This viewpoint discusses the challenges with the hierarchical medical system in China and how big data and mobile Internet can be used to mitigate these challenges. PMID:28790024

  9. The Cyclic Nature of Problem Solving: An Emergent Multidimensional Problem-Solving Framework

    ERIC Educational Resources Information Center

    Carlson, Marilyn P.; Bloom, Irene

    2005-01-01

    This paper describes the problem-solving behaviors of 12 mathematicians as they completed four mathematical tasks. The emergent problem-solving framework draws on the large body of research, as grounded by and modified in response to our close observations of these mathematicians. The resulting "Multidimensional Problem-Solving Framework" has four…

  10. Comparing direct and iterative equation solvers in a large structural analysis software system

    NASA Technical Reports Server (NTRS)

    Poole, E. L.

    1991-01-01

    Two direct Choleski equation solvers and two iterative preconditioned conjugate gradient (PCG) equation solvers used in a large structural analysis software system are described. The two direct solvers are implementations of the Choleski method for variable-band matrix storage and sparse matrix storage. The two iterative PCG solvers include the Jacobi conjugate gradient method and an incomplete Choleski conjugate gradient method. The performance of the direct and iterative solvers is compared by solving several representative structural analysis problems. Some key factors affecting the performance of the iterative solvers relative to the direct solvers are identified.
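    A small self-contained analogue of this comparison (illustrative only; the paper's solvers and test structures are not reproduced) can be run with SciPy, using a 2D Poisson matrix as a stand-in for a structural stiffness matrix:

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # Sparse SPD test matrix: 5-point Laplacian on an n x n grid.
        n = 64
        T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
        A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsr()
        b = np.ones(A.shape[0])

        # Direct solve via sparse factorization.
        x_direct = spla.spsolve(A, b)

        # Iterative solve: conjugate gradients, Jacobi preconditioner.
        M = spla.LinearOperator(A.shape, matvec=lambda v: v / A.diagonal())
        x_iter, info = spla.cg(A, b, M=M)  # default tolerance

        for name, x in (("direct", x_direct), ("jacobi-pcg", x_iter)):
            print(name, np.linalg.norm(b - A @ x))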

  11. A scalable parallel algorithm for multiple objective linear programs

    NASA Technical Reports Server (NTRS)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLP's). Job balance, speedup and scalability are of primary interest in evaluating efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLP's, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLP's are also included.

  12. Parallel Computation of Flow in Heterogeneous Media Modelled by Mixed Finite Elements

    NASA Astrophysics Data System (ADS)

    Cliffe, K. A.; Graham, I. G.; Scheichl, R.; Stals, L.

    2000-11-01

    In this paper we describe a fast parallel method for solving highly ill-conditioned saddle-point systems arising from mixed finite element simulations of stochastic partial differential equations (PDEs) modelling flow in heterogeneous media. Each realisation of these stochastic PDEs requires the solution of the linear first-order velocity-pressure system comprising Darcy's law coupled with an incompressibility constraint. The chief difficulty is that the permeability may be highly variable, especially when the statistical model has a large variance and a small correlation length. For reasonable accuracy, the discretisation has to be extremely fine. We solve these problems by first reducing the saddle-point formulation to a symmetric positive definite (SPD) problem using a suitable basis for the space of divergence-free velocities. The reduced problem is solved using parallel conjugate gradients preconditioned with an algebraically determined additive Schwarz domain decomposition preconditioner. The result is a solver which exhibits a good degree of robustness with respect to the mesh size as well as to the variance and to physically relevant values of the correlation length of the underlying permeability field. Numerical experiments exhibit almost optimal levels of parallel efficiency. The domain decomposition solver (DOUG, http://www.maths.bath.ac.uk/~parsoft) used here not only is applicable to this problem but can be used to solve general unstructured finite element systems on a wide range of parallel architectures.

  13. Expert system technology

    NASA Technical Reports Server (NTRS)

    Prince, Mary Ellen

    1987-01-01

    The expert system is a computer program which attempts to reproduce the problem-solving behavior of an expert, who is able to view problems from a broad perspective and arrive at conclusions rapidly, using intuition, shortcuts, and analogies to previous situations. Expert systems are a departure from the usual artificial intelligence approach to problem solving. Researchers have traditionally tried to develop general modes of human intelligence that could be applied to many different situations. Expert systems, on the other hand, tend to rely on large quantities of domain specific knowledge, much of it heuristic. The reasoning component of the system is relatively simple and straightforward. For this reason, expert systems are often called knowledge based systems. The report expands on the foregoing. Section 1 discusses the architecture of a typical expert system. Section 2 deals with the characteristics that make a problem a suitable candidate for expert system solution. Section 3 surveys current technology, describing some of the software aids available for expert system development. Section 4 discusses the limitations of the latter. The concluding section makes predictions of future trends.

  14. Investigation of Conjugate Heat Transfer in Turbine Blades and Vanes

    NASA Technical Reports Server (NTRS)

    Kassab, A. J.; Kapat, J. S.

    2001-01-01

    We report on work carried out to develop a 3-D coupled Finite Volume/BEM-based temperature forward/flux back (TFFB) coupling algorithm to solve the conjugate heat transfer (CHT) problem which arises naturally in the analysis of systems exposed to a convective environment. Here, heat conduction within a structure is coupled to heat transfer to the external fluid, which is convecting heat into or out of the solid structure. There are two basic approaches to solving coupled fluid-structural systems. The first is direct coupling, where the different fields are solved simultaneously in one large set of equations. The second approach is a loose coupling strategy, where each set of field equations is solved to provide boundary conditions for the other. The equations are solved in turn until an iterated convergence criterion is met at the fluid-solid interface. The loose coupling strategy is particularly attractive when coupling auxiliary field equations to computational fluid dynamics codes. We adopt the latter method, in which the BEM is used to solve heat conduction inside a structure exposed to a convective field that is in turn resolved by solving the NASA Glenn compressible Navier-Stokes finite volume code Glenn-HT. The BEM code features constant and bi-linear discontinuous elements and an ILU-preconditioned GMRES iterative solver for the non-symmetric algebraic set arising in the conduction solution. Continuity of flux and temperature is enforced at the solid/fluid interface, and a radial-basis function scheme is used to interpolate information between the CFD and BEM surface grids. Additionally, relaxation is implemented in passing the fluxes from the conduction solution to the fluid solution. Results from a simple test example are reported.
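    The loose coupling loop described above reduces to a short fixed-point iteration. The sketch below is schematic: fluid_flux and solid_temp are hypothetical stand-ins for the Glenn-HT and BEM solves, and the under-relaxation of the fluxes mirrors the relaxation mentioned in the abstract.

        import numpy as np

        def tffb_loop(fluid_flux, solid_temp, T0, relax=0.5,
                      tol=1e-6, max_cycles=100):
            # Temperature forward: the fluid solve maps interface
            # temperatures to interface heat fluxes. Flux back: the
            # conduction (BEM) solve maps fluxes back to temperatures.
            T = np.asarray(T0, dtype=float)
            q = fluid_flux(T)
            for _ in range(max_cycles):
                q = relax * fluid_flux(T) + (1.0 - relax) * q
                T_new = solid_temp(q)
                if np.max(np.abs(T_new - T)) < tol:  # interface converged
                    return T_new
                T = T_new
            return T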

  15. Iterative algorithms for large sparse linear systems on parallel computers

    NASA Technical Reports Server (NTRS)

    Adams, L. M.

    1982-01-01

    Algorithms for assembling in parallel the sparse system of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering are developed. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed and results of this model for the algorithms are given.

  16. A structural model decomposition framework for systems health management

    NASA Astrophysics Data System (ADS)

    Roychoudhury, I.; Daigle, M.; Bregon, A.; Pulido, B.

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  17. A Structural Model Decomposition Framework for Systems Health Management

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belamino

    2013-01-01

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  18. Non-Linear Dynamics and Emergence in Laboratory Fusion Plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hnat, B.

    2011-09-22

    Turbulent behaviour of a laboratory fusion plasma system is modelled using the extended Hasegawa-Wakatani equations. The model is solved numerically using finite difference techniques. We discuss non-linear effects in such a system in the presence of micro-instabilities, specifically a drift wave instability. We explore particle dynamics in different ranges of parameters and show that the transport changes from diffusive to non-diffusive when large directional flows develop.

  19. An optimal control strategy for hybrid actuator systems: Application to an artificial muscle with electric motor assist.

    PubMed

    Ishihara, Koji; Morimoto, Jun

    2018-03-01

    Humans use multiple muscles to generate such joint movements as an elbow motion. With multiple lightweight and compliant actuators, joint movements can also be efficiently generated. Similarly, robots can use multiple actuators to efficiently generate a one degree of freedom movement. For this movement, the desired joint torque must be properly distributed to each actuator. One approach to cope with this torque distribution problem is an optimal control method. However, solving the optimal control problem at each control time step has not been deemed a practical approach due to its large computational burden. In this paper, we propose a computationally efficient method to derive an optimal control strategy for a hybrid actuation system composed of multiple actuators, where each actuator has different dynamical properties. We investigated a singularly perturbed system of the hybrid actuator model that subdivides the original large-scale control problem into smaller subproblems, so that the optimal control outputs for each actuator can be derived at each control time step, and we applied the proposed method to our pneumatic-electric hybrid actuator system. Our method derived a torque distribution strategy for the hybrid actuator by dealing with the difficulty of solving real-time optimal control problems.

  20. Research on knowledge representation, machine learning, and knowledge acquisition

    NASA Technical Reports Server (NTRS)

    Buchanan, Bruce G.

    1987-01-01

    Research in knowledge representation, machine learning, and knowledge acquisition performed at the Knowledge Systems Laboratory is summarized. The major goal of the research was to develop flexible, effective methods for representing the qualitative knowledge necessary for solving large problems that require symbolic reasoning as well as numerical computation. The research focused on integrating different representation methods to describe different kinds of knowledge more effectively than any one method can alone. In particular, emphasis was placed on representing and using spatial information about three-dimensional objects and constraints on the arrangement of these objects in space. Another major theme is the development of robust machine learning programs that can be integrated with a variety of intelligent systems. To achieve this goal, learning methods were designed, implemented, and experimented with in several different problem-solving environments.

  1. Operating Quantum States in Single Magnetic Molecules: Implementation of Grover's Quantum Algorithm.

    PubMed

    Godfrin, C; Ferhat, A; Ballou, R; Klyatskaya, S; Ruben, M; Wernsdorfer, W; Balestro, F

    2017-11-03

    Quantum algorithms use the principles of quantum mechanics, such as, for example, quantum superposition, in order to solve particular problems outperforming standard computation. They are developed for cryptography, searching, optimization, simulation, and solving large systems of linear equations. Here, we implement Grover's quantum algorithm, proposed to find an element in an unsorted list, using a single nuclear 3/2 spin carried by a Tb ion sitting in a single molecular magnet transistor. The coherent manipulation of this multilevel quantum system (qudit) is achieved by means of electric fields only. Grover's search algorithm is implemented by constructing a quantum database via a multilevel Hadamard gate. The Grover sequence then allows us to select each state. The presented method is of universal character and can be implemented in any multilevel quantum system with nonequal spaced energy levels, opening the way to novel quantum search algorithms.
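    A statevector sketch of the algorithm being implemented (generic textbook Grover on a simulated register, not the molecular-magnet pulse sequences of the paper): prepare the uniform superposition with a Hadamard-like step, then alternate the oracle's phase flip with inversion about the mean.

        import numpy as np

        def grover_search(n_qubits, marked):
            N = 2 ** n_qubits
            psi = np.full(N, 1.0 / np.sqrt(N))    # uniform superposition
            n_iter = int(np.pi / 4 * np.sqrt(N))  # near-optimal count
            for _ in range(n_iter):
                psi[marked] *= -1.0               # oracle: flip target phase
                psi = 2.0 * psi.mean() - psi      # invert about the mean
            return int(np.argmax(psi ** 2))       # most probable outcome

        assert grover_search(4, marked=11) == 11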

  2. Operating Quantum States in Single Magnetic Molecules: Implementation of Grover's Quantum Algorithm

    NASA Astrophysics Data System (ADS)

    Godfrin, C.; Ferhat, A.; Ballou, R.; Klyatskaya, S.; Ruben, M.; Wernsdorfer, W.; Balestro, F.

    2017-11-01

    Quantum algorithms use the principles of quantum mechanics, such as, for example, quantum superposition, in order to solve particular problems outperforming standard computation. They are developed for cryptography, searching, optimization, simulation, and solving large systems of linear equations. Here, we implement Grover's quantum algorithm, proposed to find an element in an unsorted list, using a single nuclear 3/2 spin carried by a Tb ion sitting in a single molecular magnet transistor. The coherent manipulation of this multilevel quantum system (qudit) is achieved by means of electric fields only. Grover's search algorithm is implemented by constructing a quantum database via a multilevel Hadamard gate. The Grover sequence then allows us to select each state. The presented method is of universal character and can be implemented in any multilevel quantum system with nonequal spaced energy levels, opening the way to novel quantum search algorithms.

  3. Time-domain finite elements in optimal control with application to launch-vehicle guidance. PhD. Thesis

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.

    1991-01-01

    A time-domain finite element method is developed for optimal control problems. The theory derived is general enough to handle a large class of problems including optimal control problems that are continuous in the states and controls, problems with discontinuities in the states and/or system equations, problems with control inequality constraints, problems with state inequality constraints, or problems involving any combination of the above. The theory is developed in such a way that no numerical quadrature is necessary regardless of the degree of nonlinearity in the equations. Also, the same shape functions may be employed for every problem because all strong boundary conditions are transformed into natural or weak boundary conditions. In addition, the resulting nonlinear algebraic equations are very sparse. Use of sparse matrix solvers allows for the rapid and accurate solution of very difficult optimization problems. The formulation is applied to launch-vehicle trajectory optimization problems, and results show that real-time optimal guidance is realizable with this method. Finally, a general problem solving environment is created for solving a large class of optimal control problems. The algorithm uses both FORTRAN and a symbolic computation program to solve problems with a minimum of user interaction. The use of symbolic computation eliminates the need for user-written subroutines which greatly reduces the setup time for solving problems.

  4. Using adaptive grid in modeling rocket nozzle flow

    NASA Technical Reports Server (NTRS)

    Chow, Alan S.; Jin, Kang-Ren

    1992-01-01

    The mechanical behavior of a rocket motor internal flow field results in a system of nonlinear partial differential equations which cannot be solved analytically. However, this system of equations, called the Navier-Stokes equations, can be solved numerically. The accuracy and convergence of the solution of the system of equations will depend largely on how precisely the sharp gradients in the domain of interest can be resolved. With the advances in computer technology, more sophisticated algorithms are available to improve the accuracy and convergence of the solutions. Adaptive grid generation is one of the schemes which can be incorporated into the algorithm to enhance the capability of numerical modeling. It is equivalent to putting intelligence into the algorithm to optimize the use of computer memory. With this scheme, the finite difference domain of the flow field, called the grid, neither has to be very fine nor strategically placed at the locations of sharp gradients. The grid is self-adapting as the solution evolves. This scheme significantly improves the methodology of solving flow problems in rocket nozzles by taking the refinement part of grid generation out of the hands of computational fluid dynamics (CFD) specialists and placing it in the computer algorithm itself.

  5. Computational efficiency improvements for image colorization

    NASA Astrophysics Data System (ADS)

    Yu, Chao; Sharma, Gaurav; Aly, Hussein

    2013-03-01

    We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image and the color image is obtained by propagating color information from the scribbles to surrounding regions, while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower resolution subsampled image is first colorized and this low resolution color image is upsampled to initialize the colorization process for the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
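    The first innovation, iterating without assembling the sparse matrix, can be sketched with uniform neighbour weights (a simplification; the actual system uses intensity-dependent smoothness weights, and the integral-image and coarse-to-fine accelerations are omitted):

        import numpy as np

        def propagate_scribbles(scribbles, mask, n_iter=500):
            # scribbles: user-specified chrominance values (0 elsewhere);
            # mask: True where a pixel was scribbled and must stay fixed.
            u = scribbles.astype(float).copy()
            for _ in range(n_iter):
                # Matrix-free Jacobi sweep: average the four neighbours
                # instead of multiplying by an assembled sparse matrix.
                avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                              np.roll(u, 1, 1) + np.roll(u, -1, 1))
                u = np.where(mask, scribbles, avg)
            return u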

  6. The Genesis of Transactional Strategies Instruction in a Reading Program for At-Risk Students.

    ERIC Educational Resources Information Center

    Schuder, Ted

    1993-01-01

    Describes the problem-solving response of the staff of a large public school system to at-risk students' low achievement and lack of access to the district's enriched reading and language arts curriculum. Examines the development and implementation of the Students Achieving Independent Learning (SAIL) program. (MDM)

  7. Explicit integration with GPU acceleration for large kinetic networks

    DOE PAGES

    Brock, Benjamin; Belt, Andrew; Billings, Jay Jay; ...

    2015-09-15

    In this study, we demonstrate the first implementation of recently-developed fast explicit kinetic integration algorithms on modern graphics processing unit (GPU) accelerators. Taking as a generic test case a Type Ia supernova explosion with an extremely stiff thermonuclear network having 150 isotopic species and 1604 reactions coupled to hydrodynamics using operator splitting, we demonstrate the capability to solve of order 100 realistic kinetic networks in parallel in the same time that standard implicit methods can solve a single such network on a CPU. In addition, this orders-of-magnitude decrease in computation time for solving systems of realistic kinetic networks implies that important coupled, multiphysics problems in various scientific and technical fields that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible.
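
    The flavor of such fast explicit methods can be conveyed by the explicit-asymptotic update for a kinetic equation dy/dt = F+(y) - k(y)*y: stiff components (k*dt large) use a stable algebraic approximation instead of forward Euler. The sketch below is a generic illustration of that idea, not the paper's algorithm or code:

      import numpy as np

      def asymptotic_step(y, dt, flux_in, rate_out):
          """One explicit step for dy/dt = F_plus(y) - k(y)*y.

          Components with k*dt > 1 use the asymptotic update
          y -> (y + dt*F_plus)/(1 + dt*k), which stays stable where
          forward Euler would diverge; the rest use forward Euler.
          """
          Fp, k = flux_in(y), rate_out(y)
          return np.where(k*dt > 1.0,
                          (y + dt*Fp)/(1.0 + dt*k),   # asymptotic (stiff)
                          y + dt*(Fp - k*y))          # forward Euler (non-stiff)

      # usage: stiff relaxation dy/dt = S - k*y with k*dt = 10;
      # forward Euler diverges here, the asymptotic update relaxes to S/k = 1
      S, k = 1e6, 1e6
      y = np.array([0.0])
      for _ in range(20):
          y = asymptotic_step(y, 1e-5, lambda y: np.array([S]),
                              lambda y: np.array([k]))
      print(y)   # ~1.0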

  8. Proposed solution methodology for the dynamically coupled nonlinear geared rotor mechanics equations

    NASA Technical Reports Server (NTRS)

    Mitchell, L. D.; David, J. W.

    1983-01-01

    The equations which describe the three-dimensional motion of an unbalanced rigid disk in a shaft system are nonlinear and contain dynamic-coupling terms. Traditionally, investigators have used an order analysis to justify ignoring the nonlinear terms in the equations of motion, producing a set of linear equations. This paper will show that, when gears are included in such a rotor system, the nonlinear dynamic-coupling terms are potentially as large as the linear terms. Because of this, one must attempt to solve the nonlinear rotor mechanics equations. A solution methodology is investigated to obtain approximate steady-state solutions to these equations. As an example of the use of the technique, a simpler set of equations is solved and the results compared to numerical simulations. These equations represent the forced, steady-state response of a spring-supported pendulum. These equations were chosen because they contain the type of nonlinear terms found in the dynamically-coupled nonlinear rotor equations. The numerical simulations indicate this method is reasonably accurate even when the nonlinearities are large.

  9. A hybrid parallel framework for the cellular Potts model simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Yi; He, Kejing; Dong, Shoubin

    2009-01-01

    The Cellular Potts Model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximated, and cannot be used for large-scale, complex 3D simulations. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed to clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on shared-memory SMP systems using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving and SMP systems are more and more common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel Cellular Potts Model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for the large scale simulation (~10^8 sites) of the complex collective behavior of numerous cells (~10^6).

  10. Use of multilevel modeling for determining optimal parameters of heat supply systems

    NASA Astrophysics Data System (ADS)

    Stennikov, V. A.; Barakhtenko, E. A.; Sokolov, D. V.

    2017-07-01

    The problem of finding optimal parameters of a heat-supply system (HSS) lies in ensuring the required throughput capacity of a heat network by determining pipeline diameters and the characteristics and locations of pumping stations. Effective methods for solving this problem, i.e., the method of stepwise optimization based on the concept of dynamic programming and the method of multicircuit optimization, were proposed in the context of the hydraulic circuit theory developed at Melentiev Energy Systems Institute (Siberian Branch, Russian Academy of Sciences). These methods enable us to determine optimal parameters of various types of piping systems due to flexible adaptability of the calculation procedure to intricate nonlinear mathematical models describing features of used equipment items and methods of their construction and operation. The new and most significant results achieved in developing methodological support and software for finding optimal parameters of complex heat supply systems are presented: a new procedure for solving the problem based on multilevel decomposition of a heat network model that makes it possible to proceed from the initial problem to a set of interrelated, less cumbersome subproblems with reduced dimensionality; a new algorithm implementing the method of multicircuit optimization and focused on the calculation of a hierarchical model of a heat supply system; the SOSNA software system for determining optimal parameters of intricate heat-supply systems and implementing the developed methodological foundation. The proposed procedure and algorithm enable us to solve engineering problems of finding the optimal parameters of multicircuit heat supply systems having large (real) dimensionality, and are applied in solving urgent problems related to the optimal development and reconstruction of these systems. The developed methodological foundation and software can be used for designing heat supply systems in the Central and the Admiralty regions in St. Petersburg, the city of Bratsk, and the Magistral'nyi settlement.

  11. The solution of large multi-dimensional Poisson problems

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1974-01-01

    The Buneman algorithm for solving Poisson problems can be adapted to solve large Poisson problems on computers with a rotating drum memory so that the computation is done with very little time lost due to rotational latency of the drum.

  12. A family of conjugate gradient methods for large-scale nonlinear equations.

    PubMed

    Feng, Dexiang; Sun, Min; Wang, Xueyong

    2017-01-01

    In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. Each iteration needs only low storage, and the subproblem can be solved easily. Compared with existing solution methods for the problem, its global convergence is established without requiring Lipschitz continuity of the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.
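
    A minimal sketch of a method in this family, pairing a derivative-free conjugate-gradient-like direction with a hyperplane projection step in the style of Solodov and Svaiter; the line-search parameters and the beta rule below are illustrative choices, not the paper's exact update:

      import numpy as np

      def cg_projection_solve(F, x0, tol=1e-8, max_iter=1000, rho=0.5, sigma=1e-4):
          """Solve the monotone system F(x) = 0 without derivatives.

          Direction: d = -F(x) + beta*d (conjugate-gradient-like).
          Line search: find alpha with -F(x+alpha*d)^T d >= sigma*alpha*||d||^2.
          Update: project x onto the hyperplane {y : F(z)^T (y - z) = 0}.
          """
          x = np.asarray(x0, float)
          Fx = F(x)
          d = -Fx
          for _ in range(max_iter):
              if np.linalg.norm(Fx) < tol:
                  break
              alpha = 1.0
              while True:                      # backtracking line search
                  z = x + alpha*d
                  Fz = F(z)
                  if -Fz @ d >= sigma*alpha*(d @ d) or alpha < 1e-12:
                      break
                  alpha *= rho
              x = x - (Fz @ (x - z))/(Fz @ Fz)*Fz   # hyperplane projection
              F_new = F(x)
              beta = max(0.0, F_new @ (F_new - Fx)/(Fx @ Fx))  # PRP-like rule
              d = -F_new + beta*d
              Fx = F_new
          return x

      # usage: the monotone system F(x) = x + sin(x) = 0 in 1000 variables
      x = cg_projection_solve(lambda x: x + np.sin(x), np.ones(1000))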

  13. Development and testing of a compartmentalized reaction network model for redox zones in contaminated aquifers

    USGS Publications Warehouse

    Abrams, Robert H.; Loague, Keith; Kent, Douglas B.

    1998-01-01

    The work reported here is the first part of a larger effort focused on efficient numerical simulation of redox zone development in contaminated aquifers. The sequential use of various electron acceptors, which is governed by the energy yield of each reaction, gives rise to redox zones. The large difference in energy yields between the various redox reactions leads to systems of equations that are extremely ill-conditioned. These equations are very difficult to solve, especially in the context of coupled fluid flow, solute transport, and geochemical simulations. We have developed a general, rational method to solve such systems where we focus on the dominant reactions, compartmentalizing them in a manner that is analogous to the redox zones that are often observed in the field. The compartmentalized approach allows us to easily solve a complex geochemical system as a function of time and energy yield, laying the foundation for our ongoing work in which we couple the reaction network, for the development of redox zones, to a model of subsurface fluid flow and solute transport. Our method (1) solves the numerical system without evoking a redox parameter, (2) improves the numerical stability of redox systems by choosing which compartment and thus which reaction network to use based upon the concentration ratios of key constituents, (3) simulates the development of redox zones as a function of time without the use of inhibition factors or switching functions, and (4) can reduce the number of transport equations that need to be solved in space and time. We show through the use of various model performance evaluation statistics that the appropriate compartment choice under different geochemical conditions leads to numerical solutions without significant error. The compartmentalized approach described here facilitates the next phase of this effort where we couple the redox zone reaction network to models of fluid flow and solute transport.

  14. Towards a global human embryonic stem cell bank.

    PubMed

    Lott, Jason P; Savulescu, Julian

    2007-08-01

    An increasingly unbridgeable gap exists between the supply and demand of transplantable organs. Human embryonic stem cell technology could solve the organ shortage problem by restoring diseased or damaged tissue across a range of common conditions. However, such technology faces several largely ignored immunological challenges in delivering cell lines to large populations. We address some of these challenges and argue in favor of encouraging contribution or intentional creation of embryos from which widely immunocompatible stem cell lines could be derived. Further, we argue that current immunological constraints in tissue transplantation demand the creation of a global stem cell bank, which may hold particular promise for minority populations and other sub-groups currently marginalized from organ procurement and allocation systems. Finally, we conclude by offering a number of practical and ethically oriented recommendations for constructing a human embryonic stem cell bank that we hope will help solve the ongoing organ shortage problem.

  15. An interior-point method-based solver for simulation of aircraft parts riveting

    NASA Astrophysics Data System (ADS)

    Stefanova, Maria; Yakunin, Sergey; Petukhova, Margarita; Lupuleac, Sergey; Kokkolaras, Michael

    2018-05-01

    The particularities of the aircraft parts riveting process simulation necessitate the solution of a large number of contact problems. A primal-dual interior-point method-based solver is proposed for solving such problems efficiently. The proposed method features a worst-case polynomial complexity bound of O(√n log(1/ε)) on the number of iterations, where n is the dimension of the problem and ε is a threshold related to the desired accuracy. In practice, the convergence is often faster than this worst-case bound, which makes the method applicable to large-scale problems. The computational challenge is solving the system of linear equations, because the associated matrix is ill-conditioned. To that end, the authors introduce a preconditioner and a strategy for determining effective initial guesses based on the physics of the problem. Numerical results are compared with ones obtained using the Goldfarb-Idnani algorithm. The results demonstrate the efficiency of the proposed method.

  16. Parallel Dynamics Simulation Using a Krylov-Schwarz Linear Solution Scheme

    DOE PAGES

    Abhyankar, Shrirang; Constantinescu, Emil M.; Smith, Barry F.; ...

    2016-11-07

    Fast dynamics simulation of large-scale power systems is a computational challenge because of the need to solve a large set of stiff, nonlinear differential-algebraic equations at every time step. The main bottleneck in dynamic simulations is the solution of a linear system during each nonlinear iteration of Newton's method. In this paper, we present a parallel Krylov-Schwarz linear solution scheme that uses the Krylov subspace-based iterative linear solver GMRES with an overlapping restricted additive Schwarz preconditioner. Performance tests of the proposed Krylov-Schwarz scheme for several large test cases ranging from 2,000 to 20,000 buses, including a real utility network, show good scalability on different computing architectures.
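
    The solver combination is easy to prototype with SciPy. The sketch below uses a non-overlapping block-Jacobi preconditioner as a zero-overlap stand-in for restricted additive Schwarz, applied to a toy 1-D Laplacian rather than a power-system Jacobian; it illustrates the GMRES-plus-domain-decomposition pattern, not the paper's code:

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      def block_jacobi_M(A, nblocks):
          """Factor the diagonal blocks of A once; applying M^-1 then solves
          each block independently (a Schwarz-like subdomain solve)."""
          n = A.shape[0]
          cuts = np.linspace(0, n, nblocks + 1, dtype=int)
          spans = list(zip(cuts[:-1], cuts[1:]))
          lus = [spla.splu(sp.csc_matrix(A[i:j, i:j])) for i, j in spans]
          def apply(r):
              z = np.empty_like(r)
              for (i, j), lu in zip(spans, lus):
                  z[i:j] = lu.solve(r[i:j])
              return z
          return spla.LinearOperator(A.shape, matvec=apply)

      n = 2000
      A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
      b = np.ones(n)
      M = block_jacobi_M(A, nblocks=8)
      x, info = spla.gmres(A, b, M=M, restart=30, atol=1e-10)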

  17. Parallel Dynamics Simulation Using a Krylov-Schwarz Linear Solution Scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhyankar, Shrirang; Constantinescu, Emil M.; Smith, Barry F.

    Fast dynamics simulation of large-scale power systems is a computational challenge because of the need to solve a large set of stiff, nonlinear differential-algebraic equations at every time step. The main bottleneck in dynamic simulations is the solution of a linear system during each nonlinear iteration of Newton's method. In this paper, we present a parallel Krylov-Schwarz linear solution scheme that uses the Krylov subspace-based iterative linear solver GMRES with an overlapping restricted additive Schwarz preconditioner. Performance tests of the proposed Krylov-Schwarz scheme for several large test cases ranging from 2,000 to 20,000 buses, including a real utility network, show good scalability on different computing architectures.

  18. P1 Nonconforming Finite Element Method for the Solution of Radiation Transport Problems

    NASA Technical Reports Server (NTRS)

    Kang, Kab S.

    2002-01-01

    The simulation of radiation transport in the optically thick flux-limited diffusion regime has been identified as one of the most time-consuming tasks within large simulation codes. Due to multimaterial complex geometry, the radiation transport system must often be solved on unstructured grids. In this paper, we investigate the behavior and the benefits of the unstructured P1 nonconforming finite element method, which has proven to be flexible and effective on related transport problems, in solving unsteady implicit nonlinear radiation diffusion problems using Newton and Picard linearization methods. Key words: nonconforming finite elements, radiation transport, inexact Newton linearization, multigrid preconditioning

  19. A zonally symmetric model for the monsoon-Hadley circulation with stochastic convective forcing

    NASA Astrophysics Data System (ADS)

    De La Chevrotière, Michèle; Khouider, Boualem

    2017-02-01

    Idealized models of reduced complexity are important tools to understand key processes underlying a complex system. In climate science in particular, they are important for helping the community improve our ability to predict the effect of climate change on the earth system. Climate models are large computer codes based on the discretization of the fluid dynamics equations on grids of horizontal resolution in the order of 100 km, whereas unresolved processes are handled by subgrid models. For instance, simple models are routinely used to help understand the interactions between small-scale processes due to atmospheric moist convection and large-scale circulation patterns. Here, a zonally symmetric model for the monsoon circulation is presented and solved numerically. The model is based on the Galerkin projection of the primitive equations of atmospheric synoptic dynamics onto the first modes of vertical structure to represent free tropospheric circulation and is coupled to a bulk atmospheric boundary layer (ABL) model. The model carries bulk equations for water vapor in both the free troposphere and the ABL, while the processes of convection and precipitation are represented through a stochastic model for clouds. The model equations are coupled through advective nonlinearities, and the resulting system is not conservative and not necessarily hyperbolic. This makes the design of a numerical method for the solution of this system particularly difficult. Here, we develop a numerical scheme based on the operator time-splitting strategy, which decomposes the system into three pieces: a conservative part and two purely advective parts, each of which is solved iteratively using an appropriate method. The conservative system is solved via a central scheme, which does not require hyperbolicity since it avoids the Riemann problem by design. One of the advective parts is a hyperbolic diagonal matrix, which is easily handled by classical methods for hyperbolic equations, while the other advective part is a nilpotent matrix, which is solved via the method of lines. Validation tests using a synthetic exact solution are presented, and formal second-order convergence under grid refinement is demonstrated. Moreover, the model is tested under realistic monsoon conditions, and the ability of the model to simulate key features of the monsoon circulation is illustrated in two distinct parameter regimes.

  20. Influence of detector noise and background noise on detection-system

    NASA Astrophysics Data System (ADS)

    Song, Yiheng; Wang, Zhiyong

    2018-02-01

    Studying the noise from detectors and from background light, we find that the background noise influences detection more than the detector noise itself. Therefore, based on a fiber-coupled beam-splitting technique, a small-area detector is used to replace the large-area detector. This achieves a high signal-to-noise ratio (SNR) and reduces the speckle interference of the background light. The technique is expected to resolve the bottleneck between a large field of view and high sensitivity.

  1. Computational strategy for the solution of large strain nonlinear problems using the Wilkins explicit finite-difference approach

    NASA Technical Reports Server (NTRS)

    Hofmann, R.

    1980-01-01

    The STEALTH code system, which solves large strain, nonlinear continuum mechanics problems, was rigorously structured in both overall design and programming standards. The design is based on the theoretical elements of analysis while the programming standards attempt to establish a parallelism between physical theory, programming structure, and documentation. These features have made it easy to maintain, modify, and transport the codes. It has also guaranteed users a high level of quality control and quality assurance.

  2. Comparative Study on a Solving Model and Algorithm for a Flush Air Data Sensing System

    PubMed Central

    Liu, Yanbin; Xiao, Dibo; Lu, Yuping

    2014-01-01

    With the development of high-performance aircraft, precise air data are necessary to complete challenging tasks such as flight maneuvering with large angles of attack and high speed. As a result, the flush air data sensing system (FADS) was developed to satisfy the stricter control demands. In this paper, comparative studies on the solving model and algorithm for FADS are conducted. First, the basic principles of FADS are given to elucidate the nonlinear relations between the inputs and the outputs. Then, several different solving models and algorithms for FADS are provided to compute the air data, including the angle of attack, sideslip angle, dynamic pressure and static pressure. Afterwards, the evaluation criteria of the resulting models and algorithms are discussed to satisfy the real design demands. Furthermore, a simulation using these algorithms is performed to identify the properties of the distinct models and algorithms, such as measuring precision and real-time features. The advantages of these models and algorithms corresponding to the different flight conditions are also analyzed; furthermore, some suggestions on their engineering applications are proposed to help future research. PMID:24859025

  3. Comparative study on a solving model and algorithm for a flush air data sensing system.

    PubMed

    Liu, Yanbin; Xiao, Dibo; Lu, Yuping

    2014-05-23

    With the development of high-performance aircraft, precise air data are necessary to complete challenging tasks such as flight maneuvering with large angles of attack and high speed. As a result, the flush air data sensing system (FADS) was developed to satisfy the stricter control demands. In this paper, comparative studies on the solving model and algorithm for FADS are conducted. First, the basic principles of FADS are given to elucidate the nonlinear relations between the inputs and the outputs. Then, several different solving models and algorithms for FADS are provided to compute the air data, including the angle of attack, sideslip angle, dynamic pressure and static pressure. Afterwards, the evaluation criteria of the resulting models and algorithms are discussed to satisfy the real design demands. Furthermore, a simulation using these algorithms is performed to identify the properties of the distinct models and algorithms, such as measuring precision and real-time features. The advantages of these models and algorithms corresponding to the different flight conditions are also analyzed; furthermore, some suggestions on their engineering applications are proposed to help future research.

  4. Gaussian-windowed frame based method of moments formulation of surface-integral-equation for extended apertures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shlivinski, A., E-mail: amirshli@ee.bgu.ac.il; Lomakin, V., E-mail: vlomakin@eng.ucsd.edu

    2016-03-01

    Scattering or coupling of an electromagnetic beam-field at a surface discontinuity separating two homogeneous or inhomogeneous media with different propagation characteristics is formulated using surface integral equations, which are solved by the Method of Moments with the aid of the Gabor-based Gaussian window frame set of basis and testing functions. The application of the Gaussian window frame provides (i) a mathematically exact and robust tool for spatial-spectral phase-space formulation and analysis of the problem; (ii) a system of linear equations in a transmission-line-like form relating mode-like wave objects of one medium with mode-like wave objects of the second medium; (iii) furthermore, an appropriate setting of the frame parameters yields mode-like wave objects that blend plane wave properties (as if solving in the spectral domain) with Green's function properties (as if solving in the spatial domain); and (iv) a representation of the scattered field with Gaussian-beam propagators that may be used in many large (in terms of wavelengths) systems.

  5. The Convergence of High Performance Computing and Large Scale Data Analytics

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.

  6. Vector form Intrinsic Finite Element Method for the Two-Dimensional Analysis of Marine Risers with Large Deformations

    NASA Astrophysics Data System (ADS)

    Li, Xiaomin; Guo, Xueli; Guo, Haiyan

    2018-06-01

    Robust numerical models that describe the complex behaviors of risers are needed because these constitute dynamically sensitive systems. This paper presents a simple and efficient algorithm for the nonlinear static and dynamic analyses of marine risers. The proposed approach uses the vector form intrinsic finite element (VFIFE) method, which is based on vector mechanics theory and numerical calculation. In this method, the risers are described by a set of particles directly governed by Newton's second law and are connected by weightless elements that can only resist internal forces. The method does not require the integration of the stiffness matrix, nor does it need iterations to solve the governing equations. Due to these advantages, the method can easily add or remove elements and change the boundary conditions, offering an innovative approach to solving nonlinear behaviors such as large deformation and large displacement. To prove the feasibility of the VFIFE method in the analysis of risers, rigid and flexible risers belonging to two different categories of marine risers, which usually differ in modeling and solving methods, are employed in the present study. In the analysis, the plane beam element is adopted in the simulation of interaction forces between the particles, and the axial force, shear force, and bending moment are also considered. The results are compared with the conventional finite element method (FEM) and those reported in the related literature. The findings revealed that both the rigid and flexible risers could be modeled in a similar unified analysis model and that the VFIFE method is feasible for solving problems related to the complex behaviors of marine risers.

  7. Extent of reaction in open systems with multiple heterogeneous reactions

    USGS Publications Warehouse

    Friedly, John C.

    1991-01-01

    The familiar batch concept of extent of reaction is reexamined for systems of reactions occurring in open systems. Because species concentrations change as a result of transport processes as well as reactions in open systems, the extent of reaction has been less useful in practice in these applications. It is shown that by defining the extent of the equivalent batch reaction and a second contribution to the extent of reaction due to the transport processes, it is possible to treat the description of the dynamics of flow through porous media accompanied by many chemical reactions in a uniform, concise manner. This approach tends to isolate the reaction terms among themselves and away from the model partial differential equations, thereby enabling treatment of large problems involving both equilibrium and kinetically controlled reactions. Implications on the number of coupled partial differential equations necessary to be solved and on numerical algorithms for solving such problems are discussed. Examples provided illustrate the theory applied to solute transport in groundwater flow.

  8. GPU computing with Kaczmarz’s and other iterative algorithms for linear systems

    PubMed Central

    Elble, Joseph M.; Sahinidis, Nikolaos V.; Vouzis, Panagiotis

    2009-01-01

    The graphics processing unit (GPU) is used to solve large linear systems derived from partial differential equations. The differential equations studied are strongly convection-dominated, of various sizes, and common to many fields, including computational fluid dynamics, heat transfer, and structural mechanics. The paper presents comparisons between GPU and CPU implementations of several well-known iterative methods, including Kaczmarz’s, Cimmino’s, component averaging, conjugate gradient normal residual (CGNR), symmetric successive overrelaxation-preconditioned conjugate gradient, and conjugate-gradient-accelerated component-averaged row projections (CARP-CG). Computations are performed with dense as well as general banded systems. The results demonstrate that our GPU implementation outperforms CPU implementations of these algorithms, as well as previously studied parallel implementations on Linux clusters and shared memory systems. While the CGNR method had begun to fall out of favor for solving such problems, for the problems studied in this paper, the CGNR method implemented on the GPU performed better than the other methods, including a cluster implementation of the CARP-CG method. PMID:20526446
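
    For reference, the classical Kaczmarz sweep at the core of several of these methods is only a few lines: each step projects the iterate onto the hyperplane defined by one row of the system. A CPU sketch (the GPU versions parallelize and block these projections):

      import numpy as np

      def kaczmarz(A, b, sweeps=100, relax=1.0):
          """Classical Kaczmarz iteration: cycle through the rows, each time
          projecting the iterate onto the hyperplane a_i . x = b_i."""
          A = np.asarray(A, float)
          b = np.asarray(b, float)
          x = np.zeros(A.shape[1])
          row_norm2 = np.einsum('ij,ij->i', A, A)
          for _ in range(sweeps):
              for i in range(A.shape[0]):
                  if row_norm2[i] > 0.0:
                      x += relax*(b[i] - A[i] @ x)/row_norm2[i]*A[i]
          return x

      # usage: recover x_true from a consistent overdetermined system
      rng = np.random.default_rng(0)
      A = rng.standard_normal((300, 100))
      x_true = rng.standard_normal(100)
      x = kaczmarz(A, A @ x_true)
      print(np.linalg.norm(x - x_true))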

  9. Position measurement of the direct drive motor of Large Aperture Telescope

    NASA Astrophysics Data System (ADS)

    Li, Ying; Wang, Daxing

    2010-07-01

    With the development of space science and astronomy, the production of large and very large aperture telescopes will become the trend. Direct drive technology, with a unified electromagnetic and mechanical design, is one method of achieving the precise drive of a large aperture telescope. A direct drive precision rotary table with a diameter of 2.5 meters, researched and produced by us, is a typical mechanical and electrical integration design. This paper mainly introduces the position measurement control system of the direct drive motor. In this motor design, the position measurement control system must have high resolution, must precisely align and measure the position of the rotor shaft, and must convert the position information into the commutation information corresponding to the required number of motor poles. The system uses a high-precision metal band encoder and an absolute encoder; their information is processed in software by a 32-bit RISC CPU, yielding a high-resolution composite encoder. The paper gives relevant laboratory test results at the end, indicating that the position measurement can be applied to large aperture telescope control systems. This project is subsidized by the Chinese National Natural Science Funds (10833004).

  10. Technology and testing.

    PubMed

    Quellmalz, Edys S; Pellegrino, James W

    2009-01-02

    Large-scale testing of educational outcomes already benefits from technological applications that address logistics such as development, administration, and scoring of tests, as well as reporting of results. Innovative applications of technology also provide rich, authentic tasks that challenge the sorts of integrated knowledge, critical thinking, and problem solving seldom well addressed in paper-based tests. Such tasks can be used on both large-scale and classroom-based assessments. Balanced assessment systems can be developed that integrate curriculum-embedded, benchmark, and summative assessments across classroom, district, state, national, and international levels. We discuss here the potential of technology to launch a new era of integrated, learning-centered assessment systems.

  11. System principles, mathematical models and methods to ensure high reliability of safety systems

    NASA Astrophysics Data System (ADS)

    Zaslavskyi, V.

    2017-04-01

    Modern safety and security systems are composed of a large number of various components designed for detection, localization, tracking, collecting, and processing of information from monitoring, telemetry, and control systems, among others. They are required to be highly reliable so as to correctly perform data aggregation, processing, and analysis for subsequent decision making support. During the design and construction phases of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure highly reliable signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the types of components and various constraints on resources, should be considered. Different types of components perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of performing the required tasks and eliminates common cause failures. We consider the type-variety principle as an engineering principle of system analysis, together with mathematical models based on this principle and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models, and algorithms can be used for solving problems of optimal redundancy on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.

  12. A finite element formulation for scattering from electrically large 2-dimensional structures

    NASA Technical Reports Server (NTRS)

    Ross, Daniel C.; Volakis, John L.

    1992-01-01

    A finite element formulation is given using the scattered field approach with a fictitious material absorber to truncate the mesh. The formulation includes the use of arbitrary approximation functions so that more accurate results can be achieved without any modification to the software. Additionally, non-polynomial approximation functions can be used, including complex approximation functions. The banded system that results is solved with an efficient sparse/banded iterative scheme and as a consequence, large structures can be analyzed. Results are given for simple cases to verify the formulation and also for large, complex geometries.

  13. Fourier's law for quasi-one-dimensional chaotic quantum systems

    NASA Astrophysics Data System (ADS)

    Seligman, Thomas H.; Weidenmüller, Hans A.

    2011-05-01

    We derive Fourier's law for a completely coherent quasi-one-dimensional chaotic quantum system coupled locally to two heat baths at different temperatures. We solve the master equation to first order in the temperature difference. We show that the heat conductance can be expressed as a thermodynamic equilibrium coefficient taken at some intermediate temperature. We use that expression to show that for temperatures large compared to the mean level spacing of the system, the heat conductance is inversely proportional to the level density and, thus, inversely proportional to the length of the system.

  14. Service Discovery Oriented Management System Construction Method

    NASA Astrophysics Data System (ADS)

    Li, Huawei; Ren, Ying

    2017-10-01

    To address the lack of a uniform method for designing service quality management systems in large-scale, complex service environments, this paper proposes a distributed, service-discovery-oriented management system construction method. Three measurement functions are proposed to compute nearest-neighbor user similarity at different levels. In view of the low efficiency of current service quality management systems, three solutions are proposed to improve system efficiency. Finally, the key technologies of a distributed service quality management system based on service discovery are summarized through quantitative experiments with factor addition and subtraction.

  15. Example Based Pedagogical Strategies in a Computer Science Intelligent Tutoring System

    ERIC Educational Resources Information Center

    Green, Nicholas

    2017-01-01

    Worked-out examples are a common teaching strategy that aids learners in understanding concepts by use of step-by-step instruction. Literature has shown that they can be extremely beneficial, with a large body of material showing they can provide benefits over regular problem solving alone. This research looks into the viability of using this…

  16. A cooperative strategy for parameter estimation in large scale systems biology models.

    PubMed

    Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R

    2012-06-22

    Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows to make experimentally verifiable predictions. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel in different processors. Each thread implements a state of the art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and allows to speed up performance. Two parameter estimation problems involving models related with the central carbon metabolism of E. coli which include different regulatory levels (metabolic and transcriptional) are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems.
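
    The cooperation mechanism itself is simple to prototype. The sketch below is a deliberately stripped-down stand-in: each thread runs an independent local random search (not the eSS metaheuristic) on a toy cost surface and periodically publishes or adopts the best-known parameter vector; all names and settings here are illustrative:

      import threading
      from multiprocessing.pool import ThreadPool
      import numpy as np

      best = {"x": None, "f": np.inf}          # shared best-known solution
      lock = threading.Lock()

      def cost(x):                             # toy calibration surrogate
          return np.sum(100.0*(x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

      def searcher(seed, iters=20000, share_every=500):
          rng = np.random.default_rng(seed)
          x = rng.uniform(-2.0, 2.0, size=10)
          f = cost(x)
          for k in range(iters):
              cand = x + rng.normal(0.0, 0.1, size=x.size)   # local move
              fc = cost(cand)
              if fc < f:
                  x, f = cand, fc
              if k % share_every == 0:
                  with lock:                   # cooperation: share information
                      if f < best["f"]:
                          best["x"], best["f"] = x.copy(), f
                      else:
                          x, f = best["x"].copy(), best["f"]
          return f

      ThreadPool(4).map(searcher, range(4))
      print(best["f"])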

  17. A cooperative strategy for parameter estimation in large scale systems biology models

    PubMed Central

    2012-01-01

    Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows to make experimentally verifiable predictions. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel in different processors. Each thread implements a state of the art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and allows to speed up performance. Two parameter estimation problems involving models related with the central carbon metabolism of E. coli which include different regulatory levels (metabolic and transcriptional) are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems. PMID:22727112

  18. Interior tomography from differential phase contrast data via Hilbert transform based on spline functions

    NASA Astrophysics Data System (ADS)

    Yang, Qingsong; Cong, Wenxiang; Wang, Ge

    2016-10-01

    X-ray phase contrast imaging is an important mode due to its sensitivity to subtle features of soft biological tissues. Grating-based differential phase contrast (DPC) imaging is one of the most promising phase imaging techniques because it works with a normal x-ray tube of a large focal spot at a high flux rate. However, a main obstacle before this paradigm shift is the fabrication of large-area gratings of a small period and a high aspect ratio. Imaging large objects with a size-limited grating results in data truncation which is a new type of the interior problem. While the interior problem was solved for conventional x-ray CT through analytic extension, compressed sensing and iterative reconstruction, the difficulty for interior reconstruction from DPC data lies in that the implementation of the system matrix requires the differential operation on the detector array, which is often inaccurate and unstable in the case of noisy data. Here, we propose an iterative method based on spline functions. The differential data are first back-projected to the image space. Then, a system matrix is calculated whose components are the Hilbert transforms of the spline bases. The system matrix takes the whole image as an input and outputs the back-projected interior data. Prior information normally assumed for compressed sensing is enforced to iteratively solve this inverse problem. Our results demonstrate that the proposed algorithm can successfully reconstruct an interior region of interest (ROI) from the differential phase data through the ROI.

  19. Direct Solve of Electrically Large Integral Equations for Problem Sizes to 1M Unknowns

    NASA Technical Reports Server (NTRS)

    Shaeffer, John

    2008-01-01

    Matrix methods for solving integral equations via direct solve LU factorization are presently limited to weeks to months of very expensive supercomputer time for problems sizes of several hundred thousand unknowns. This report presents matrix LU factor solutions for electromagnetic scattering problems for problem sizes to one million unknowns with thousands of right hand sides that run in mere days on PC level hardware. This EM solution is accomplished by utilizing the numerical low rank nature of spatially blocked unknowns using the Adaptive Cross Approximation for compressing the rank deficient blocks of the system Z matrix, the L and U factors, the right hand side forcing function and the final current solution. This compressed matrix solution is applied to a frequency domain EM solution of Maxwell's equations using standard Method of Moments approach. Compressed matrix storage and operations count leads to orders of magnitude reduction in memory and run time.
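
    The heart of the compression step, the Adaptive Cross Approximation (ACA), builds a low-rank factorization of an admissible block while sampling only individual rows and columns. A minimal partially pivoted sketch with callback-based entry access; the stopping rule is simplified, and this is not the report's implementation:

      import numpy as np

      def aca(get_row, get_col, m, n, tol=1e-6, max_rank=50):
          """Partially pivoted ACA: approximate an m x n block as U @ V
          while touching only sampled rows/columns of the block."""
          U, V = [], []
          i = 0                                    # first pivot row
          for _ in range(min(max_rank, m, n)):
              r = get_row(i) - sum(u[i]*v for u, v in zip(U, V))
              j = int(np.argmax(np.abs(r)))        # pivot column
              if abs(r[j]) < 1e-14:
                  break
              c = (get_col(j) - sum(v[j]*u for u, v in zip(U, V)))/r[j]
              U.append(c)
              V.append(r)
              # crude stopping rule: new rank-1 term small vs. first term
              if np.linalg.norm(c)*np.linalg.norm(r) < tol*np.linalg.norm(U[0])*np.linalg.norm(V[0]):
                  break
              pick = np.abs(c)
              pick[i] = 0.0
              i = int(np.argmax(pick))             # next pivot row
          return np.array(U).T, np.array(V)

      # usage: a smooth far-field interaction block is numerically low rank
      x = np.linspace(0.0, 1.0, 200)
      y = np.linspace(3.0, 4.0, 160)
      K = 1.0/np.abs(x[:, None] - y[None, :])
      Uf, Vf = aca(lambda i: K[i].copy(), lambda j: K[:, j].copy(), *K.shape)
      print(Uf.shape[1], np.linalg.norm(K - Uf @ Vf)/np.linalg.norm(K))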

  20. The large area crop inventory experiment: An experiment to demonstrate how space-age technology can contribute to solving critical problems here on earth

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The large area crop inventory experiment is being developed to predict crop production through satellite photographs. This experiment demonstrates how space age technology can contribute to solving practical problems of agriculture management.

  1. Life In a large scientific collaboration

    NASA Astrophysics Data System (ADS)

    Pravahan, Rishiraj

    2011-03-01

    I will be talking about life in a large scientific collaboration. The dynamics of dealing with many groups, collaborating with people from various linguistic and cultural origins can be a daunting experience. However, it is exactly this diversity of culture and learning that can make it an invigorating journey. You need to find your place in terms of professional contribution as well as personal liaisons to be productive and innovative in a large work culture. Scientific problems today are not solved by one person hunched over an old notebook. It is solved by sharing computer codes, experimental infrastructure and your questions over coffee with your colleagues. An affinity to take in and impart healthy criticism is a must for productive throughput of work. I will discuss all these aspects as well as issues that may arise from adjusting to a new country, customs, food, transportation or health-care system. The purpose of the talk is to familiarize you with what I have learned through my past five years of stay at CERN and working in the ATLAS collaboration.

  2. FPGA-based distributed computing microarchitecture for complex physical dynamics investigation.

    PubMed

    Borgese, Gianluca; Pace, Calogero; Pantano, Pietro; Bilotta, Eleonora

    2013-09-01

    In this paper, we present a distributed computing system, called DCMARK, aimed at solving the partial differential equations that underlie many fields of investigation, such as solid state physics, nuclear physics, and plasma physics. This distributed architecture is based on the cellular neural network paradigm, which allows us to divide the differential equation system solving into many parallel integration operations to be executed by a custom multiprocessor system. We push the number of processors to the limit of one processor for each equation. In order to test the present idea, we choose to implement DCMARK on a single FPGA, designing the single processor in order to minimize its hardware requirements and to obtain a large number of easily interconnected processors. This approach is particularly suited to study the properties of 1-, 2- and 3-D locally interconnected dynamical systems. In order to test the computing platform, we implement a 200-cell Korteweg-de Vries (KdV) equation solver and perform a comparison between simulations conducted on a high performance PC and on our system. Since our distributed architecture takes a constant computing time to solve the equation system, independently of the number of dynamical elements (cells) of the CNN array, it allows us to reduce the elaboration time more than other similar systems in the literature. To ensure a high level of reconfigurability, we design a compact system on programmable chip managed by a softcore processor, which controls the fast data/control communication between our system and a PC host. An intuitive graphical user interface allows us to change the calculation parameters and plot the results.
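
    The benchmark itself is easy to reproduce on a CPU for comparison. A method-of-lines sketch of the KdV equation u_t = -6*u*u_x - u_xxx on a 200-cell periodic grid, using central differences plus RK4 (our discretization choices, not DCMARK's):

      import numpy as np

      def kdv_rhs(u, dx):
          """Periodic central differences for u_t = -6*u*u_x - u_xxx."""
          ux = (np.roll(u, -1) - np.roll(u, 1))/(2*dx)
          uxxx = (np.roll(u, -2) - 2*np.roll(u, -1)
                  + 2*np.roll(u, 1) - np.roll(u, 2))/(2*dx**3)
          return -6.0*u*ux - uxxx

      def rk4_step(u, dt, dx):
          k1 = kdv_rhs(u, dx)
          k2 = kdv_rhs(u + 0.5*dt*k1, dx)
          k3 = kdv_rhs(u + 0.5*dt*k2, dx)
          k4 = kdv_rhs(u + dt*k3, dx)
          return u + dt/6.0*(k1 + 2*k2 + 2*k3 + k4)

      # usage: propagate a one-soliton solution on 200 cells
      N, L, c = 200, 40.0, 1.0
      x = np.linspace(0.0, L, N, endpoint=False)
      dx = L/N
      u = 0.5*c/np.cosh(0.5*np.sqrt(c)*(x - L/4))**2
      dt = 0.2*dx**3                 # explicit stability requires dt ~ dx^3
      for _ in range(int(1.0/dt)):
          u = rk4_step(u, dt, dx)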

  3. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction.

    PubMed

    Yang, C L; Wei, H Y; Adler, A; Soleimani, M

    2013-06-01

    Electrical impedance tomography (EIT) is a fast and cost-effective technique that provides a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurement data can produce a large Jacobian matrix; this can cause difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining the image quality. Firstly, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. By adjusting the Jacobian matrix into a sparse format, the zero elements are eliminated, which saves memory. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with a block-wise CG enables the large scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on the reconstruction results.
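
    Both ideas, thresholding the Jacobian into sparse storage and then solving the least-squares step iteratively, can be sketched in a few lines. Here SciPy's LSQR stands in for the paper's block-wise CG reconstruction, and the relative cutoff is an illustrative choice:

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import lsqr

      def sparsify_and_solve(J, r, cutoff=1e-4):
          """Drop near-zero Jacobian entries (relative threshold), store the
          result in sparse format, and solve min ||J d - r|| iteratively."""
          Js = J.copy()
          Js[np.abs(Js) < cutoff*np.abs(J).max()] = 0.0
          Js = sp.csr_matrix(Js)              # sparse storage saves memory
          d = lsqr(Js, r, atol=1e-10, btol=1e-10)[0]
          return d, Js.nnz/J.size

      # usage: a sensitivity matrix with few significant entries per row
      rng = np.random.default_rng(1)
      J = (rng.standard_normal((1024, 4096))*(rng.random((1024, 4096)) < 0.02)
           + 1e-6*rng.standard_normal((1024, 4096)))
      d, fill = sparsify_and_solve(J, rng.standard_normal(1024))
      print(f"kept {fill:.1%} of entries")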

  4. Data Structures for Extreme Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahan, Simon

    As computing problems of national importance grow, the government meets the increased demand by funding the development of ever larger systems. The overarching goal of the work supported in part by this grant is to increase the efficiency of programming and performing computations on these large computing systems. In past work, we have demonstrated that some of these computations, once thought to require expensive hardware designs and/or complex, special-purpose programming, may be executed efficiently on low-cost commodity cluster computing systems using a general-purpose "latency-tolerant" programming framework. One important application developed from the ideas underlying this framework is graph database technology supporting social network pattern matching, used by US intelligence agencies to more quickly identify potential terrorist threats. This database application has been spun out of the Pacific Northwest National Laboratory, a Department of Energy laboratory, into a commercial start-up, Trovares Inc. We explore an alternative application of the same underlying ideas to a well-studied challenge arising in engineering: solving unstructured sparse linear equations. Solving these equations is key to predicting the behavior of large electronic circuits before they are fabricated. Predicting that behavior ahead of fabrication means that designs can be optimized and errors corrected ahead of the expense of manufacture.

  5. Data mining for water resource management part 2 - methods and approaches to solving contemporary problems

    USGS Publications Warehouse

    Roehl, Edwin A.; Conrads, Paul

    2010-01-01

    This is the second of two papers that describe how data mining can aid natural-resource managers with the difficult problem of controlling the interactions between hydrologic and man-made systems. Data mining is a new science that assists scientists in converting large databases into knowledge, and is uniquely able to leverage the large amounts of real-time, multivariate data now being collected for hydrologic systems. Part 1 gives a high-level overview of data mining, and describes several applications that have addressed major water resource issues in South Carolina. This Part 2 paper describes how various data mining methods are integrated to produce predictive models for controlling surface- and groundwater hydraulics and quality. The methods include: signal processing to remove noise and decompose complex signals into simpler components; time series clustering that optimally groups hundreds of signals into "classes" that behave similarly, for data reduction and (or) divide-and-conquer problem solving; classification, which optimally matches new data to behavioral classes; artificial neural networks, which optimally fit multivariate data to create predictive models; model response surface visualization, which greatly aids in understanding data and physical processes; and decision support systems that integrate data, models, and graphics into a single package that is easy to use.

  6. Superconducting Magnet Shielding of Astronauts from Cosmic Rays

    NASA Astrophysics Data System (ADS)

    Fisher, Peter; Hoffman, Jeffrey; Zhou, Feng; Batishchev, Oleg

    2004-11-01

    Protecting astronauts traveling outside the Earth's protective magnetic field from cosmic and solar radiation [1] is one of the critical problems that must be solved in order to realize the nation's new human space exploration vision. Superconducting magnets, such as those under construction for the ATLAS experiment [2] at CERN, have achieved sufficient size to be able to surround a reasonable habitable volume, and their field strength is high enough to deflect a significant portion of the incoming radiation. We have undertaken a research effort aimed at developing an accurate numerical model of a crew compartment surrounded by a large magnetic field, with which we can calculate the effect on incoming charged particles. We will use this model to optimize the magnetic configuration to produce the maximum shielding effect while minimizing the mass of the superconducting magnet system. We are also investigating some of the practical problems that must be solved if large, superconducting magnet systems are to be incorporated into human space systems. We will present preliminary results of our modeling, showing the reduction of radiation exposure as a function of energy and atomic species. [1] Review of Particle Physics, Ed. Particle Data Group, Phys. Lett. B, 1-4 (592) 1-1109, 2004 [2] http://atlasexperiment.org/

  7. Using Coaching to Improve the Teaching of Problem Solving to Year 8 Students in Mathematics

    ERIC Educational Resources Information Center

    Kargas, Christine Anestis; Stephens, Max

    2014-01-01

    This study investigated how to improve the teaching of problem solving in a large Melbourne secondary school. Coaching was used to support and equip five teachers, some with limited experiences in teaching problem solving, with knowledge and strategies to build up students' problem solving and reasoning skills. The results showed increased…

  8. Black start research of the wind and storage system based on the dual master-slave control

    NASA Astrophysics Data System (ADS)

    Leng, Xue; Shen, Li; Hu, Tian; Liu, Li

    2018-02-01

    Black start is key to solving the problem of large-scale power failures, and the introduction of renewable clean energy as a black-start power source is a new research focus. Based on the dual master-slave control strategy, the combined wind and storage system is taken as a reliable black-start power source, with energy storage and wind power working together to ensure the stability of the microgrid system and realize the black start. The aims are to obtain the capacity ratio of the storage in the small system based on the dual master-slave control strategy, to determine the black-start constraint conditions of the combined wind and storage system, and to identify the key points of black start for such a system, providing reference and guidance for subsequent large-scale wind and storage black-start projects.

  9. Security System Software

    NASA Technical Reports Server (NTRS)

    1993-01-01

    C Language Integrated Production System (CLIPS), a NASA-developed expert systems program, has enabled a security systems manufacturer to design a new generation of hardware. C.CURESystem 1 Plus, manufactured by Software House, is a software-based system that is used with a variety of access control hardware at installations around the world. Users can manage large amounts of information, solve unique security problems and control entry and time scheduling. CLIPS acts as an information management tool when accessed by C.CURESystem 1 Plus. It asks questions about the hardware and, when given the answer, recommends possible quick solutions achievable by non-expert persons.

  10. SUBOPT: A CAD program for suboptimal linear regulators

    NASA Technical Reports Server (NTRS)

    Fleming, P. J.

    1985-01-01

    An interactive software package which provides design solutions for both standard linear quadratic regulator (LQR) and suboptimal linear regulator problems is described. Intended for time-invariant continuous systems, the package is easily modified to include sampled-data systems. LQR designs are obtained by established techniques while the large class of suboptimal problems containing controller and/or performance index options is solved using a robust gradient minimization technique. Numerical examples demonstrate features of the package and recent developments are described.
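
    For context, the standard LQR design that such packages automate reduces to solving a continuous-time algebraic Riccati equation. A minimal sketch with SciPy (the system matrices here are illustrative placeholders, not taken from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative time-invariant continuous system: x' = Ax + Bu
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state weighting in J = integral(x'Qx + u'Ru) dt
R = np.array([[1.0]])  # control weighting

# Solve A'P + PA - P B R^-1 B' P + Q = 0 for P, then form the gain
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # optimal state feedback u = -Kx
print("LQR gain K =", K)
```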

  11. Toward an optimal solver for time-spectral fluid-dynamic and aeroelastic solutions on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Mundis, Nathan L.; Mavriplis, Dimitri J.

    2017-09-01

    The time-spectral method applied to the Euler and coupled aeroelastic equations theoretically offers significant computational savings for purely periodic problems when compared to standard time-implicit methods. However, attaining superior efficiency with time-spectral methods over traditional time-implicit methods hinges on the ability rapidly to solve the large non-linear system resulting from time-spectral discretizations which become larger and stiffer as more time instances are employed or the period of the flow becomes especially short (i.e. the maximum resolvable wave-number increases). In order to increase the efficiency of these solvers, and to improve robustness, particularly for large numbers of time instances, the Generalized Minimal Residual Method (GMRES) is used to solve the implicit linear system over all coupled time instances. The use of GMRES as the linear solver makes time-spectral methods more robust, allows them to be applied to a far greater subset of time-accurate problems, including those with a broad range of harmonic content, and vastly improves the efficiency of time-spectral methods. In previous work, a wave-number independent preconditioner that mitigates the increased stiffness of the time-spectral method when applied to problems with large resolvable wave numbers has been developed. This preconditioner, however, directly inverts a large matrix whose size increases in proportion to the number of time instances. As a result, the computational time of this method scales as the cube of the number of time instances. In the present work, this preconditioner has been reworked to take advantage of an approximate-factorization approach that effectively decouples the spatial and temporal systems. Once decoupled, the time-spectral matrix can be inverted in frequency space, where it has entries only on the main diagonal and therefore can be inverted quite efficiently. This new GMRES/preconditioner combination is shown to be over an order of magnitude more efficient than the previous wave-number independent preconditioner for problems with large numbers of time instances and/or large reduced frequencies.
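
    The property that the reworked preconditioner exploits, namely that the time-spectral derivative operator becomes diagonal in frequency space, can be demonstrated on a scalar model problem. A minimal sketch (the shift a and sample count are illustrative, not values from the paper): solving (D_t + aI)u = b for periodic samples by dividing by ik*omega + a in Fourier space instead of inverting a dense matrix.

```python
import numpy as np

N, omega, a = 9, 2.0 * np.pi, 3.0    # odd number of time instances, base frequency, shift
k = np.fft.fftfreq(N, d=1.0 / N)     # integer temporal wave numbers

def apply_Dt(u):
    """Time-spectral (exact spectral) time derivative of periodic samples u."""
    return np.real(np.fft.ifft(1j * k * omega * np.fft.fft(u)))

# Invert (D_t + a I) directly in frequency space, where it is diagonal
b = np.random.rand(N)
u = np.real(np.fft.ifft(np.fft.fft(b) / (1j * k * omega + a)))
print(np.allclose(apply_Dt(u) + a * u, b))   # True
```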

  12. Multi-GPU Accelerated Admittance Method for High-Resolution Human Exposure Evaluation.

    PubMed

    Xiong, Zubiao; Feng, Shi; Kautz, Richard; Chandra, Sandeep; Altunyurt, Nevin; Chen, Ji

    2015-12-01

    A multi-graphics processing unit (GPU) accelerated admittance method solver is presented for solving the induced electric field in high-resolution anatomical models of the human body when exposed to external low-frequency magnetic fields. In the solver, the anatomical model is discretized as a three-dimensional network of admittances. The conjugate orthogonal conjugate gradient (COCG) iterative algorithm is employed to take advantage of the symmetric property of the complex-valued linear system of equations. Compared against the widely used biconjugate gradient stabilized method, the COCG algorithm reduces the solving time by a factor of 3.5 and the storage requirement by about 40%. The iterative algorithm is then accelerated further by using multiple NVIDIA GPUs. The computations and data transfers between GPUs are overlapped in time by using an asynchronous concurrent execution design. The communication overhead is well hidden so that the acceleration is nearly linear with the number of GPU cards. Numerical examples show that our GPU implementation running on four NVIDIA Tesla K20c cards can be 90 times faster than the CPU implementation running on eight CPU cores (two Intel Xeon E5-2603 processors). The implemented solver is able to solve large dimensional problems efficiently. A whole adult body discretized at 1-mm resolution can be solved in just several minutes. The high efficiency achieved makes it practical to investigate human exposure involving a large number of cases with a high resolution that meets the requirements of international dosimetry guidelines.
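
    The COCG iteration mentioned above is ordinary CG with the conjugated inner product replaced by the unconjugated bilinear form x^T y, which is what exploits complex symmetry (A = A^T without being Hermitian). A minimal dense sketch of the algorithm, not the paper's multi-GPU implementation:

```python
import numpy as np

def cocg(A, b, tol=1e-10, maxiter=1000):
    """Conjugate orthogonal conjugate gradient for complex symmetric A (A == A.T)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rho = r @ r                        # unconjugated product r^T r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rho / (p @ Ap)         # p^T A p, again without conjugation
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho_new = r @ r
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x

n = 50
M = np.random.rand(n, n) + 1j * np.random.rand(n, n)
A = M + M.T + 2 * n * np.eye(n)        # complex symmetric, strongly diagonally dominant
b = np.random.rand(n) + 1j * np.random.rand(n)
print(np.linalg.norm(A @ cocg(A, b) - b))   # tiny residual
```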

  13. Parallelized traveling cluster approximation to study numerically spin-fermion models on large lattices

    NASA Astrophysics Data System (ADS)

    Mukherjee, Anamitra; Patel, Niravkumar D.; Bishop, Chris; Dagotto, Elbio

    2015-06-01

    Lattice spin-fermion models are important to study correlated systems where quantum dynamics allows for a separation between slow and fast degrees of freedom. The fast degrees of freedom are treated quantum mechanically while the slow variables, generically referred to as the "spins," are treated classically. At present, exact diagonalization coupled with classical Monte Carlo (ED + MC) is extensively used to solve numerically a general class of lattice spin-fermion problems. In this common setup, the classical variables (spins) are treated via the standard MC method while the fermion problem is solved by exact diagonalization. The "traveling cluster approximation" (TCA) is a real space variant of the ED + MC method that allows one to solve spin-fermion problems on lattices with up to 10^3 sites. In this publication, we present a novel reorganization of the TCA algorithm in a manner that can be efficiently parallelized. This allows us to solve generic spin-fermion models easily on 10^4 lattice sites and, with some effort, on 10^5 lattice sites, representing the record lattice sizes studied for this family of models.

  14. Crowdsourcing for bioinformatics

    PubMed Central

    Good, Benjamin M.; Su, Andrew I.

    2013-01-01

    Motivation: Bioinformatics is faced with a variety of problems that require human involvement. Tasks like genome annotation, image analysis, knowledge-base population and protein structure determination all benefit from human input. In some cases, people are needed in vast quantities, whereas in others, we need just a few with rare abilities. Crowdsourcing encompasses an emerging collection of approaches for harnessing such distributed human intelligence. Recently, the bioinformatics community has begun to apply crowdsourcing in a variety of contexts, yet few resources are available that describe how these human-powered systems work and how to use them effectively in scientific domains. Results: Here, we provide a framework for understanding and applying several different types of crowdsourcing. The framework considers two broad classes: systems for solving large-volume ‘microtasks’ and systems for solving high-difficulty ‘megatasks’. Within these classes, we discuss system types, including volunteer labor, games with a purpose, microtask markets and open innovation contests. We illustrate each system type with successful examples in bioinformatics and conclude with a guide for matching problems to crowdsourcing solutions that highlights the positives and negatives of different approaches. Contact: bgood@scripps.edu PMID:23782614

  15. Human-Assisted Machine Information Exploitation: a crowdsourced investigation of information-based problem solving

    NASA Astrophysics Data System (ADS)

    Kase, Sue E.; Vanni, Michelle; Caylor, Justine; Hoye, Jeff

    2017-05-01

    The Human-Assisted Machine Information Exploitation (HAMIE) investigation utilizes large-scale online data collection for developing models of information-based problem solving (IBPS) behavior in a simulated time-critical operational environment. These types of environments are characteristic of intelligence workflow processes conducted during human-geo-political unrest situations when the ability to make the best decision at the right time ensures strategic overmatch. The project takes a systems approach to Human Information Interaction (HII) by harnessing the expertise of crowds to model the interaction of the information consumer and the information required to solve a problem at different levels of system restrictiveness and decisional guidance. The design variables derived from Decision Support Systems (DSS) research represent the experimental conditions in this online single-player against-the-clock game where the player, acting in the role of an intelligence analyst, is tasked with a Commander's Critical Information Requirement (CCIR) in an information overload scenario. The player performs a sequence of three information processing tasks (annotation, relation identification, and link diagram formation) with the assistance of 'HAMIE the robot', who offers varying levels of information understanding dependent on question complexity. We provide preliminary results from a pilot study conducted with Amazon Mechanical Turk (AMT) participants on the Volunteer Science scientific research platform.

  16. Demonstration of quantum advantage in machine learning

    NASA Astrophysics Data System (ADS)

    Ristè, Diego; da Silva, Marcus P.; Ryan, Colm A.; Cross, Andrew W.; Córcoles, Antonio D.; Smolin, John A.; Gambetta, Jay M.; Chow, Jerry M.; Johnson, Blake R.

    2017-04-01

    The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, limiting the obtained advantage. Here we solve an oracle-based problem, known as learning parity with noise, on a five-qubit superconducting processor. Executing classical and quantum algorithms using the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a significant quantum advantage already emerges in existing noisy systems.

  17. Electroneutral models for dynamic Poisson-Nernst-Planck systems

    NASA Astrophysics Data System (ADS)

    Song, Zilong; Cao, Xiulei; Huang, Huaxiong

    2018-01-01

    The Poisson-Nernst-Planck (PNP) system is a standard model for describing ion transport. In many applications, e.g., ions in biological tissues, the presence of thin boundary layers poses both modeling and computational challenges. In this paper, we derive simplified electroneutral (EN) models where the thin boundary layers are replaced by effective boundary conditions. There are two major advantages of EN models. First, it is much cheaper to solve them numerically. Second, EN models are easier to deal with compared to the original PNP system; therefore, it would also be easier to derive macroscopic models for cellular structures using EN models. Even though the approach used here is applicable to higher-dimensional cases, this paper mainly focuses on the one-dimensional system, including the general multi-ion case. Using systematic asymptotic analysis, we derive a variety of effective boundary conditions directly applicable to the EN system for the bulk region. This EN system can be solved directly and efficiently without computing the solution in the boundary layer. The derivation is based on matched asymptotics, and the key idea is to bring back higher-order contributions into the effective boundary conditions. For Dirichlet boundary conditions, the higher-order terms can be neglected and the classical results (continuity of electrochemical potential) are recovered. For flux boundary conditions, higher-order terms account for the accumulation of ions in the boundary layer, and neglecting them leads to physically incorrect solutions. To validate the EN model, numerical computations are carried out for several examples. Our results show that solving the EN model is much more efficient than solving the original PNP system. Implemented with the Hodgkin-Huxley model, the computational time for solving the EN model is significantly reduced without sacrificing the accuracy of the solution, because the model allows for relatively large mesh and time-step sizes.

  18. Robust penalty method for structural synthesis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1983-01-01

    The Sequential Unconstrained Minimization Technique (SUMT) offers an easy way of solving nonlinearly constrained problems. However, this algorithm frequently suffers from the need to minimize an ill-conditioned penalty function. An ill-conditioned minimization problem can be solved very effectively by posing the problem as one of integrating a system of stiff differential equations utilizing concepts from singular perturbation theory. This paper evaluates the robustness and the reliability of such a singular perturbation based SUMT algorithm on two different problems of structural optimization of widely separated scales. The report concludes that whereas conventional SUMT can be bogged down by frequent ill-conditioning, especially in large scale problems, the singular perturbation SUMT has no such difficulty in converging to very accurate solutions.
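
    For contrast with the singular-perturbation variant, the conventional SUMT baseline can be sketched in a few lines: minimize a penalized objective repeatedly while increasing the penalty parameter, which is exactly where the ill-conditioning arises. A toy equality-constrained example (problem and parameter schedule are illustrative, not from the report):

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize f(x) = x0^2 + x1^2  subject to  g(x) = x0 + x1 - 1 = 0
f = lambda x: x[0]**2 + x[1]**2
g = lambda x: x[0] + x[1] - 1.0

x = np.array([5.0, -3.0])
for r in [1.0, 1e1, 1e2, 1e3, 1e4]:
    # The penalty function's conditioning degrades as r grows,
    # which is the ill-conditioning conventional SUMT suffers from.
    phi = lambda x, r=r: f(x) + r * g(x)**2
    x = minimize(phi, x, method="BFGS").x
print(x)   # approaches the constrained optimum (0.5, 0.5)
```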

  19. Computation of Pressurized Gas Bearings Using CE/SE Method

    NASA Technical Reports Server (NTRS)

    Cioc, Sorin; Dimofte, Florin; Keith, Theo G., Jr.; Fleming, David P.

    2003-01-01

    The space-time conservation element and solution element (CE/SE) method is extended to compute compressible viscous flows in pressurized thin fluid films. This numerical scheme has previously been used successfully to solve a wide variety of compressible flow problems, including flows with large and small discontinuities. In this paper, the method is applied to calculate the pressure distribution in a hybrid gas journal bearing. The formulation of the problem is presented, including the modeling of the feeding system. The numerical results obtained are compared with experimental data. Good agreement between the computed results and the test data was obtained, thus validating the CE/SE method for such problems.

  20. A Roadmap of Innovative Nuclear Energy System

    NASA Astrophysics Data System (ADS)

    Sekimoto, Hiroshi

    2017-01-01

    Nuclear is a dense energy without CO2 emission. It can be used for more than 100,000 years using fast breeder reactors with uranium from the sea. However, it raises difficult problems associated with severe accidents, spent fuel waste and nuclear threats, which should be solved with acceptable costs. Some innovative reactors have attracted interest, and many designs have been proposed for small reactors. These reactors are considered much safer than conventional large reactors and have fewer technical obstructions. Breed-and-burn reactors have high potential to solve all inherent problems for peaceful use of nuclear energy. However, they have some technical problems with materials. A roadmap for innovative reactors is presented herein.

  1. The evolution and practical application of machine translation system (1)

    NASA Astrophysics Data System (ADS)

    Tominaga, Isao; Sato, Masayuki

    This paper describes the development, practical application, system problems, evaluation of practical systems, and development trends of machine translation. Most recent systems face four problems: 1) the ambiguity of a text, 2) differences in the definition of terminology between languages, 3) the preparation of a large-scale translation dictionary, and 4) the development of software for logical inference. Machine translation systems are already used practically in many industrial fields. However, many problems remain unsolved; the implementation of an ideal system is expected to take another 15 years. The paper also describes seven evaluation items in detail. This English abstract was made by the Mu system.

  2. The quest for solvable multistate Landau-Zener models

    DOE PAGES

    Sinitsyn, Nikolai A.; Chernyak, Vladimir Y.

    2017-05-24

    Recently, integrability conditions (ICs) in multistate Landau-Zener (MLZ) theory were proposed. They describe common properties of all known solved systems with linearly time-dependent Hamiltonians. Here we show that ICs enable an efficient computer-assisted search for new solvable MLZ models that span a complexity range from several interacting states to mesoscopic systems with many-body dynamics and combinatorially large phase space. This diversity suggests that nontrivial solvable MLZ models are numerous. Additionally, we refine the formulation of the ICs and extend the class of solvable systems to models with points of multiple diabatic level crossing.

  3. System of HPC content archiving

    NASA Astrophysics Data System (ADS)

    Bogdanov, A.; Ivashchenko, A.

    2017-12-01

    This work aims to develop a system that effectively solves the problem of storing and analyzing files containing text data, using modern software development tools, techniques, and approaches. The main challenge, defined at the problem formulation stage, is to store a large number of text documents while providing functionality such as full-text search and clustering of documents by their contents. The main system features can be described as a distributed multilevel architecture with flexible and interchangeable components, achieved by encapsulating standard functionality in independent executable modules.

  4. What Is the Real Efficiency of Bulbs?

    ERIC Educational Resources Information Center

    Polacek, Lubos

    2012-01-01

    Bulbs are considered to be very inefficient sources of light: they give both light and heat. When they are used for long periods, especially in winter, a large part of the heat they produce lowers the power consumption of the heating system. In this paper the problem of the real efficiency of a bulb is solved when both the lighting and heating effects…

  5. Therapeutic communities, old and new.

    PubMed

    Jones, M

    1979-01-01

    The author attempts to clarify two largely different uses of the term Therapeutic Community (TC). By "old" TC he describes a movement which originated in psychiatry in the United Kingdom at the end of World War II. This was an attempt to establish a democratic system in hospitals where the domination of the doctors was replaced by open communication of content and feeling, information sharing, shared decision making, and problem solving shared as far as possible with all patients and staff. Daily meetings of all patients and staff formed the nucleus of this process. In recent years developments in the areas of systems theory, learning theory, and organization development have contributed to a better understanding of social organization and change. The "new" TCs derive from the more recent developments in the treatment of substance abuse. Central to this movement is Synanon and its many modifications, which use the clients' peer group to solve their own problems, largely eliminating mental health professionals. Linked with these "new" TCs is the development of Asklepieion units in prisons, which use Synanon "games" along with transactional analysis. An attempt is made to distinguish the methodologies used in TCs, "old" and "new".

  6. Baculovirus-mediated expression of GPCRs in insect cells.

    PubMed

    Saarenpää, Tuulia; Jaakola, Veli-Pekka; Goldman, Adrian

    2015-01-01

    G-protein-coupled receptors (GPCRs) are a large family of seven-transmembrane proteins that influence a considerable number of cellular events. For this reason, they are one of the most studied receptor types for their pharmacological and structural properties. Solving the structures of several GPCR types has been possible using almost all expression systems, including Escherichia coli, yeast, mammalian, and insect cells. So far, however, most GPCR structures have been solved using the baculovirus insect cell expression system. The reason for this is mainly cost-effectiveness, posttranslational modification efficiency, and overall effortless maintenance. The system has evolved so much that variables ranging from vector type, purification tags, and cell line to growth conditions can be varied and optimized in countless ways to suit the needs of new constructs. Here, we present the array of techniques that enable the rapid and efficient optimization of expression steps for maximal protein quality and quantity, including our emendations. © 2015 Elsevier Inc. All rights reserved.

  7. Exploring the potential energy landscape over a large parameter-space

    NASA Astrophysics Data System (ADS)

    He, Yang-Hui; Mehta, Dhagash; Niemerg, Matthew; Rummel, Markus; Valeanu, Alexandru

    2013-07-01

    Solving large polynomial systems with coefficient parameters is ubiquitous and constitutes an important class of problems. We demonstrate the computational power of two methods — a symbolic one called the Comprehensive Gröbner basis and a numerical one called coefficient-parameter polynomial continuation — applied to studying both potential energy landscapes and a variety of questions arising from geometry and phenomenology. Particular attention is paid to an example in flux compactification where important physical quantities such as the gravitino and moduli masses and the string coupling can be efficiently extracted.
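
    As a flavor of the symbolic side, a Gröbner basis turns the critical-point equations of a small polynomial potential into a triangular system that can be back-solved. A minimal SymPy sketch (a toy potential with fixed coefficients; the Comprehensive Gröbner basis needed for truly symbolic parameters goes beyond this):

```python
from sympy import symbols, groebner, solve, diff

x, y = symbols("x y")
V = x**4 + y**4 - 2 * x**2 * y - x     # toy potential energy function
eqs = [diff(V, x), diff(V, y)]         # critical points satisfy grad V = 0

G = groebner(eqs, x, y, order="lex")   # lex order gives an eliminated, triangular form
print(list(G))
print(solve(eqs, [x, y], dict=True))   # the critical points of V
```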

  8. Multidisciplinary optimization of a controlled space structure using 150 design variables

    NASA Technical Reports Server (NTRS)

    James, Benjamin B.

    1993-01-01

    A controls-structures interaction design method is presented. The method coordinates standard finite-element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structure and control system of a spacecraft. Global sensitivity equations are used to account for coupling between the disciplines. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Design problems using 15, 63, and 150 design variables to optimize truss member sizes and feedback gain values are solved and the results are presented. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporation of the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables.

  9. A model for cell migration in non-isotropic fibrin networks with an application to pancreatic tumor islets.

    PubMed

    Chen, Jiao; Weihs, Daphne; Vermolen, Fred J

    2018-04-01

    Cell migration, known as an orchestrated movement of cells, is crucially important for wound healing, tumor growth, immune response as well as other biomedical processes. This paper presents a cell-based model to describe cell migration in non-isotropic fibrin networks around pancreatic tumor islets. This migration is determined by the mechanical strain energy density as well as cytokine-driven chemotaxis. Cell displacement is modeled by solving a large system of ordinary stochastic differential equations where the stochastic parts result from random walk. The stochastic differential equations are solved by the use of the classical Euler-Maruyama method. In this paper, the influence of the anisotropic stromal extracellular matrix in pancreatic tumor islets on T-lymphocyte migration in different immune systems is investigated. As a result, the tumor peripheral stromal extracellular matrix impedes the immune response of T-lymphocytes by changing the direction of their migration.
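
    The Euler-Maruyama update used for the displacement equations has a simple generic form: advance the drift with a deterministic Euler step and add a Gaussian increment scaled by sqrt(dt). A minimal scalar sketch (illustrative drift and noise terms, not the paper's cell model):

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, T, n_steps, rng):
    """Integrate dX = mu(X) dt + sigma(X) dW on [0, T] with Euler-Maruyama."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))   # Brownian increment ~ N(0, dt)
        x[k + 1] = x[k] + mu(x[k]) * dt + sigma(x[k]) * dW
    return x

rng = np.random.default_rng(0)
# Mean-reverting toy dynamics (Ornstein-Uhlenbeck) with constant noise
path = euler_maruyama(lambda x: -0.5 * x, lambda x: 0.3, x0=1.0, T=10.0,
                      n_steps=1000, rng=rng)
print(path[-1])
```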

  10. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE PAGES

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...

    2017-09-21

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  12. Reconstructing high-dimensional two-photon entangled states via compressive sensing

    PubMed Central

    Tonolini, Francesco; Chan, Susan; Agnew, Megan; Lindsay, Alan; Leach, Jonathan

    2014-01-01

    Accurately establishing the state of large-scale quantum systems is an important tool in quantum information science; however, the large number of unknown parameters hinders the rapid characterisation of such states, and reconstruction procedures can become prohibitively time-consuming. Compressive sensing, a procedure for solving inverse problems by incorporating prior knowledge about the form of the solution, provides an attractive alternative to the problem of high-dimensional quantum state characterisation. Using a modified version of compressive sensing that incorporates the principles of singular value thresholding, we reconstruct the density matrix of a high-dimensional two-photon entangled system. The dimension of each photon is equal to d = 17, corresponding to a system of 83521 unknown real parameters. Accurate reconstruction is achieved with approximately 2500 measurements, only 3% of the total number of unknown parameters in the state. The algorithm we develop is fast, computationally inexpensive, and applicable to a wide range of quantum states, thus demonstrating compressive sensing as an effective technique for measuring the state of large-scale quantum systems. PMID:25306850
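
    The singular value thresholding idea at the heart of the reconstruction generalizes beyond quantum states: iterate between shrinking the singular values of the current estimate (promoting low rank) and restoring agreement with the measured entries. A minimal matrix-completion sketch on synthetic data (parameters and problem are illustrative, not the authors' density-matrix algorithm):

```python
import numpy as np

def svt_complete(M_obs, mask, tau=5.0, step=1.0, n_iter=500):
    """Recover a low-rank matrix from observed entries via singular value thresholding."""
    Y = np.zeros_like(M_obs)
    X = Y
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt     # soft-threshold singular values
        Y = Y + step * mask * (M_obs - X)           # re-enforce the measured entries
    return X

rng = np.random.default_rng(1)
truth = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 30))   # rank-3 ground truth
mask = rng.random(truth.shape) < 0.5                          # observe about half the entries
X = svt_complete(truth * mask, mask)
print(np.linalg.norm(X - truth) / np.linalg.norm(truth))      # small relative error
```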

  13. GA-optimization for rapid prototype system demonstration

    NASA Technical Reports Server (NTRS)

    Kim, Jinwoo; Zeigler, Bernard P.

    1994-01-01

    An application of the Genetic Algorithm (GA) is discussed. A novel hierarchical GA scheme was developed to solve complicated engineering problems which require optimization of a large number of parameters with high precision. High-level GAs search for the few parameters that are most sensitive to system performance. Low-level GAs search in more detail, employing a greater number of parameters for further optimization. Therefore, the complexity of the search is decreased and the computing resources are used more efficiently.

  14. Simple waves in a two-component Bose-Einstein condensate

    NASA Astrophysics Data System (ADS)

    Ivanov, S. K.; Kamchatnov, A. M.

    2018-04-01

    We study the dynamics of so-called simple waves in a two-component Bose-Einstein condensate. The evolution of the condensate is described by Gross-Pitaevskii equations which can be reduced for these simple wave solutions to a system of ordinary differential equations which coincide with those derived by Ovsyannikov for the two-layer fluid dynamics. We solve the Ovsyannikov system for two typical situations of large and small difference between interspecies and intraspecies nonlinear interaction constants. Our analytic results are confirmed by numerical simulations.

  15. A mixed finite-element method for solving the poroelastic Biot equations with electrokinetic coupling

    NASA Astrophysics Data System (ADS)

    Pain, C. C.; Saunders, J. H.; Worthington, M. H.; Singer, J. M.; Stuart-Bruges, W.; Mason, G.; Goddard, A.

    2005-02-01

    In this paper, a numerical method for solving the Biot poroelastic equations is developed. These equations comprise acoustic (typically water) and elastic (porous medium frame) equations, which are coupled mainly through fluid/solid drag terms. This wave solution is coupled to a simplified form of Maxwell's equations, which is solved for the streaming potential resulting from electrokinesis. The ultimate aim is to use the generated electrical signals to provide porosity, permeability and other information about the formation surrounding a borehole. The electrical signals are generated through electrokinesis by seismic waves causing movement of the fluid through pores or fractures of a porous medium. The focus of this paper is the numerical solution of the Biot equations in displacement form, which is achieved using a mixed finite-element formulation with a different finite-element representation for displacements and stresses. The mixed formulation is used in order to reduce spurious displacement modes and fluid shear waves in the numerical solutions. These equations are solved in the time domain using an implicit unconditionally stable time-stepping method using iterative solution methods amenable to solving large systems of equations. The resulting model is embodied in the MODELLING OF ACOUSTICS, POROELASTICS AND ELECTROKINETICS (MAPEK) computer model for electroseismic analysis.

  16. Facilitating access to information in large documents with an intelligent hypertext system

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie

    1993-01-01

    Retrieving specific information from large amounts of documentation is not an easy task. It could be facilitated if information relevant in the current problem solving context could be automatically supplied to the user. As a first step towards this goal, we have developed an intelligent hypertext system called CID (Computer Integrated Documentation) and tested it on the Space Station Freedom requirement documents. The CID system enables integration of various technical documents in a hypertext framework and includes an intelligent context-sensitive indexing and retrieval mechanism. This mechanism utilizes on-line user information requirements and relevance feedback either to reinforce current indexing in case of success or to generate new knowledge in case of failure. This allows the CID system to provide helpful responses, based on previous usage of the documentation, and to improve its performance over time.

  17. Accelerating Subsurface Transport Simulation on Heterogeneous Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Gawande, Nitin A.; Tumeo, Antonino

    Reactive transport numerical models simulate chemical and microbiological reactions that occur along a flowpath. These models have to compute reactions for a large number of locations. They solve the set of ordinary differential equations (ODEs) that describes the reaction for each location through the Newton-Raphson technique. This technique involves computing a Jacobian matrix and a residual vector for each set of equations, and then solving the linearized system iteratively by performing Gaussian elimination and LU decomposition until convergence. STOMP, a well known subsurface flow simulation tool, employs matrices with sizes on the order of 100x100 elements and, for numerical accuracy, LU factorization with full pivoting instead of the faster partial pivoting. Modern high performance computing systems are heterogeneous machines whose nodes integrate both CPUs and GPUs, exposing unprecedented amounts of parallelism. To exploit all their computational power, applications must use both types of processing elements. For the case of subsurface flow simulation, this mainly requires implementing efficient batched LU-based solvers and identifying efficient solutions for enabling load balancing among the different processors of the system. In this paper we discuss two approaches that allow scaling STOMP's performance on heterogeneous clusters. We initially identify the challenges in implementing batched LU-based solvers for small matrices on GPUs, and propose an implementation that fulfills STOMP's requirements. We compare this implementation to other existing solutions. Then, we combine the batched GPU solver with an OpenMP-based CPU solver, and present an adaptive load balancer that dynamically distributes the linear systems to solve between the two components inside a node. We show how these approaches, integrated into the full application, provide speed-ups of 6 to 7 times on large problems, executed on up to 16 nodes of a cluster with two AMD Opteron 6272 and a Tesla M2090 per node.
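
    The full-pivoting choice mentioned above searches the whole trailing submatrix for the largest pivot at every step, trading speed for numerical robustness compared to partial pivoting. A minimal dense sketch of the factorization P A Q = L U (plain NumPy; not the paper's batched GPU kernel):

```python
import numpy as np

def lu_full_pivot(A):
    """LU with full pivoting; returns packed LU and row/column permutations P, Q."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    P, Q = np.arange(n), np.arange(n)
    for k in range(n - 1):
        # pivot on the largest-magnitude entry of the trailing submatrix
        i, j = np.unravel_index(np.argmax(np.abs(A[k:, k:])), (n - k, n - k))
        i, j = i + k, j + k
        A[[k, i], :], P[[k, i]] = A[[i, k], :], P[[i, k]]   # row swap
        A[:, [k, j]], Q[[k, j]] = A[:, [j, k]], Q[[j, k]]   # column swap
        A[k + 1:, k] /= A[k, k]
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    return A, P, Q

B = np.random.rand(5, 5)
LU, P, Q = lu_full_pivot(B)
L, U = np.tril(LU, -1) + np.eye(5), np.triu(LU)
print(np.allclose(B[np.ix_(P, Q)], L @ U))   # True
```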

  18. On Spurious Numerics in Solving Reactive Equations

    NASA Technical Reports Server (NTRS)

    Kotov, D. V; Yee, H. C.; Wang, W.; Shu, C.-W.

    2013-01-01

    The objective of this study is to gain a deeper understanding of the behavior of high order shock-capturing schemes for problems with stiff source terms and discontinuities and of corresponding numerical prediction strategies. The studies by Yee et al. (2012) and Wang et al. (2012) focus only on solving the reactive system by the fractional step method using the Strang splitting (Strang 1968). It is a common practice by developers in computational physics and engineering simulations to include a cutoff safeguard if densities are outside the permissible range. Here we compare the spurious behavior of the same schemes by solving the fully coupled reactive system without the Strang splitting vs. using the Strang splitting. Comparison between the two procedures and the effects of a cutoff safeguard is the focus of the present study. The comparison of the performance of these schemes is largely based on the degree to which each method captures the correct location of the reaction front for coarse grids. Here "coarse grids" means the standard mesh density requirement for accurate simulation of typical non-reacting flows of a similar problem setup. It is remarked that, in order to resolve the sharp reaction front, local refinement beyond standard mesh density is still needed.
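
    For reference, the Strang-split procedure being compared advances the stiff source for half a step, the rest of the operator for a full step, and the source for another half step, giving second-order splitting error. A minimal linear ODE sketch with exactly solvable sub-steps (illustrative operators, not the reactive Euler system of the study):

```python
import numpy as np
from scipy.linalg import expm

# Model problem u' = (A + S) u, split into a transport-like part A and a source S
A = np.array([[0.0, 1.0], [-1.0, 0.0]])    # skew-symmetric "transport" part
S = np.array([[-5.0, 0.0], [0.0, -8.0]])   # stiffer diagonal "source" part

def strang_step(u, dt):
    half = expm(S * dt / 2.0)              # exact half step for the source
    full = expm(A * dt)                    # exact full step for transport
    return half @ (full @ (half @ u))

u, dt, n = np.array([1.0, 0.0]), 0.01, 100
for _ in range(n):
    u = strang_step(u, dt)
exact = expm((A + S) * (n * dt)) @ np.array([1.0, 0.0])
print(np.linalg.norm(u - exact))           # O(dt^2) splitting error
```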

  19. Parallel computation with molecular-motor-propelled agents in nanofabricated networks.

    PubMed

    Nicolau, Dan V; Lard, Mercy; Korten, Till; van Delft, Falco C M J M; Persson, Malin; Bengtsson, Elina; Månsson, Alf; Diez, Stefan; Linke, Heiner; Nicolau, Dan V

    2016-03-08

    The combinatorial nature of many important mathematical problems, including nondeterministic-polynomial-time (NP)-complete problems, places a severe limitation on the problem size that can be solved with conventional, sequentially operating electronic computers. There have been significant efforts in conceiving parallel-computation approaches in the past, for example: DNA computation, quantum computation, and microfluidics-based computation. However, these approaches have not proven, so far, to be scalable and practical from a fabrication and operational perspective. Here, we report the foundations of an alternative parallel-computation system in which a given combinatorial problem is encoded into a graphical, modular network that is embedded in a nanofabricated planar device. Exploring the network in a parallel fashion using a large number of independent, molecular-motor-propelled agents then solves the mathematical problem. This approach uses orders of magnitude less energy than conventional computers, thus addressing issues related to power consumption and heat dissipation. We provide a proof-of-concept demonstration of such a device by solving, in a parallel fashion, the small instance {2, 5, 9} of the subset sum problem, which is a benchmark NP-complete problem. Finally, we discuss the technical advances necessary to make our system scalable with presently available technology.
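
    For scale, the instance demonstrated on the device can of course be enumerated electronically in microseconds; each subset below corresponds to one path an agent can take through the network, which is exactly the set the device explores in parallel:

```python
from itertools import combinations

values = (2, 5, 9)   # the instance {2, 5, 9} solved in the proof of concept
reachable = {}
for r in range(len(values) + 1):
    for subset in combinations(values, r):
        reachable.setdefault(sum(subset), []).append(subset)

# All achievable subset sums: 0, 2, 5, 7, 9, 11, 14, 16
for total in sorted(reachable):
    print(total, reachable[total])
```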

  20. Solving delay differential equations in S-ADAPT by method of steps.

    PubMed

    Bauer, Robert J; Mo, Gary; Krzyzanski, Wojciech

    2013-09-01

    S-ADAPT is a version of the ADAPT program that contains additional simulation and optimization abilities such as parametric population analysis. S-ADAPT utilizes LSODA to solve ordinary differential equations (ODEs), an algorithm designed for large dimension non-stiff and stiff problems. However, S-ADAPT does not have a solver for delay differential equations (DDEs). Our objective was to implement in S-ADAPT a DDE solver using the method of steps. The method of steps allows one to solve virtually any DDE system by transforming it to an ODE system. The solver was validated for scalar linear DDEs with one delay and bolus and infusion inputs for which explicit analytic solutions were derived. Solutions of nonlinear DDE problems coded in S-ADAPT were validated by comparing them with ones obtained by the MATLAB DDE solver dde23. The estimation of parameters was tested on MATLAB-simulated population pharmacodynamics data. The S-ADAPT-generated solutions for DDE problems agreed with both the explicit solutions and the MATLAB-produced solutions to at least 7 significant digits. The population parameter estimates from using importance sampling expectation-maximization in S-ADAPT agreed with the ones used to generate the data. Published by Elsevier Ireland Ltd.
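
    The method of steps itself is easy to sketch: on each interval of length tau, the delayed argument refers to an already-known function (the history or the previous segment), so the DDE is just an ODE. A minimal SciPy sketch for y'(t) = -y(t - 1) with history y(t) = 1 for t <= 0 (an illustrative test equation, not S-ADAPT code):

```python
import numpy as np
from scipy.integrate import solve_ivp

tau = 1.0
segments = [lambda t: 1.0]            # history function: y(t) = 1 for t <= 0

for k in range(5):                    # march over [0,1], [1,2], ..., [4,5]
    prev = segments[-1]               # y(t - tau) falls in the previous segment
    sol = solve_ivp(lambda t, y: [-prev(t - tau)],
                    (k * tau, (k + 1) * tau), [prev(k * tau)],
                    dense_output=True, rtol=1e-10, atol=1e-12)
    segments.append(lambda t, s=sol: float(s.sol(t)[0]))

print(segments[-1](5.0))              # y(5) for y' = -y(t-1), y(t<=0) = 1
```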

  1. An Autonomous Sensor Tasking Approach for Large Scale Space Object Cataloging

    NASA Astrophysics Data System (ADS)

    Linares, R.; Furfaro, R.

    The field of Space Situational Awareness (SSA) has progressed over the last few decades with new sensors coming online, the development of new approaches for making observations, and new algorithms for processing them. Although there has been success in the development of new approaches, a missing piece is the translation of SSA goals to sensors and resource allocation, otherwise known as the Sensor Management Problem (SMP). This work solves the SMP using an artificial intelligence approach called Deep Reinforcement Learning (DRL). Stable methods for training DRL approaches based on neural networks exist, but most of these approaches are not suitable for high dimensional systems. The Asynchronous Advantage Actor-Critic (A3C) method is a recently developed and effective approach for high dimensional systems, and this work leverages these results and applies this approach to decision making in SSA. The decision space for SSA problems can be high dimensional, even for tasking of a single telescope. Since the number of SOs in space is relatively high, each sensor will have a large number of possible actions at a given time. Therefore, efficient DRL approaches are required when solving the SMP for SSA. This work develops an A3C-based method for DRL applied to SSA sensor tasking. One of the key benefits of DRL approaches is the ability to handle high dimensional data: DRL methods have been applied to image processing for the autonomous car application, where a 256x256 RGB image has 196,608 input values (256*256*3 = 196,608), which is very high dimensional, and deep learning approaches routinely take such images as inputs. Therefore, when applied to the whole catalog, the DRL approach offers the ability to solve this high dimensional problem. This work has the potential to, for the first time, solve the non-myopic sensor tasking problem for the whole SO catalog (over 22,000 objects), providing a truly revolutionary result.

  2. Differential Relations between Facets of Complex Problem Solving and Students' Immigration Background

    ERIC Educational Resources Information Center

    Sonnleitner, Philipp; Brunner, Martin; Keller, Ulrich; Martin, Romain

    2014-01-01

    Whereas the assessment of complex problem solving (CPS) has received increasing attention in the context of international large-scale assessments, its fairness in regard to students' cultural background has gone largely unexplored. On the basis of a student sample of 9th-graders (N = 299), including a representative number of immigrant students (N…

  3. Problem Solving and the Development of Expertise in Management.

    ERIC Educational Resources Information Center

    Lash, Fredrick B.

    This study investigated novice and expert problem solving behavior in management to examine the role of domain specific knowledge on problem solving processes. Forty-one middle level marketing managers in a large petrochemical organization provided think aloud protocols in response to two hypothetical management scenarios. Protocol analysis…

  4. Parallel Algorithm Solves Coupled Differential Equations

    NASA Technical Reports Server (NTRS)

    Hayashi, A.

    1987-01-01

    Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.

  5. Planning meals: Problem-solving on a real data-base

    ERIC Educational Resources Information Center

    Byrne, Richard

    1977-01-01

    Planning the menu for a dinner party, which involves problem-solving with a large body of knowledge, is used to study the daily operation of human memory. Verbal protocol analysis, a technique devised to investigate formal problem-solving, is examined theoretically and adapted for analysis of this task. (Author/MV)

  6. Ordering Unstructured Meshes for Sparse Matrix Computations on Leading Parallel Systems

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Li, Xiaoye; Heber, Gerd; Biswas, Rupak

    2000-01-01

    The ability of computers to solve hitherto intractable problems and simulate complex processes using mathematical models makes them an indispensable part of modern science and engineering. Computer simulations of large-scale realistic applications usually require solving a set of non-linear partial differential equations (PDEs) over a finite region. For example, one thrust area in the DOE Grand Challenge projects is to design future accelerators such as the Spallation Neutron Source (SNS). Our colleagues at SLAC need to model complex RFQ cavities with large aspect ratios. Unstructured grids are currently used to resolve the small features in a large computational domain; dynamic mesh adaptation will be added in the future for additional efficiency. The PDEs for electromagnetics are discretized by the FEM method, which leads to a generalized eigenvalue problem Kx = λMx, where K and M are the stiffness and mass matrices, and are very sparse. In a typical cavity model, the number of degrees of freedom is about one million. For such large eigenproblems, direct solution techniques quickly reach the memory limits. Instead, the most widely-used methods are Krylov subspace methods, such as Lanczos or Jacobi-Davidson. In all the Krylov-based algorithms, sparse matrix-vector multiplication (SPMV) must be performed repeatedly. Therefore, the efficiency of SPMV usually determines the eigensolver speed. SPMV is also one of the most heavily used kernels in large-scale numerical simulations.
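
    Since SPMV dominates these Krylov eigensolvers, the kernel itself is worth making concrete; a minimal compressed sparse row (CSR) matrix-vector product (the same operation scipy.sparse performs in compiled code):

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A x for A in CSR form: nonzeros, their column indices, and row pointers."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        start, end = indptr[row], indptr[row + 1]
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y

# A = [[4, 0, 1],
#      [0, 3, 0],
#      [2, 0, 5]] stored in CSR
data    = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
indices = np.array([0, 2, 1, 0, 2])
indptr  = np.array([0, 2, 3, 5])
print(csr_matvec(data, indices, indptr, np.array([1.0, 1.0, 1.0])))   # [5. 3. 7.]
```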

  7. Inelastic Transitions in Slow Collisions of Anti-Hydrogen with Hydrogen Atoms

    NASA Astrophysics Data System (ADS)

    Harrison, Robert; Krstic, Predrag

    2007-06-01

    We calculate excited adiabatic states and nonadiabatic coupling matrix elements of a quasimolecular system containing hydrogen and anti-hydrogen atoms, for a range of internuclear distances from 0.2 to 20 Bohrs. High accuracy is achieved by exact diagonalization of the molecular Hamiltonian in a large Gaussian basis. Nonadiabatic dynamics was calculated by solving the MOCC equations. Positronium states are included in the calculation.

  8. Determination of aerodynamic sensitivity coefficients for wings in transonic flow

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.; El-Banna, Hesham M.

    1992-01-01

    The quasianalytical approach is applied to the 3-D full potential equation to compute wing aerodynamic sensitivity coefficients in the transonic regime. Symbolic manipulation is used to reduce the effort associated with obtaining the sensitivity equations, and the large sensitivity system is solved using 'state of the art' routines. The quasianalytical approach is believed to be reasonably accurate and computationally efficient for 3-D problems.

  9. Two atoms in an anisotropic harmonic trap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idziaszek, Z.; Centrum Fizyki Teoretycznej, Polska Akademia Nauk, 02-668 Warsaw; Calarco, T.

    2005-05-15

    We consider the system of two interacting atoms confined in an axially symmetric harmonic trap. Within the pseudopotential approximation, we solve the Schroedinger equation exactly, discussing the limits of quasi-one- and quasi-two-dimensional geometries. Finally, we discuss the application of an energy-dependent pseudopotential, which allows us to extend the validity of our results to the case of tight traps and large scattering lengths.

  10. RELIABILITY, AVAILABILITY, AND SERVICEABILITY FOR PETASCALE HIGH-END COMPUTING AND BEYOND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chokchai "Box" Leangsuksun

    2011-05-31

    Our project is a multi-institutional research effort that adopts the interplay of reliability, availability, and serviceability (RAS) aspects for solving resilience issues in high-end scientific computing in the next generation of supercomputers. Results lie in the following tracks: failure prediction in large-scale HPC; investigation of reliability issues and mitigation techniques, including in GPGPU-based HPC systems; and HPC resilience runtime and tools.

  11. "Four and Twenty Blackbirds": How Transcoding Ability Mediates the Relationship between Visuospatial Working Memory and Math in a Language with Inversion

    ERIC Educational Resources Information Center

    van der Ven, Sanne H. G.; Klaiber, Jonathan D.; van der Maas, Han L. J.

    2017-01-01

    Writing down spoken number words (transcoding) is an ability that is predictive of math performance and related to working memory ability. We analysed these relationships in a large sample of over 25,000 children, from kindergarten to the end of primary school, who solved transcoding items with a computer adaptive system. Furthermore, we…

  12. A General-Purpose Optimization Engine for Multi-Disciplinary Design Applications

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Berke, Laszlo

    1996-01-01

    A general purpose optimization tool for multidisciplinary applications, which in the literature is known as COMETBOARDS, is being developed at NASA Lewis Research Center. The modular organization of COMETBOARDS includes several analyzers and state-of-the-art optimization algorithms along with their cascading strategy. The code structure allows quick integration of new analyzers and optimizers. The COMETBOARDS code reads input information from a number of data files, formulates a design as a set of multidisciplinary nonlinear programming problems, and then solves the resulting problems. COMETBOARDS can be used to solve a large problem which can be defined through multiple disciplines, each of which can be further broken down into several subproblems. Alternatively, a small portion of a large problem can be optimized in an effort to improve an existing system. Some of the other unique features of COMETBOARDS include design variable formulation, constraint formulation, subproblem coupling strategy, global scaling technique, analysis approximation, use of either sequential or parallel computational modes, and so forth. The special features and unique strengths of COMETBOARDS assist convergence and reduce the amount of CPU time used to solve the difficult optimization problems of aerospace industries. COMETBOARDS has been successfully used to solve a number of problems, including structural design of space station components, design of nozzle components of an air-breathing engine, configuration design of subsonic and supersonic aircraft, mixed flow turbofan engines, wave rotor topped engines, and so forth. This paper introduces the COMETBOARDS design tool and its versatility, which is illustrated by citing examples from structures, aircraft design, and air-breathing propulsion engine design.

  13. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to metrological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation systems}; (3) Coupling large-scale computing and data systems to scientific and engineering instruments (e.g., realtime interaction with experiments through real-time data analysis and interpretation presented to the experimentalist in ways that allow direct interaction with the experiment (instead of just with instrument control); (5) Highly interactive, augmented reality and virtual reality remote collaborations (e.g., Ames / Boeing Remote Help Desk providing field maintenance use of coupled video and NDI to a remote, on-line airframe structures expert who uses this data to index into detailed design databases, and returns 3D internal aircraft geometry to the field); (5) Single computational problems too large for any single system (e.g. the rotocraft reference calculation). Grids also have the potential to provide pools of resources that could be called on in extraordinary / rapid response situations (such as disaster response) because they can provide common interfaces and access mechanisms, standardized management, and uniform user authentication and authorization, for large collections of distributed resources (whether or not they normally function in concert). IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focussed primarily on two types of users: the scientist / design engineer whose primary interest is problem solving (e.g. 
determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user is the tool designer: the computational scientists who convert physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. The results of the analysis of the needs of these two types of users provides a broad set of requirements that gives rise to a general set of required capabilities. The IPG project is intended to address all of these requirements. In some cases the required computing technology exists, and in some cases it must be researched and developed. The project is using available technology to provide a prototype set of capabilities in a persistent distributed computing testbed. Beyond this, there are required capabilities that are not immediately available, and whose development spans the range from near-term engineering development (one to two years) to much longer term R&D (three to six years). Additional information is contained in the original.

  14. A Distributed Algorithm for Economic Dispatch Over Time-Varying Directed Networks With Delays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Tao; Lu, Jie; Wu, Di

    In power system operation, the economic dispatch problem (EDP) seeks to minimize the total generation cost while meeting the demand and satisfying generator capacity limits. This paper proposes an algorithm based on the gradient-push method to solve the EDP in a distributed manner over communication networks, potentially with time-varying topologies and communication delays. The proposed method is shown to be guaranteed to solve the EDP if the time-varying directed communication network is uniformly jointly strongly connected. Moreover, the proposed algorithm can handle arbitrarily large but bounded time delays on communication links. Numerical simulations are used to illustrate and validate the proposed algorithm.
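
    For context, the sketch below solves a toy quadratic-cost EDP by bisection on the common incremental cost. This is the standard centralized baseline, not the paper's distributed gradient-push algorithm, and all generator data are assumptions made up for illustration.

        import numpy as np

        # Hypothetical quadratic generator costs C_i(p) = a_i p^2 + b_i p with
        # capacity limits; all numbers are made up for the sketch.
        a = np.array([0.04, 0.03, 0.05])
        b = np.array([2.0, 3.0, 1.0])
        pmin = np.zeros(3)
        pmax = np.array([40.0, 50.0, 30.0])
        demand = 80.0

        def dispatch(lam):
            # Each generator runs where its marginal cost 2*a*p + b equals lam,
            # clipped to its limits (the classical equal-incremental-cost rule).
            return np.clip((lam - b) / (2.0 * a), pmin, pmax)

        # Total output is nondecreasing in lam, so bisection finds the price
        # at which generation meets demand.
        lo, hi = 0.0, 100.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if dispatch(mid).sum() < demand else (lo, mid)
        p = dispatch(0.5 * (lo + hi))
        print(p, p.sum())  # setpoints whose total matches the demand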

  15. On linearization and preconditioning for radiation diffusion coupled to material thermal conduction equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Tao, E-mail: fengtao2@mail.ustc.edu.cn; Graduate School of China Academy Engineering Physics, Beijing 100083; An, Hengbin, E-mail: an_hengbin@iapcm.ac.cn

    2013-03-01

    The Jacobian-free Newton–Krylov (JFNK) method is an effective algorithm for solving large scale nonlinear equations. One of its most important advantages is that there is no need to form and store the Jacobian matrix of the nonlinear system. However, an approximation of the Jacobian is still needed for the purpose of preconditioning. In this paper, the JFNK method is employed to solve a class of non-equilibrium radiation diffusion equations coupled to material thermal conduction equations, and two preconditioners are designed by linearizing the equations in two different ways. Numerical results show that the two preconditioning methods can improve the convergence behavior and efficiency of the JFNK method.
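
    A minimal JFNK sketch, assuming finite-difference Jacobian-vector products and unpreconditioned GMRES; the toy Bratu-type residual is an assumption standing in for the radiation-diffusion system, and the M slot marks where one of the paper's preconditioners would enter.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def jfnk(F, u0, newton_tol=1e-8, max_newton=20, fd_eps=1e-7, M=None):
            # Jacobian-free Newton-Krylov: the Jacobian is never formed; GMRES
            # only needs J @ v, approximated by a forward difference of F.
            u = u0.copy()
            for _ in range(max_newton):
                Fu = F(u)
                if np.linalg.norm(Fu) < newton_tol:
                    break
                def jv(v, u=u, Fu=Fu):
                    h = fd_eps * (1.0 + np.linalg.norm(u)) / max(np.linalg.norm(v), 1e-30)
                    return (F(u + h * v) - Fu) / h
                J = LinearOperator((u.size, u.size), matvec=jv, dtype=float)
                du, _ = gmres(J, -Fu, M=M)  # M is where a preconditioner would go
                u = u + du
            return u

        # Toy Bratu-type residual on a 1D grid (an assumed stand-in, not the
        # paper's radiation-diffusion system).
        n = 100
        def F(u):
            r = -2.0 * u + np.exp(u) / n**2
            r[1:] += u[:-1]
            r[:-1] += u[1:]
            return r

        u = jfnk(F, np.zeros(n))
        print(np.linalg.norm(F(u)))  # should be near zero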

  16. Inflationary dynamics for matrix eigenvalue problems

    PubMed Central

    Heller, Eric J.; Kaplan, Lev; Pollmann, Frank

    2008-01-01

    Many fields of science and engineering require finding eigenvalues and eigenvectors of large matrices. The solutions can represent oscillatory modes of a bridge, a violin, the disposition of electrons around an atom or molecule, the acoustic modes of a concert hall, or hundreds of other physical quantities. Often only the few eigenpairs with the lowest or highest frequency (extremal solutions) are needed. Methods that have been developed over the past 60 years to solve such problems include the Lanczos algorithm, Jacobi–Davidson techniques, and the conjugate gradient method. Here, we present a way to solve the extremal eigenvalue/eigenvector problem, turning it into a nonlinear classical mechanical system with a modified Lagrangian constraint. The constraint induces exponential inflationary growth of the desired extremal solutions. PMID:18511564
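
    As a point of reference for the standard methods named above, the sketch below computes a few extremal eigenpairs of a large sparse symmetric matrix with SciPy's Lanczos-based eigsh. This illustrates the Lanczos baseline, not the paper's inflationary dynamics, and the matrix is an arbitrary stand-in.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigsh

        # A large sparse symmetric operator: a 1D Laplacian plus a random
        # diagonal "potential" (an arbitrary stand-in for a physical matrix).
        n = 20_000
        rng = np.random.default_rng(0)
        A = sp.diags([-np.ones(n - 1), 2.0 + rng.random(n), -np.ones(n - 1)],
                     [-1, 0, 1], format="csr")

        # eigsh wraps an implicitly restarted Lanczos iteration and returns
        # only the few extremal eigenpairs requested ('SA' = smallest algebraic).
        vals, vecs = eigsh(A, k=5, which="SA")
        print(vals)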

  17. Transmission Technologies and Operational Characteristic Analysis of Hybrid UHV AC/DC Power Grids in China

    NASA Astrophysics Data System (ADS)

    Tian, Zhang; Yanfeng, Gong

    2017-05-01

    In order to resolve the mismatch between the demand for primary energy resources and their geographic distribution, Ultra High Voltage (UHV) power grids should be developed rapidly to support energy bases and the integration of large-scale renewable energy. This paper reviews the latest research on AC/DC transmission technologies and summarizes the characteristics of AC/DC power grids, concluding that China's power grids are entering a new period of large-scale hybrid UHV AC/DC operation in which the characteristics of "strong DC and weak AC" become increasingly prominent. Possible problems in the operation of AC/DC power grids are discussed, and the interactions between the AC and DC grids are studied in depth. To address these problems, a preliminary scheme is summarized as follows: strengthening backbone structures, enhancing AC/DC transmission technologies, improving protection measures for clean-energy grid integration, and taking actions to solve voltage and frequency stability problems. These measures help hybrid UHV AC/DC power grids adapt to the operating mode of large power grids, thus guaranteeing the security and stability of the power system.

  18. An efficient computational method for solving nonlinear stochastic Itô integral equations: Application for stochastic problems in physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir; The Laboratory of Quantum Information Processing, Yazd University, Yazd; Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir

    Because of the nonlinearity, closed-form solutions of many important stochastic functional equations are virtually impossible to obtain. Thus, numerical solutions are a viable alternative. In this paper, a new computational method based on the generalized hat basis functions together with their stochastic operational matrix of Itô-integration is proposed for solving nonlinear stochastic Itô integral equations in large intervals. In the proposed method, a new technique for computing nonlinear terms in such problems is presented. The main advantage of the proposed method is that it transforms problems under consideration into nonlinear systems of algebraic equations which can be simply solved. Error analysis of the proposed method is investigated and also the efficiency of this method is shown on some concrete examples. The obtained results reveal that the proposed method is very accurate and efficient. As two useful applications, the proposed method is applied to obtain approximate solutions of the stochastic population growth models and stochastic pendulum problem.
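
    The operational-matrix construction is too involved for a short sketch, but the kind of problem it targets is easy to show: below, a plain Euler-Maruyama scheme integrates a stochastic logistic growth model of the type mentioned among the applications. All parameter values are assumptions for illustration, and this baseline scheme is not the paper's method.

        import numpy as np

        # Euler-Maruyama for a stochastic logistic growth model
        #     dX = r X (1 - X/K) dt + sigma X dW
        # with illustrative parameter values (not taken from the paper).
        r, K, sigma = 0.5, 100.0, 0.1
        T, n_steps, n_paths = 10.0, 1000, 500
        dt = T / n_steps

        rng = np.random.default_rng(42)
        X = np.full(n_paths, 5.0)        # initial population on every path
        for _ in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dt), n_paths)
            X = X + r * X * (1.0 - X / K) * dt + sigma * X * dW
            X = np.maximum(X, 0.0)       # keep populations nonnegative

        print(X.mean(), X.std())         # Monte Carlo statistics at t = T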

  19. Fully implicit adaptive mesh refinement solver for 2D MHD

    NASA Astrophysics Data System (ADS)

    Philip, B.; Chacon, L.; Pernice, M.

    2008-11-01

    Application of implicit adaptive mesh refinement (AMR) to simulate resistive magnetohydrodynamics is described. Solving this challenging multi-scale, multi-physics problem can improve understanding of reconnection in magnetically-confined plasmas. AMR is employed to resolve extremely thin current sheets, essential for an accurate macroscopic description. Implicit time stepping allows us to accurately follow the dynamical time scale of the developing magnetic field, without being restricted by fast Alfvén time scales. At each time step, the large-scale system of nonlinear equations is solved by a Jacobian-free Newton-Krylov method together with a physics-based preconditioner. Each block within the preconditioner is solved optimally using the Fast Adaptive Composite grid method, which can be considered as a multiplicative Schwarz method on AMR grids. We will demonstrate the excellent accuracy and efficiency properties of the method with several challenging reduced MHD applications, including tearing, island coalescence, and tilt instabilities. B. Philip, L. Chacón, M. Pernice, J. Comput. Phys., in press (2008)
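
    SciPy packages similar Newton-Krylov machinery behind a single call. The sketch below takes one backward-Euler step of a 2D diffusion problem matrix-free; the grid, time step, and scalar diffusion equation standing in for the MHD system are assumptions, and a physics-based preconditioner would be passed through inner_M (omitted here).

        import numpy as np
        from scipy.optimize import newton_krylov

        # One backward-Euler step of u_t = Laplacian(u) on a 2D grid, solved
        # matrix-free (toy stand-in for the implicit MHD step).
        n, dt = 64, 1e-2
        h2 = (1.0 / (n - 1)) ** 2
        u_old = np.random.default_rng(1).random((n, n))

        def laplacian(u):
            L = np.zeros_like(u)
            L[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:]
                             + u[1:-1, :-2] - 4.0 * u[1:-1, 1:-1]) / h2
            return L

        def residual(u_new):
            # F(u_new) = 0 defines the implicit step; on the boundary the
            # residual reduces to u_new - u_old, holding those values fixed.
            return u_new - u_old - dt * laplacian(u_new)

        # inner_M is where a physics-based preconditioner would be supplied.
        u_new = newton_krylov(residual, u_old, method="lgmres", f_tol=1e-8)
        print(np.abs(residual(u_new)).max())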

  20. The Development of a Small-World Network of Higher Education Students, Using a Large-Group Problem-Solving Method

    ERIC Educational Resources Information Center

    Sousa, Fernando Cardoso; Monteiro, Ileana Pardal; Pellissier, René

    2014-01-01

    This article presents the development of a small-world network using an adapted version of the large-group problem-solving method "Future Search." Two management classes in a higher education setting were selected and required to plan a project. The students completed a survey focused on the frequency of communications before and after…

  1. Numerical method for the solution of large systems of differential equations of the boundary layer type

    NASA Technical Reports Server (NTRS)

    Green, M. J.; Nachtsheim, P. R.

    1972-01-01

    A numerical method for the solution of large systems of nonlinear differential equations of the boundary-layer type is described. The method is a modification of the technique for satisfying asymptotic boundary conditions. The present method employs inverse interpolation instead of the Newton method to adjust the initial conditions of the related initial-value problem. This eliminates the so-called perturbation equations. The elimination of the perturbation equations not only reduces the user's preliminary work in the application of the method, but also reduces the number of time-consuming initial-value problems to be numerically solved at each iteration. For further ease of application, the solution of the overdetermined system for the unknown initial conditions is obtained automatically by applying Golub's linear least-squares algorithm. The relative ease of application of the proposed numerical method increases directly as the order of the differential-equation system increases. Hence, the method is especially attractive for the solution of large-order systems. After the method is described, it is applied to a fifth-order problem from boundary-layer theory.
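
    The essence of such shooting approaches is easy to illustrate. The sketch below treats the classic Blasius boundary-layer equation, adjusting the unknown initial condition f''(0) so that the asymptotic boundary condition holds at a large but finite domain edge; for brevity it uses simple bisection (brentq) rather than the paper's inverse-interpolation and least-squares machinery, and eta_max is an assumed truncation point.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import brentq

        # Blasius equation f''' + 0.5 f f'' = 0, f(0) = f'(0) = 0, f'(inf) = 1.
        # Shooting: choose the unknown initial condition s = f''(0) so that the
        # asymptotic condition holds at a large but finite eta_max.
        eta_max = 10.0

        def bc_residual(s):
            def rhs(_, y):                      # y = [f, f', f'']
                return [y[1], y[2], -0.5 * y[0] * y[2]]
            sol = solve_ivp(rhs, (0.0, eta_max), [0.0, 0.0, s],
                            rtol=1e-8, atol=1e-10)
            return sol.y[1, -1] - 1.0           # error in f'(eta_max)

        s_star = brentq(bc_residual, 0.1, 1.0)  # root of the shooting residual
        print(s_star)                           # classical value is about 0.3321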

  2. Discrete-time neural network for fast solving large linear L1 estimation problems and its application to image restoration.

    PubMed

    Xia, Youshen; Sun, Changyin; Zheng, Wei Xing

    2012-05-01

    There is growing interest in solving linear L1 estimation problems for sparsity of the solution and robustness against non-Gaussian noise. This paper proposes a discrete-time neural network that can solve large linear L1 estimation problems quickly. The proposed neural network has a fixed computational step length and is proved to be globally convergent to an optimal solution. The proposed neural network is then efficiently applied to image restoration. Numerical results show that the proposed neural network is not only efficient in solving degenerate problems resulting from the nonunique solutions of the linear L1 estimation problems but also needs much less computational time than the related algorithms in solving both linear L1 estimation and image restoration problems.
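
    For small instances, the underlying L1 estimation problem can be checked against a standard linear-programming reformulation. This is the textbook baseline, not the paper's neural network, and the data below are synthetic.

        import numpy as np
        from scipy.optimize import linprog

        # L1 estimation min_x ||A x - b||_1 as a linear program with slacks t:
        #     minimize sum(t)  subject to  -t <= A x - b <= t.
        rng = np.random.default_rng(0)
        m, n = 200, 20
        A = rng.normal(size=(m, n))
        x_true = rng.normal(size=n)
        b = A @ x_true + rng.laplace(scale=0.1, size=m)  # heavy-tailed noise

        c = np.concatenate([np.zeros(n), np.ones(m)])
        A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
        b_ub = np.concatenate([b, -b])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * n + [(0, None)] * m)
        x_hat = res.x[:n]
        print(np.linalg.norm(x_hat - x_true))  # close to x_true despite outliers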

  3. A model for distribution centers location-routing problem on a multimodal transportation network with a meta-heuristic solving approach

    NASA Astrophysics Data System (ADS)

    Fazayeli, Saeed; Eydi, Alireza; Kamalabadi, Isa Nakhai

    2017-07-01

    Nowadays, organizations have to compete with different competitors at regional, national and international levels, so they have to improve their competitive capabilities to survive. Undertaking activities on a global scale requires a proper distribution system which can take advantage of different transportation modes. Accordingly, the present paper addresses a location-routing problem on a multimodal transportation network. The introduced problem pursues four objectives simultaneously, which form the main contribution of the paper: determining multimodal routes between supplier and distribution centers, locating mode-changing facilities, locating distribution centers, and determining product delivery tours from the distribution centers to retailers. An integer linear programming model is presented for the problem, and a genetic algorithm with a new chromosome structure is proposed to solve it. The proposed chromosome structure consists of two different parts for the multimodal transportation and location-routing parts of the model. Based on published data in the literature, two numerical cases of different sizes were generated and solved. Also, different cost scenarios were designed to better analyze model and algorithm performance. Results show that the algorithm can effectively solve large-size problems within a reasonable time, whereas the GAMS software failed to reach an optimal solution even within much longer times.

  5. Parallelized traveling cluster approximation to study numerically spin-fermion models on large lattices

    DOE PAGES

    Mukherjee, Anamitra; Patel, Niravkumar D.; Bishop, Chris; ...

    2015-06-08

    Lattice spin-fermion models are quite important for studying correlated systems where quantum dynamics allows for a separation between slow and fast degrees of freedom. The fast degrees of freedom are treated quantum mechanically, while the slow variables, generically referred to as the "spins," are treated classically. At present, exact diagonalization coupled with classical Monte Carlo (ED + MC) is extensively used to solve numerically a general class of lattice spin-fermion problems. In this common setup, the classical variables (spins) are treated via the standard MC method while the fermion problem is solved by exact diagonalization. The "traveling cluster approximation" (TCA) is a real space variant of the ED + MC method that makes it possible to solve spin-fermion problems on lattices with up to 10^3 sites. In this paper, we present a novel reorganization of the TCA algorithm in a manner that can be efficiently parallelized. Finally, this allows us to solve generic spin-fermion models easily on 10^4 lattice sites and, with some effort, on 10^5 lattice sites, representing the record lattice sizes studied for this family of models.

  7. On data modeling for neurological application

    NASA Astrophysics Data System (ADS)

    Woźniak, Karol; Mulawka, Jan

    The aim of this paper is to design and implement an information system containing a large database dedicated to supporting neurological-psychiatric examinations focused on the human brain after stroke. This approach encompasses the following steps: analysis of software requirements, presentation of the problem solving concept, and design and implementation of the final information system. Certain experiments were performed in order to verify the correctness of the project ideas. The approach can be considered an interdisciplinary venture. The system architecture, the data model, and the tools supporting medical examinations are described. The achievement of the design goals is demonstrated in the final conclusion.

  8. Cost-effective use of minicomputers to solve structural problems

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Foster, E. P.

    1978-01-01

    Minicomputers are receiving increased use throughout the aerospace industry. Until recently, their use focused primarily on process control and numerically controlled tooling applications, while their exposure to and the opportunity for structural calculations has been limited. With the increased availability of this computer hardware, the question arises as to the feasibility and practicality of carrying out comprehensive structural analysis on a minicomputer. This paper presents results on the potential for using minicomputers for structural analysis by (1) selecting a comprehensive, finite-element structural analysis system in use on large mainframe computers; (2) implementing the system on a minicomputer; and (3) comparing the performance of the minicomputers with that of a large mainframe computer for the solution to a wide range of finite element structural analysis problems.

  9. On distributed wavefront reconstruction for large-scale adaptive optics systems.

    PubMed

    de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel

    2016-05-01

    The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach with a speedup that scales quadratically with the number of partitions. The D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.

  10. Application of Four-Point Newton-EGSOR iteration for the numerical solution of 2D Porous Medium Equations

    NASA Astrophysics Data System (ADS)

    Chew, J. V. L.; Sulaiman, J.

    2017-09-01

    Partial differential equations that are used to describe nonlinear heat and mass transfer phenomena are difficult to solve. For cases where the exact solution is difficult to obtain, it is necessary to use a numerical procedure such as the finite difference method to solve a particular partial differential equation. In terms of numerical procedure, a particular method can be considered efficient if it gives an approximate solution within the specified error with the least computational complexity. Throughout this paper, the two-dimensional Porous Medium Equation (2D PME) is discretized by using the implicit finite difference scheme to construct the corresponding approximation equation. This approximation equation yields a large and sparse nonlinear system. By using the Newton method to linearize the nonlinear system, this paper deals with the application of the Four-Point Newton-EGSOR (4NEGSOR) iterative method for solving the 2D PMEs. In addition, the efficiency of the 4NEGSOR iterative method is studied by solving three example problems. For comparative analysis, the Newton-Gauss-Seidel (NGS) and the Newton-SOR (NSOR) iterative methods are also considered. The numerical findings show that the 4NEGSOR method is superior to the NGS and NSOR methods in terms of the number of iterations needed to reach converged solutions, the computation time, and the maximum absolute errors produced by the methods.
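
    A single-block sketch of this Newton-with-an-SOR-inner-solver pattern, assuming a toy nonlinear system rather than the 2D PME, and plain SOR rather than the paper's four-point block EGSOR variant:

        import numpy as np

        def sor_solve(A, rhs, omega=1.2, tol=1e-10, max_sweeps=5000):
            # Classical SOR sweeps for A x = rhs (nonzero diagonal assumed).
            x = np.zeros_like(rhs)
            for _ in range(max_sweeps):
                for i in range(len(rhs)):
                    r = rhs[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]
                    x[i] = (1.0 - omega) * x[i] + omega * r / A[i, i]
                if np.linalg.norm(A @ x - rhs) < tol:
                    break
            return x

        # Outer Newton iteration on an assumed toy nonlinear system
        # F(u) = T u + u**2 - g (not the 2D PME itself).
        n = 50
        T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        g = np.ones(n)
        u = np.zeros(n)
        for _ in range(20):
            F = T @ u + u**2 - g
            if np.linalg.norm(F) < 1e-9:
                break
            J = T + np.diag(2.0 * u)       # Jacobian at the current iterate
            u = u + sor_solve(J, -F)       # SOR sweeps replace a direct solve
        print(np.linalg.norm(T @ u + u**2 - g))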

  11. Revisiting software specification and design for large astronomy projects

    NASA Astrophysics Data System (ADS)

    Wiant, Scott; Berukoff, Steven

    2016-07-01

    The separation of science and engineering in the delivery of software systems overlooks the true nature of the problem being solved and the organization that will solve it. A systems engineering approach that manages the requirements flow between these two groups as between a customer and a contractor has been used with varying degrees of success by well-known entities such as the U.S. Department of Defense. However, treating science as the customer and engineering as the contractor fosters unfavorable consequences that can be avoided and opportunities that are missed. For example, the "problem" being solved is only partially specified through the requirements generation process, since that process focuses on the detailed specification guiding the parties to a technical solution. Equally important is the portion of the problem that will be solved through the definition of processes and the staff interacting through them. This interchange between people and processes is often underrepresented and underappreciated. By concentrating on the full problem and collaborating on a strategy for its solution, a science-implementing organization can realize the benefits of driving towards common goals (not just requirements) and a cohesive solution to the entire problem. The initial phase of any project, when well executed, is often the most difficult yet most critical, and thus it is essential to employ a methodology that reinforces collaboration and leverages the full suite of capabilities within the team. This paper describes an integrated approach to specifying the needs induced by a problem and the design of its solution.

  12. Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications

    NASA Astrophysics Data System (ADS)

    Zu, Yue

    Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems with information exchanged among connected neighbors, which greatly improves system fault tolerance: a task within a multi-agent system can be completed even in the presence of partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence or in parallel, so the computational complexity is greatly reduced by distributed algorithms in multi-agent systems. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which overcomes the drawbacks of using multicast due to bandwidth limitations. Distributed algorithms have been applied to a variety of real-world problems, and our research focuses on the framework and local optimizer design in practical engineering applications. First, we propose a multi-sensor and multi-agent scheme for spatial motion estimation of a rigid body, improving estimation performance in terms of accuracy and convergence speed. Second, we develop a cyber-physical system and implement distributed computation devices to optimize the in-building evacuation path when a hazard occurs; the proposed Bellman-Ford Dual-Subgradient path planning method relieves congestion in corridor and exit areas. In the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. The optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation. Moreover, a hybrid control scheme is presented for highway network travel time minimization. Compared with the uncontrolled case or conventional highway traffic control strategies, the proposed hybrid control strategy greatly reduces total travel time on the test highway network.

  13. Some Applications of Algebraic System Solving

    ERIC Educational Resources Information Center

    Roanes-Lozano, Eugenio

    2011-01-01

    Technology and, in particular, computer algebra systems, allows us to change both the way we teach mathematics and the mathematical curriculum. Curiously enough, unlike what happens with linear system solving, algebraic system solving is not widely known. The aim of this paper is to show that, although the theory lying behind the "exact…

  14. Electromagnetic scattering of large structures in layered earths using integral equations

    NASA Astrophysics Data System (ADS)

    Xiong, Zonghou; Tripp, Alan C.

    1995-07-01

    An electromagnetic scattering algorithm for large conductivity structures in stratified media has been developed, based on the method of system iteration and spatial symmetry reduction using volume electric integral equations. The method of system iteration divides a structure into many substructures and solves the resulting matrix equation using a block iterative method. The block submatrices usually need to be stored on disk in order to save computer core memory; however, this requires a large disk for large structures. If the body is discretized into equal-size cells it is possible to use the spatial symmetry relations of the Green's functions to regenerate the scattering impedance matrix in each iteration, thus avoiding expensive disk storage. Numerical tests show that the system iteration converges much faster than the conventional point-wise Gauss-Seidel iterative method, and the number of cells does not significantly affect the rate of convergence. Thus the algorithm effectively reduces the solution of the scattering problem to order O(N^2), instead of the O(N^3) of direct solvers.
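
    A hedged sketch of the "system iteration" idea as plain block Gauss-Seidel on a partitioned dense system; the matrix, block sizes, and right-hand side are assumptions, and the Green's-function symmetry machinery of the paper is omitted.

        import numpy as np

        def block_gauss_seidel(A, b, blocks, tol=1e-10, max_iters=200):
            # Sweep over index blocks, solving each substructure exactly
            # while the others stay frozen.
            x = np.zeros_like(b)
            for _ in range(max_iters):
                for idx in blocks:
                    Aii = A[np.ix_(idx, idx)]
                    r = b[idx] - A[idx] @ x + Aii @ x[idx]
                    x[idx] = np.linalg.solve(Aii, r)
                if np.linalg.norm(A @ x - b) < tol * np.linalg.norm(b):
                    break
            return x

        # Toy diagonally dominant dense system standing in for the scattering
        # impedance matrix (sizes and data are assumptions).
        rng = np.random.default_rng(3)
        n = 100
        A = rng.normal(size=(n, n)) + n * np.eye(n)
        b = rng.normal(size=n)
        blocks = [list(range(i, i + 25)) for i in range(0, n, 25)]
        x = block_gauss_seidel(A, b, blocks)
        print(np.linalg.norm(A @ x - b))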

  15. Framework Resources Multiply Computing Power

    NASA Technical Reports Server (NTRS)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  16. BROAD SPECTRUM ANALYSIS FOR TRACE ORGANIC POLLUTANTS IN LARGE VOLUMES OF WATER BY XAD RESINS-COLUMN DESIGN-FACTS AND MYTHS.

    USGS Publications Warehouse

    Gibs, J.; Wicklund, A.; Suffet, I.H.

    1986-01-01

    The 'rule of thumb' that large volumes of water can be sampled for trace organic pollutants by XAD resin columns designed from small-column laboratory studies with pure compounds is examined and shown to be problematic. A theory of multicomponent breakthrough is presented as a frame of reference to help solve the problem and to develop usable criteria to aid the design of resin columns. An important part of the theory is the effect of humic substances on the breakthrough behavior of multicomponent chemical systems.

  17. Exact results in the large system size limit for the dynamics of the chemical master equation, a one dimensional chain of equations.

    PubMed

    Martirosyan, A; Saakian, David B

    2011-08-01

    We apply the Hamilton-Jacobi equation (HJE) formalism to solve the dynamics of the chemical master equation (CME). We find exact analytical expressions (in the large system-size limit) for the probability distribution, including an explicit expression for the dynamics of the variance of the distribution. We also give the solution for some simple cases of the model with time-dependent rates. We derive the results of the Van Kampen method from the HJE approach using a special ansatz, and using the Van Kampen method we give a system of ordinary differential equations (ODEs) that defines the variance in a two-dimensional case. We performed numerics for the CME with stationary noise and give analytical criteria for the disappearance of bistability in the case of stationary noise in one-dimensional CMEs.

  18. Comparing Neuromorphic Solutions in Action: Implementing a Bio-Inspired Solution to a Benchmark Classification Task on Three Parallel-Computing Platforms

    PubMed Central

    Diamond, Alan; Nowotny, Thomas; Schmuker, Michael

    2016-01-01

    Neuromorphic computing employs models of neuronal circuits to solve computing problems. Neuromorphic hardware systems are now becoming more widely available and “neuromorphic algorithms” are being developed. As they are maturing toward deployment in general research environments, it becomes important to assess and compare them in the context of the applications they are meant to solve. This should encompass not just task performance, but also ease of implementation, speed of processing, scalability, and power efficiency. Here, we report our practical experience of implementing a bio-inspired, spiking network for multivariate classification on three different platforms: the hybrid digital/analog Spikey system, the digital spike-based SpiNNaker system, and GeNN, a meta-compiler for parallel GPU hardware. We assess performance using a standard hand-written digit classification task. We found that whilst a different implementation approach was required for each platform, classification performances remained in line. This suggests that all three implementations were able to exercise the model's ability to solve the task rather than exposing inherent platform limits, although differences emerged when capacity was approached. With respect to execution speed and power consumption, we found that for each platform a large fraction of the computing time was spent outside of the neuromorphic device, on the host machine. Time was spent in a range of combinations of preparing the model, encoding suitable input spiking data, shifting data, and decoding spike-encoded results. This is also where a large proportion of the total power was consumed, most markedly for the SpiNNaker and Spikey systems. We conclude that the simulation efficiency advantage of the assessed specialized hardware systems is easily lost in excessive host-device communication, or non-neuronal parts of the computation. These results emphasize the need to optimize the host-device communication architecture for scalability, maximum throughput, and minimum latency. Moreover, our results indicate that special attention should be paid to minimize host-device communication when designing and implementing networks for efficient neuromorphic computing. PMID:26778950

  19. Multigrid Methods for Fully Implicit Oil Reservoir Simulation

    NASA Technical Reports Server (NTRS)

    Molenaar, J.

    1996-01-01

    In this paper we consider the simultaneous flow of oil and water in reservoir rock. This displacement process is modeled by two basic equations: the material balance or continuity equations and the equation of motion (Darcy's law). For the numerical solution of this system of nonlinear partial differential equations there are two approaches: the fully implicit or simultaneous solution method, and the sequential solution method. In the sequential solution method the system of partial differential equations is manipulated to give an elliptic pressure equation and a hyperbolic (or parabolic) saturation equation. In the IMPES approach the pressure equation is first solved, using values for the saturation from the previous time level. Next the saturations are updated by some explicit time stepping method; this implies that the method is only conditionally stable. For the numerical solution of the linear, elliptic pressure equation multigrid methods have become an accepted technique. On the other hand, the fully implicit method is unconditionally stable, but it has the disadvantage that in every time step a large system of nonlinear algebraic equations has to be solved. The most time-consuming part of any fully implicit reservoir simulator is the solution of this large system of equations, usually by Newton's method. The resulting systems of linear equations are then either solved by a direct method or by some conjugate gradient type method. In this paper we consider the possibility of applying multigrid methods for the iterative solution of the systems of nonlinear equations. There are two ways of using multigrid for this job: either we use a nonlinear multigrid method, or we use a linear multigrid method to deal with the linear systems that arise in Newton's method. So far only a few authors have reported on the use of multigrid methods for fully implicit simulations: a two-level FAS algorithm has been presented for the black-oil equations, and linear multigrid has been studied for two-phase flow problems with strong heterogeneities and anisotropies. Here we consider both possibilities. Moreover, we present a novel way of constructing the coarse grid correction operator in linear multigrid algorithms. This approach has the advantage that it preserves the sparsity pattern of the fine grid matrix, and it can be extended to systems of equations in a straightforward manner. We compare the linear and nonlinear multigrid algorithms by means of a numerical experiment.

  20. CFD-ACE+: a CAD system for simulation and modeling of MEMS

    NASA Astrophysics Data System (ADS)

    Stout, Phillip J.; Yang, H. Q.; Dionne, Paul; Leonard, Andy; Tan, Zhiqiang; Przekwas, Andrzej J.; Krishnan, Anantha

    1999-03-01

    Computer aided design (CAD) systems are a key to designing and manufacturing MEMS with higher performance/reliability, reduced costs, shorter prototyping cycles and improved time-to-market. One such system is CFD-ACE+MEMS, a modeling and simulation environment for MEMS which includes grid generation, data visualization, graphical problem setup, and coupled fluidic, thermal, mechanical, electrostatic, and magnetic physical models. The fluid model is a 3D multi-block, structured/unstructured/hybrid, pressure-based, implicit Navier-Stokes code with capabilities for multi-component diffusion, multi-species transport, multi-step gas phase chemical reactions, surface reactions, and multi-media conjugate heat transfer. The thermal model solves the total enthalpy form of the energy equation. The energy equation includes unsteady, convective, conductive, species energy, viscous dissipation, work, and radiation terms. The electrostatic model solves Poisson's equation. Both the finite volume method and the boundary element method (BEM) are available for solving Poisson's equation; the BEM method is useful for unbounded problems. The magnetic model solves for the vector magnetic potential from Maxwell's equations, including eddy currents but neglecting displacement currents. The mechanical model is a finite element stress/deformation solver which has been coupled to the flow, heat, electrostatic, and magnetic calculations to study flow-, thermally, electrostatically, and magnetically induced deformations of structures. The mechanical or structural model can accommodate elastic and plastic materials, can handle large non-linear displacements, and can model isotropic and anisotropic materials. The thermal-mechanical coupling involves the solution of the steady state Navier equation with thermoelastic deformation. The electrostatic-mechanical coupling is a calculation of the pressure force due to surface charge on the mechanical structure. Results of CFD-ACE+MEMS modeling of MEMS such as cantilever beams, accelerometers, and comb drives are discussed.

  1. Semiannual Report for Contract NAS1-19480 (Institute for Computer Applications in Science and Engineering)

    DTIC Science & Technology

    1994-06-01

    algorithms for large, irreducibly coupled systems iteratively solve concurrent problems within different subspaces of a Hilbert space, or within different... effective on problems amenable to SIMD solution. Together with researchers at AT&T Bell Labs (Boris Lubachevsky, Albert Greenberg) we have developed... reasonable measurement. In the study of different speedups, various causes of superlinear speedup are also presented. Greenberg, Albert G., Boris D

  2. Plan Debugging Using Approximate Domain Theories.

    DTIC Science & Technology

    1995-03-01

    compelling suggestion that generative planning systems solving large problems will need to exploit the control information implicit in uncertain... control information implicit in uncertain information may well lead the planner to expand one portion of a plan at one point, and a separate portion of... solutions that have been proposed are to abandon declarativism (as suggested in the work on situated automata theory and its variants [1, 16, 56, 72

  3. Decentralized Optimal Dispatch of Photovoltaic Inverters in Residential Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Dhople, Sairaj V.; Johnson, Brian B.

    Summary form only given. Decentralized methods for computing optimal real and reactive power setpoints for residential photovoltaic (PV) inverters are developed in this paper. It is known that conventional PV inverter controllers, which are designed to extract maximum power at unity power factor, cannot address secondary performance objectives such as voltage regulation and network loss minimization. Optimal power flow techniques can be utilized to select which inverters will provide ancillary services, and to compute their optimal real and reactive power setpoints according to well-defined performance criteria and economic objectives. Leveraging advances in sparsity-promoting regularization techniques and semidefinite relaxation, this paper shows how such problems can be solved with reduced computational burden and optimality guarantees. To enable large-scale implementation, a novel algorithmic framework is introduced - based on the so-called alternating direction method of multipliers - by which optimal power flow-type problems in this setting can be systematically decomposed into sub-problems that can be solved in a decentralized fashion by the utility and customer-owned PV systems with limited exchanges of information. Since the computational burden is shared among multiple devices and the requirement of all-to-all communication can be circumvented, the proposed optimization approach scales favorably to large distribution networks.

  4. Optimizing spread dynamics on graphs by message passing

    NASA Astrophysics Data System (ADS)

    Altarelli, F.; Braunstein, A.; Dall'Asta, L.; Zecchina, R.

    2013-09-01

    Cascade processes are responsible for many important phenomena in natural and social sciences. Simple models of irreversible dynamics on graphs, in which nodes activate depending on the state of their neighbors, have been successfully applied to describe cascades in a large variety of contexts. Over the past decades, much effort has been devoted to understanding the typical behavior of the cascades arising from initial conditions extracted at random from some given ensemble. However, the problem of optimizing the trajectory of the system, i.e. of identifying appropriate initial conditions to maximize (or minimize) the final number of active nodes, is still considered to be practically intractable, with the only exception being models that satisfy a sort of diminishing returns property called submodularity. Submodular models can be approximately solved by means of greedy strategies, but by definition they lack cooperative characteristics which are fundamental in many real systems. Here we introduce an efficient algorithm based on statistical physics for the optimization of trajectories in cascade processes on graphs. We show that for a wide class of irreversible dynamics, even in the absence of submodularity, the spread optimization problem can be solved efficiently on large networks. Analytic and algorithmic results on random graphs are complemented by the solution of the spread maximization problem on a real-world network (the Epinions consumer reviews network).
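
    The message-passing optimizer itself is beyond a short sketch, but the greedy baseline it is contrasted with is compact: Monte Carlo greedy seed selection under an independent-cascade model. The graph, activation probability, and sizes below are assumptions (the paper's real-world experiment uses the Epinions network).

        import random

        def simulate_ic(adj, seeds, p, rng):
            # One independent-cascade run: each newly active node gets one
            # chance to activate each inactive out-neighbor with probability p.
            active, frontier = set(seeds), list(seeds)
            while frontier:
                nxt = []
                for u in frontier:
                    for v in adj.get(u, ()):
                        if v not in active and rng.random() < p:
                            active.add(v)
                            nxt.append(v)
                frontier = nxt
            return len(active)

        def greedy_seeds(adj, k, p=0.1, n_sims=100, seed=0):
            # Greedily add the node with the best Monte Carlo spread estimate.
            rng = random.Random(seed)
            chosen = []
            for _ in range(k):
                def gain(v):
                    return sum(simulate_ic(adj, chosen + [v], p, rng)
                               for _ in range(n_sims)) / n_sims
                chosen.append(max((v for v in adj if v not in chosen), key=gain))
            return chosen

        # Toy random digraph (an assumption for the sketch).
        g = random.Random(1)
        adj = {u: [v for v in range(100) if v != u and g.random() < 0.03]
               for u in range(100)}
        print(greedy_seeds(adj, k=3))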

  5. X-Ray Structure determination of the Glycine Cleavage System Protein H of Mycobacterium tuberculosis Using An Inverse Compton Synchrotron X-Ray Source

    PubMed Central

    Abendroth, Jan; McCormick, Michael S.; Edwards, Thomas E.; Staker, Bart; Loewen, Roderick; Gifford, Martin; Rifkin, Jeff; Mayer, Chad; Guo, Wenjin; Zhang, Yang; Myler, Peter; Kelley, Angela; Analau, Erwin; Hewitt, Stephen Nakazawa; Napuli, Alberto J.; Kuhn, Peter; Ruth, Ronald D.; Stewart, Lance J.

    2010-01-01

    Structural genomics discovery projects require ready access to both X-ray and NMR instrumentation which support the collection of experimental data needed to solve large numbers of novel protein structures. The most productive X-ray crystal structure determination laboratories make extensive frequent use of tunable synchrotron X-ray light to solve novel structures by anomalous diffraction methods. This requires that frozen cryo-protected crystals be shipped to large government-run synchrotron facilities for data collection. In an effort to eliminate the need to ship crystals for data collection, we have developed the first laboratory-scale synchrotron light source capable of performing many of the state-of-the-art synchrotron applications in X-ray science. This Compact Light Source is a first-in-class device that uses inverse Compton scattering to generate X-rays of sufficient flux, tunable wavelength and beam size to allow high-resolution X-ray diffraction data collection from protein crystals. We report on benchmarking tests of X-ray diffraction data collection with hen egg white lysozyme, and the successful high-resolution X-ray structure determination of the Glycine cleavage system protein H from Mycobacterium tuberculosis using diffraction data collected with the Compact Light Source X-ray beam. PMID:20364333

  6. Accelerating large scale Kohn-Sham density functional theory calculations with semi-local functionals and hybrid functionals

    NASA Astrophysics Data System (ADS)

    Lin, Lin

    The computational cost of standard Kohn-Sham density functional theory (KSDFT) calculations scales cubically with respect to the system size, which limits its use in large scale applications. In recent years, we have developed an alternative procedure called the pole expansion and selected inversion (PEXSI) method. The PEXSI method solves KSDFT without computing any eigenvalues or eigenvectors, and directly evaluates physical quantities including electron density, energy, atomic force, density of states, and local density of states. The overall algorithm scales at most quadratically for all materials, including insulators, semiconductors and the difficult metallic systems. The PEXSI method can be efficiently parallelized over 10,000 - 100,000 processors on high performance machines. It has been integrated into a number of community electronic structure software packages such as ATK, BigDFT, CP2K, DGDFT, FHI-aims and SIESTA, and has been used in a number of applications with 2D materials beyond 10,000 atoms. The PEXSI method works for LDA, GGA and meta-GGA functionals. The mathematical structure of hybrid functional KSDFT calculations is significantly different, and I will also discuss recent progress on using the adaptive compressed exchange method for accelerating hybrid functional calculations. DOE SciDAC Program, DOE CAMERA Program, LBNL LDRD, Sloan Fellowship.

  7. Formalism for the solution of quadratic Hamiltonians with large cosine terms

    NASA Astrophysics Data System (ADS)

    Ganeshan, Sriram; Levin, Michael

    2016-02-01

    We consider quantum Hamiltonians of the form H = H_0 - U ∑_j cos(C_j), where H_0 is a quadratic function of the position and momentum variables {x_1, p_1, x_2, p_2, ...} and the C_j's are linear in these variables. We allow H_0 and C_j to be completely general with only two restrictions: we require that (1) the C_j's are linearly independent and (2) [C_j, C_k] is an integer multiple of 2πi for all j, k, so that the different cosine terms commute with one another. Our main result is a recipe for solving these Hamiltonians and obtaining their exact low-energy spectrum in the limit U → ∞. This recipe involves constructing creation and annihilation operators and is similar in spirit to the procedure for diagonalizing quadratic Hamiltonians. In addition to our exact solution in the infinite-U limit, we also discuss how to analyze these systems when U is large but finite. Our results are relevant to a number of different physical systems, but one of the most natural applications is to understanding the effects of electron scattering on quantum Hall edge modes. To demonstrate this application, we use our formalism to solve a toy model for a fractional quantum spin Hall edge with different types of impurities.

  8. Parallel Preconditioning for CFD Problems on the CM-5

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    To date, preconditioning methods on massively parallel systems have faced a major difficulty. The preconditioning methods most successful at accelerating the convergence of iterative solvers, such as incomplete LU factorizations, are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive, but requires a large amount of unstructured communication, and (2) the application of the preconditioning matrix in the iteration phase (i.e., triangular solves) is difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric, linear systems, which avoids both difficulties. We explicitly compute an approximate inverse of our original matrix. This new preconditioning matrix can be applied most efficiently in iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, possibly with a dense matrix. Furthermore, the actual computation of the preconditioning matrix has natural parallelism: for a problem of size n, the preconditioning matrix can be computed by solving n independent small least squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.
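
    A minimal serial sketch of this construction, assuming a user-chosen sparsity pattern per column (here the tridiagonal pattern of a toy matrix); each column is an independent least-squares problem, which is exactly the parallelism the abstract describes.

        import numpy as np
        import scipy.sparse as sp

        def approximate_inverse(A, pattern):
            # For each column j, minimize ||A m_j - e_j||_2 with m_j restricted
            # to the positions in pattern[j]; the n least-squares problems are
            # mutually independent, hence naturally parallel.
            A = A.toarray() if sp.issparse(A) else np.asarray(A)
            n = A.shape[0]
            M = np.zeros((n, n))       # kept dense here purely for clarity
            for j in range(n):
                J = pattern[j]
                e = np.zeros(n)
                e[j] = 1.0
                mj, *_ = np.linalg.lstsq(A[:, J], e, rcond=None)
                M[J, j] = mj
            return M

        # Toy usage: tridiagonal A with a matching tridiagonal pattern.
        n = 200
        A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
        pattern = [[i for i in (j - 1, j, j + 1) if 0 <= i < n] for j in range(n)]
        M = approximate_inverse(A, pattern)
        print(np.linalg.norm(A.toarray() @ M - np.eye(n)))  # small residual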

  9. The Role of the Goal in Solving Hard Computational Problems: Do People Really Optimize?

    ERIC Educational Resources Information Center

    Carruthers, Sarah; Stege, Ulrike; Masson, Michael E. J.

    2018-01-01

    The role that the mental, or internal, representation plays when people are solving hard computational problems has largely been overlooked to date, despite the reality that this internal representation drives problem solving. In this work we investigate how performance on versions of two hard computational problems differs based on what internal…

  10. Learning Analysis of K-12 Students' Online Problem Solving: A Three-Stage Assessment Approach

    ERIC Educational Resources Information Center

    Hu, Yiling; Wu, Bian; Gu, Xiaoqing

    2017-01-01

    Problem solving is considered a fundamental human skill. However, large-scale assessment of problem solving in K-12 education remains a challenging task. Researchers have argued for the development of an enhanced assessment approach through joint effort from multiple disciplines. In this study, a three-stage approach based on an evidence-centered…

  11. Identifying barriers to recovery from work related upper extremity disorders: use of a collaborative problem solving technique.

    PubMed

    Shaw, William S; Feuerstein, Michael; Miller, Virginia I; Wood, Patricia M

    2003-08-01

    Improving health and work outcomes for individuals with work related upper extremity disorders (WRUEDs) may require a broad assessment of potential return to work barriers by engaging workers in collaborative problem solving. In this study, half of all nurse case managers from a large workers' compensation system were randomly selected and invited to participate in a randomized, controlled trial of an integrated case management (ICM) approach for WRUEDs. The focus of ICM was problem solving skills training and workplace accommodation. Volunteer nurses attended a 2 day ICM training workshop including instruction in a 6 step process to engage clients in problem solving to overcome barriers to recovery. A chart review of WRUED case management reports (n = 70) during the following 2 years was conducted to extract case managers' reports of barriers to recovery and return to work. Case managers documented from 0 to 21 barriers per case (M = 6.24, SD = 4.02) within 5 domains: signs and symptoms (36%), work environment (27%), medical care (13%), functional limitations (12%), and coping (12%). Compared with case managers who did not receive the training (n = 67), workshop participants identified more barriers related to signs and symptoms, work environment, functional limitations, and coping (p < .05), but not to medical care. Problem solving skills training may help focus case management services on the most salient recovery factors affecting return to work.

  12. Study on the temperature field of large-sized sapphire single crystal furnace

    NASA Astrophysics Data System (ADS)

    Zhai, J. P.; Jiang, J. W.; Liu, K. G.; Peng, X. B.; Jian, D. L.; Li, I. L.

    2018-01-01

    In this paper, the temperature field of large-sized (120 kg, 200 kg and 300 kg grade) sapphire single crystal furnaces was simulated. Keeping the crucible diameter ratio and the insulation system unchanged, the power consumption, axial and radial temperature gradients, solid-liquid interface shape, stress distribution and melt flow were studied. The simulation results showed that as the single crystal furnace size increases, the power consumption increases, the insulating effect of the temperature field worsens, the growth stress increases, and stress concentration occurs. To solve these problems, the middle and bottom insulation systems should be strengthened when designing large-sized sapphire single crystal furnaces. Appropriate radial and axial temperature gradients help reduce the crystal stress and prevent cracking, and expanding the interface between the seed and crystal helps avoid stress accumulation.

  13. Design Aspects of the Rayleigh Convection Code

    NASA Astrophysics Data System (ADS)

    Featherstone, N. A.

    2017-12-01

    Understanding the long-term generation of planetary or stellar magnetic field requires complementary knowledge of the large-scale fluid dynamics pervading large fractions of the object's interior. Such large-scale motions are sensitive to the system's geometry which, in planets and stars, is spherical to a good approximation. As a result, computational models designed to study such systems often solve the MHD equations in spherical geometry, frequently employing a spectral approach involving spherical harmonics. We present computational and user-interface design aspects of one such modeling tool, the Rayleigh convection code, which is suitable for deployment on desktop and petascale-hpc architectures alike. In this poster, we will present an overview of this code's parallel design and its built-in diagnostics-output package. Rayleigh has been developed with NSF support through the Computational Infrastructure for Geodynamics and is expected to be released as open-source software in winter 2017/2018.

  14. Computationally Efficient Modeling and Simulation of Large Scale Systems

    NASA Technical Reports Server (NTRS)

    Jain, Jitesh (Inventor); Koh, Cheng-Kok (Inventor); Balakrishnan, Vankataramanan (Inventor); Cauley, Stephen F (Inventor); Li, Hong (Inventor)

    2014-01-01

    A system for simulating operation of a VLSI interconnect structure having capacitive and inductive coupling between nodes thereof, including a processor, and a memory, the processor configured to perform obtaining a matrix X and a matrix Y containing different combinations of passive circuit element values for the interconnect structure, the element values for each matrix including inductance L and inverse capacitance P, obtaining an adjacency matrix A associated with the interconnect structure, storing the matrices X, Y, and A in the memory, and performing numerical integration to solve first and second equations.

  15. A Study of Convergence of the PMARC Matrices Applicable to WICS Calculations

    NASA Technical Reports Server (NTRS)

    Ghosh, Amitabha

    1997-01-01

    This report discusses some analytical procedures to enhance the real time solutions of PMARC matrices applicable to the Wall Interference Correction Scheme (WICS) currently being implemented at the 12 foot Pressure Tunnel. WICS calculations involve solving large linear systems in a reasonably speedy manner necessitating exploring further improvement in solution time. This paper therefore presents some of the associated theory of the solution of linear systems. Then it discusses a geometrical interpretation of the residual correction schemes. Finally some results of the current investigation are presented.

  17. Optimal placement of excitations and sensors for verification of large dynamical systems

    NASA Technical Reports Server (NTRS)

    Salama, M.; Rose, T.; Garba, J.

    1987-01-01

    The computationally difficult problem of the optimal placement of excitations and sensors to maximize the observed measurements is studied within the framework of combinatorial optimization, and is solved numerically using a variation of the simulated annealing heuristic algorithm. Results of numerical experiments including a square plate and a 960 degrees-of-freedom Control of Flexible Structure (COFS) truss structure, are presented. Though the algorithm produces suboptimal solutions, its generality and simplicity allow the treatment of complex dynamical systems which would otherwise be difficult to handle.
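
    A compact sketch of the simulated-annealing placement idea. The mode-shape data, the determinant-based objective, and the cooling schedule are all assumptions for illustration, not the paper's exact setup.

        import numpy as np

        rng = np.random.default_rng(7)
        n_dofs, k = 200, 10
        Phi = rng.normal(size=(n_dofs, 6))   # assumed mode shapes (stand-in data)

        def score(sel):
            # Log-determinant of the selected rows' information matrix, a
            # common observability-style criterion (an assumption here).
            B = Phi[sel]
            return np.linalg.slogdet(B.T @ B)[1]

        sel = list(rng.choice(n_dofs, size=k, replace=False))
        best, best_score, T = list(sel), score(sel), 1.0
        for _ in range(5000):
            cand = list(sel)
            cand[rng.integers(k)] = rng.integers(n_dofs)  # move one sensor
            if len(set(cand)) < k:
                continue                                  # skip duplicate picks
            d = score(cand) - score(sel)
            if d > 0 or rng.random() < np.exp(d / T):     # Metropolis acceptance
                sel = cand
                if score(sel) > best_score:
                    best, best_score = list(sel), score(sel)
            T *= 0.999                                    # geometric cooling
        print(sorted(best), best_score)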

  18. A numerical method for solving systems of linear ordinary differential equations with rapidly oscillating solutions

    NASA Technical Reports Server (NTRS)

    Bernstein, Ira B.; Brookshaw, Leigh; Fox, Peter A.

    1992-01-01

    The present numerical method for accurate and efficient solution of systems of linear equations proceeds by numerically developing a set of basis solutions characterized by slowly varying dependent variables. The solutions thus obtained are shown to have a computational overhead largely independent of the small size of the scale length which characterizes the solutions; in many cases, the technique obviates series solutions near singular points, and its known sources of error can be easily controlled without a substantial increase in computational time.

  18. Mars Rover imaging systems and directional filtering

    NASA Technical Reports Server (NTRS)

    Wang, Paul P.

    1989-01-01

    Computer literature searches were carried out at Duke University and NASA Langley Research Center. The purpose was to build a personal knowledge base on the technical problems of pattern recognition and image understanding that must be solved for the Mars Rover and Sample Return Mission. An intensive study of a large collection of relevant literature resulted in a compilation of all important documents in one place. Furthermore, the documents are being classified into: Mars Rover; computer vision (theory); imaging systems; pattern recognition methodologies; and other smart techniques (AI, neural networks, fuzzy logic, etc.).

  19. Decreasing the temporal complexity for nonlinear, implicit reduced-order models by forecasting

    DOE PAGES

    Carlberg, Kevin; Ray, Jaideep; van Bloemen Waanders, Bart

    2015-02-14

    Implicit numerical integration of nonlinear ODEs requires solving a system of nonlinear algebraic equations at each time step. Each of these systems is often solved by a Newton-like method, which incurs a sequence of linear-system solves. Most model-reduction techniques for nonlinear ODEs exploit knowledge of the system's spatial behavior to reduce the computational complexity of each linear-system solve. However, the number of linear-system solves for the reduced-order simulation often remains roughly the same as that for the full-order simulation. We propose exploiting knowledge of the model's temporal behavior to (1) forecast the unknown variable of the reduced-order system of nonlinear equations at future time steps, and (2) use this forecast as an initial guess for the Newton-like solver during the reduced-order-model simulation. To compute the forecast, we propose using the Gappy POD technique. As a result, the goal is to generate an accurate initial guess so that the Newton solver requires many fewer iterations to converge, thereby decreasing the number of linear-system solves in the reduced-order-model simulation.
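
    The warm-start idea can be sketched in a few lines of Python; here a simple linear extrapolation stands in for the paper's Gappy POD forecast, and the `residual` function and the SciPy solver are assumptions for illustration.

      import numpy as np
      from scipy.optimize import fsolve

      def implicit_step(residual, history):
          # Forecast the next state from the two previous ones (a stand-in
          # for Gappy POD), then use it as the nonlinear solver's guess.
          x_prev, x_curr = history[-2], history[-1]
          guess = 2.0 * x_curr - x_prev
          x_next = fsolve(residual, guess)   # Newton-like solve, warm-started
          history.append(x_next)
          return x_next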

  1. DL_MG: A Parallel Multigrid Poisson and Poisson-Boltzmann Solver for Electronic Structure Calculations in Vacuum and Solution.

    PubMed

    Womack, James C; Anton, Lucian; Dziedzic, Jacek; Hasnip, Phil J; Probert, Matt I J; Skylaris, Chris-Kriton

    2018-03-13

    The solution of the Poisson equation is a crucial step in electronic structure calculations, yielding the electrostatic potential, a key component of the quantum mechanical Hamiltonian. In recent decades, theoretical advances and increases in computer performance have made it possible to simulate the electronic structure of extended systems in complex environments. This requires the solution of more complicated variants of the Poisson equation, featuring nonhomogeneous dielectric permittivities, ionic concentrations with nonlinear dependencies, and diverse boundary conditions. The analytic solutions generally used to solve the Poisson equation in vacuum (or with homogeneous permittivity) are not applicable in these circumstances, and numerical methods must be used. In this work, we present DL_MG, a flexible, scalable, and accurate solver library, developed specifically to tackle the challenges of solving the Poisson equation in modern large-scale electronic structure calculations on parallel computers. Our solver is based on the multigrid approach and uses an iterative high-order defect correction method to improve the accuracy of solutions. Using two chemically relevant model systems, we tested the accuracy and computational performance of DL_MG when solving the generalized Poisson and Poisson-Boltzmann equations, demonstrating excellent agreement with analytic solutions and efficient scaling to ~10^9 unknowns and hundreds of CPU cores. We also applied DL_MG in actual large-scale electronic structure calculations, using the ONETEP linear-scaling electronic structure package to study a 2615 atom protein-ligand complex with routinely available computational resources. In these calculations, the overall execution time with DL_MG was not significantly greater than the time required for calculations using a conventional FFT-based solver.
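
    The high-order defect-correction loop mentioned above can be summarized as follows; `solve_low` and `apply_high` are assumed placeholders for a low-order multigrid solve and a high-order discretization of the operator, not DL_MG's actual API.

      def defect_correction(solve_low, apply_high, b, n_cycles=5):
          # Reuse a cheap low-order solver to reach high-order accuracy.
          x = solve_low(b)
          for _ in range(n_cycles):
              d = b - apply_high(x)    # defect w.r.t. the high-order operator
              x = x + solve_low(d)     # low-order correction
          return x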

  2. New Ideas on the Design of the Web-Based Learning System Oriented to Problem Solving from the Perspective of Question Chain and Learning Community

    ERIC Educational Resources Information Center

    Zhang, Yin; Chu, Samuel K. W.

    2016-01-01

    In recent years, a number of models concerning problem solving systems have been put forward. However, many of them stress technology and neglect research on problem solving itself, especially the learning mechanism related to problem solving. In this paper, we analyze the learning mechanism of problem solving, and propose that when…

  3. Size effects of solvent molecules on the phase behavior and effective interaction of colloidal systems with the bridging attraction.

    PubMed

    Chen, Jie; Wang, Xuewu; Kline, Steven R; Liu, Yun

    2016-11-16

    There has been much recent research interest in understanding the phase behavior of colloidal systems interacting with a bridging attraction, where the small solvent particles and large solute colloidal particles can be reversibly associated with each other. These systems show interesting phase behavior compared to the more widely studied depletion attraction systems. Here, we use Baxter's two-component sticky hard sphere model with a Percus-Yevick closure to solve the Ornstein-Zernike equation and study the size effect on colloidal systems with bridging attractions. The spinodal decomposition regions, percolation transition boundaries and binodal regions are systematically investigated as a function of the relative size of the small solvent and large solute particles, as well as the attraction strength between the small and large particles. In the phase space determined by the concentrations of small and large particles, the spinodal and binodal regions form isolated islands. The locations and shapes of the spinodal and binodal regions depend sensitively on the relative size of the small and large particles and the attraction strength between them. The percolation region shrinks with decreasing size ratio, while the binodal region slightly expands as the size ratio decreases. Our results are very important in understanding the phase behavior of bridging attraction colloidal systems, a model system that provides insight into oppositely charged colloidal systems, protein phase behavior, and colloidal gelation mechanisms.
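
    For reference, the two-component Ornstein-Zernike relation and the Percus-Yevick closure used in such studies take the form below (notation assumed: i, j, k label species, rho_k the number densities):

      h_{ij}(r) = c_{ij}(r) + \sum_k \rho_k \int c_{ik}(|\mathbf{r}-\mathbf{r}'|)\, h_{kj}(r')\, \mathrm{d}\mathbf{r}', \qquad
      c_{ij}(r) = \left(1 - e^{\beta u_{ij}(r)}\right) g_{ij}(r)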

  4. Best geoscience approach to complex systems in environment

    NASA Astrophysics Data System (ADS)

    Mezemate, Yacine; Tchiguirinskaia, Ioulia; Schertzer, Daniel

    2017-04-01

    The environment is a social issue that continues to grow in importance. Its complexity, both cross-disciplinary and multi-scale, has given rise to a large number of scientific and technological locks, that complex systems approaches can solve. Significant challenges must met to achieve the understanding of the environmental complexes systems. There study should proceed in some steps in which the use of data and models is crucial: - Exploration, observation and basic data acquisition - Identification of correlations, patterns, and mechanisms - Modelling - Model validation, implementation and prediction - Construction of a theory Since the e-learning becomes a powerful tool for knowledge and best practice shearing, we use it to teach the environmental complexities and systems. In this presentation we promote the e-learning course dedicated for a large public (undergraduates, graduates, PhD students and young scientists) which gather and puts in coherence different pedagogical materials of complex systems and environmental studies. This course describes a complex processes using numerous illustrations, examples and tests that make it "easy to enjoy" learning process. For the seek of simplicity, the course is divided in different modules and at the end of each module a set of exercises and program codes are proposed for a best practice. The graphical user interface (GUI) which is constructed using an open source Opale Scenari offers a simple navigation through the different module. The course treats the complex systems that can be found in environment and their observables, we particularly highlight the extreme variability of these observables over a wide range of scales. Using the multifractal formalism through different applications (turbulence, precipitation, hydrology) we demonstrate how such extreme variability of the geophysical/biological fields should be used solving everyday (geo-)environmental chalenges.

  5. Optimization and Control of Agent-Based Models in Biology: A Perspective.

    PubMed

    An, G; Fitzpatrick, B G; Christley, S; Federico, P; Kanarek, A; Neilan, R Miller; Oremland, M; Salinas, R; Laubenbacher, R; Lenhart, S

    2017-01-01

    Agent-based models (ABMs) have become an increasingly important mode of inquiry for the life sciences. They are particularly valuable for systems that are not understood well enough to build an equation-based model. These advantages, however, are counterbalanced by the difficulty of analyzing and using ABMs, due to the lack of the type of mathematical tools available for more traditional models, which leaves simulation as the primary approach. As models become large, simulation becomes challenging. This paper proposes a novel approach to two mathematical aspects of ABMs, optimization and control, and it presents a few first steps outlining how one might carry out this approach. Rather than viewing the ABM as a model, it is to be viewed as a surrogate for the actual system. For a given optimization or control problem (which may change over time), the surrogate system is modeled instead, using data from the ABM and a modeling framework for which ready-made mathematical tools exist, such as differential equations, or for which control strategies can be explored more easily. Once the optimization problem is solved for the model of the surrogate, it is then lifted to the surrogate and tested. The final step is to lift the optimization solution from the surrogate system to the actual system. This program is illustrated with published work, using two relatively simple ABMs as a demonstration, Sugarscape and a consumer-resource ABM. Specific techniques discussed include dimension reduction and approximation of an ABM by difference equations as well as by systems of PDEs, related to certain specific control objectives. This demonstration illustrates the very challenging mathematical problems that need to be solved before this approach can be realistically applied to complex and large ABMs, current and future. The paper outlines a research program to address them.

  6. A Novel Real-Time Path Servo Control of a Hardware-in-the-Loop for a Large-Stroke Asymmetric Rod-Less Pneumatic System under Variable Loads.

    PubMed

    Lin, Hao-Ting

    2017-06-04

    This project develops a novel large-stroke asymmetric pneumatic servo system with hardware-in-the-loop for path tracking control under variable loads, based on the MATLAB Simulink real-time system. High-pressure compressed air supplied by an air compressor drives the large-stroke asymmetric rod-less pneumatic actuator through a pneumatic proportional servo valve; the actuator moves due to the pressure difference between its two chambers. Highly nonlinear mathematical models of the large-stroke asymmetric pneumatic system were analyzed and developed. A functional-approximation-based sliding mode controller (FASC) is developed to handle the uncertain, time-varying nonlinear system. The MATLAB Simulink real-time system serves as the main control unit of the hardware-in-the-loop system, providing driver blocks for analog and digital I/O, a linear encoder, a CPU, and the large-stroke asymmetric rod-less pneumatic system. The cylinder position is measured directly by a position sensor, and the measured signals serve as feedback for real-time positioning and path tracking control. Finally, real-time control of the large-stroke asymmetric pneumatic servo system, comprising the measuring system, the actuator, the data acquisition system, and the control strategy software, is implemented, improving position precision and trajectory tracking performance. Experimental results show that fifth-order paths at various strokes and a sine-wave path are successfully tracked on the test rig, and variable-load experiments at different angles were also carried out.

  7. A Novel Real-Time Path Servo Control of a Hardware-in-the-Loop for a Large-Stroke Asymmetric Rod-Less Pneumatic System under Variable Loads

    PubMed Central

    Lin, Hao-Ting

    2017-01-01

    This project develops a novel large-stroke asymmetric pneumatic servo system with hardware-in-the-loop for path tracking control under variable loads, based on the MATLAB Simulink real-time system. High-pressure compressed air supplied by an air compressor drives the large-stroke asymmetric rod-less pneumatic actuator through a pneumatic proportional servo valve; the actuator moves due to the pressure difference between its two chambers. Highly nonlinear mathematical models of the large-stroke asymmetric pneumatic system were analyzed and developed. A functional-approximation-based sliding mode controller (FASC) is developed to handle the uncertain, time-varying nonlinear system. The MATLAB Simulink real-time system serves as the main control unit of the hardware-in-the-loop system, providing driver blocks for analog and digital I/O, a linear encoder, a CPU, and the large-stroke asymmetric rod-less pneumatic system. The cylinder position is measured directly by a position sensor, and the measured signals serve as feedback for real-time positioning and path tracking control. Finally, real-time control of the large-stroke asymmetric pneumatic servo system, comprising the measuring system, the actuator, the data acquisition system, and the control strategy software, is implemented, improving position precision and trajectory tracking performance. Experimental results show that fifth-order paths at various strokes and a sine-wave path are successfully tracked on the test rig, and variable-load experiments at different angles were also carried out. PMID:28587220
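
    In generic form, a sliding mode controller of the kind extended by the FASC drives a sliding variable to zero, for example (a textbook form with tracking error e; the paper's function-approximation terms are omitted, and the symbols are assumptions):

      s = \dot{e} + \lambda e, \qquad u = \hat{u}_{eq} - K\,\mathrm{sat}(s/\Phi)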

  8. Finding a roadmap to achieve large neuromorphic hardware systems

    PubMed Central

    Hasler, Jennifer; Marr, Bo

    2013-01-01

    Neuromorphic systems are gaining increasing importance in an era where CMOS digital computing techniques are reaching physical limits. These silicon systems mimic extremely energy-efficient neural computing structures, potentially both for solving engineering applications and for understanding neural computation. Toward this end, the authors provide a glimpse at what the technology evolution roadmap looks like for these systems, so that neuromorphic engineers may gain the same benefit of anticipation and foresight that IC designers gained from Moore's law many years ago. Scaling of energy efficiency, performance, and size is discussed, as well as how the implementation and application space of neuromorphic systems is expected to evolve over time. PMID:24058330

  9. Maneuver simulations of flexible spacecraft by solving TPBVP

    NASA Technical Reports Server (NTRS)

    Bainum, Peter M.; Li, Feiyue

    1991-01-01

    The optimal control of large-angle rapid maneuvers and vibrations of a Shuttle mast reflector system is considered. The nonlinear equations of motion are formulated using Lagrange's equations, with the mast modeled as a continuous beam. The nonlinear terms in the equations come from the coupling between the angular velocities, the modal coordinates, and the modal rates. Pontryagin's Maximum Principle is applied to the slewing problem to derive the necessary conditions for the optimal controls, which are bounded by given saturation levels. The resulting two-point boundary value problem (TPBVP) is then solved using the quasilinearization algorithm and the method of particular solutions. In the numerical simulations, the structural parameters and the control limits from the Spacecraft Control Lab Experiment (SCOLE) are used. In the 2-D case, only the motion in the plane of an Earth orbit, i.e., single-axis slewing motion, is discussed. In the 3-D slewing, the mast is modeled as a continuous beam subjected to 3-D deformations. Numerical results for both the linearized system and the nonlinear system are presented to compare the differences in their time response.
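
    The necessary conditions referred to take the standard Pontryagin form, stated here generically (the Hamiltonian H for the SCOLE model is specific to the paper):

      \dot{x} = \frac{\partial H}{\partial \lambda}, \qquad
      \dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
      u^{*} = \arg\max_{|u| \le u_{\max}} H(x, \lambda, u)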

  10. Efficient and Robust Optimization for Building Energy Simulation

    PubMed Central

    Pourarian, Shokouh; Kearsley, Anthony; Wen, Jin; Pertzborn, Amanda

    2016-01-01

    Efficiently, robustly and accurately solving large sets of structured, non-linear algebraic and differential equations is one of the most computationally expensive steps in the dynamic simulation of building energy systems. Here, the efficiency, robustness and accuracy of two commonly employed solution methods are compared. The comparison is conducted using the HVACSIM+ software package, a component-based building system simulation tool. The HVACSIM+ software presently employs Powell's Hybrid method to solve systems of nonlinear algebraic equations that model the dynamics of energy states and interactions within buildings. It is shown here that Powell's method does not always converge to a solution. Since a myriad of other numerical methods are available, the question arises as to which method is most appropriate for building energy simulation. This paper finds considerable computational benefits result from replacing the Powell's Hybrid method solver in HVACSIM+ with a solver more appropriate for the challenges particular to numerical simulations of buildings. Evidence is provided that a variant of the Levenberg-Marquardt solver has superior accuracy and robustness compared to the Powell's Hybrid method presently used in HVACSIM+. PMID:27325907

  11. Efficient and Robust Optimization for Building Energy Simulation.

    PubMed

    Pourarian, Shokouh; Kearsley, Anthony; Wen, Jin; Pertzborn, Amanda

    2016-06-15

    Efficiently, robustly and accurately solving large sets of structured, non-linear algebraic and differential equations is one of the most computationally expensive steps in the dynamic simulation of building energy systems. Here, the efficiency, robustness and accuracy of two commonly employed solution methods are compared. The comparison is conducted using the HVACSIM+ software package, a component-based building system simulation tool. The HVACSIM+ software presently employs Powell's Hybrid method to solve systems of nonlinear algebraic equations that model the dynamics of energy states and interactions within buildings. It is shown here that Powell's method does not always converge to a solution. Since a myriad of other numerical methods are available, the question arises as to which method is most appropriate for building energy simulation. This paper finds considerable computational benefits result from replacing the Powell's Hybrid method solver in HVACSIM+ with a solver more appropriate for the challenges particular to numerical simulations of buildings. Evidence is provided that a variant of the Levenberg-Marquardt solver has superior accuracy and robustness compared to the Powell's Hybrid method presently used in HVACSIM+.
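
    For context, a common form of the Levenberg-Marquardt update for a residual vector r with Jacobian J damps the Gauss-Newton step as follows (one standard variant; the solver evaluated in the paper may differ in details):

      \left(J^{T} J + \lambda\, \mathrm{diag}(J^{T} J)\right) \delta = -J^{T} r, \qquad x \leftarrow x + \delta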

  12. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy.

    PubMed

    Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R

    2017-01-21

    The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reduction of computation times with respect to several previous state-of-the-art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large-scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.

  13. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1994-01-01

    The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly and accurately). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems are solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two, for which additional systems are solved, the equations and solution procedures are analogous to those for the first-order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.
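
    Forward-mode automatic differentiation, the building block contrasted with hand-differentiated code above, can be illustrated with dual numbers in a few lines of Python (a toy sketch, not the AD tools used in the study):

      class Dual:
          """Forward-mode automatic differentiation via dual numbers."""
          def __init__(self, val, dot=0.0):
              self.val, self.dot = val, dot
          def __add__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val + o.val, self.dot + o.dot)
          __radd__ = __add__
          def __mul__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val * o.val,
                          self.dot * o.val + self.val * o.dot)
          __rmul__ = __mul__

      # df/dx of f(x) = x*(x + 3) at x = 2, seeded with dot = 1
      x = Dual(2.0, 1.0)
      y = x * (x + 3)
      print(y.val, y.dot)   # 10.0, 7.0  (f = x^2 + 3x, f' = 2x + 3)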

  14. A Decision Support System for Solving Multiple Criteria Optimization Problems

    ERIC Educational Resources Information Center

    Filatovas, Ernestas; Kurasova, Olga

    2011-01-01

    In this paper, multiple criteria optimization has been investigated. A new decision support system (DSS) has been developed for interactive solving of multiple criteria optimization problems (MOPs). The weighted-sum (WS) approach is implemented to solve the MOPs. The MOPs are solved by selecting different weight coefficient values for the criteria…
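
    The weighted-sum scalarization referred to replaces the vector objective with a single criterion (standard form; the DSS lets the user vary the weights w_i):

      \min_{x \in X} \; \sum_{i} w_i f_i(x), \qquad w_i \ge 0, \quad \sum_i w_i = 1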

  15. The Intelligent Control System and Experiments for an Unmanned Wave Glider.

    PubMed

    Liao, Yulei; Wang, Leifeng; Li, Yiming; Li, Ye; Jiang, Quanquan

    2016-01-01

    Designing the control system of an Unmanned Wave Glider (UWG) is challenging: the vehicle is weakly maneuverable, subject to large time lags and large disturbances, and difficult to capture in an accurate mathematical model. Moreover, to autonomously perform marine environment monitoring over long time scales and large spatial scales, a UWG demands high intelligence and reliability. This paper focuses on the "Ocean Rambler" UWG. First, the intelligent control system architecture is designed based on the cerebrum basic-function combination zone theory and a hierarchical control method. The hardware and software design of the embedded motion control system is discussed in detail, and a motion control system based on a four-layer rational behavior model is proposed. Then, combined with the line-of-sight (LOS) method, a self-adapting PID guidance law is proposed to compensate for the steady-state error in path following caused by marine environmental disturbances, especially current. Based on the S-surface control method, an improved S-surface heading controller is proposed to solve the heading control problem of this weakly maneuverable vehicle under large disturbances. Finally, simulation experiments were carried out, and the UWG completed autonomous path following and marine environment monitoring in sea trials. The simulation and sea-trial results show that the proposed intelligent control system, guidance law and controller achieve favorable control performance, verifying the feasibility and reliability of the designed intelligent control system.

  16. The Intelligent Control System and Experiments for an Unmanned Wave Glider

    PubMed Central

    Liao, Yulei; Wang, Leifeng; Li, Yiming; Li, Ye; Jiang, Quanquan

    2016-01-01

    Designing the control system of an Unmanned Wave Glider (UWG) is challenging: the vehicle is weakly maneuverable, subject to large time lags and large disturbances, and difficult to capture in an accurate mathematical model. Moreover, to autonomously perform marine environment monitoring over long time scales and large spatial scales, a UWG demands high intelligence and reliability. This paper focuses on the “Ocean Rambler” UWG. First, the intelligent control system architecture is designed based on the cerebrum basic-function combination zone theory and a hierarchical control method. The hardware and software design of the embedded motion control system is discussed in detail, and a motion control system based on a four-layer rational behavior model is proposed. Then, combined with the line-of-sight (LOS) method, a self-adapting PID guidance law is proposed to compensate for the steady-state error in path following caused by marine environmental disturbances, especially current. Based on the S-surface control method, an improved S-surface heading controller is proposed to solve the heading control problem of this weakly maneuverable vehicle under large disturbances. Finally, simulation experiments were carried out, and the UWG completed autonomous path following and marine environment monitoring in sea trials. The simulation and sea-trial results show that the proposed intelligent control system, guidance law and controller achieve favorable control performance, verifying the feasibility and reliability of the designed intelligent control system. PMID:28005956

  17. Multistage Spectral Relaxation Method for Solving the Hyperchaotic Complex Systems

    PubMed Central

    Saberi Nik, Hassan; Rebelo, Paulo

    2014-01-01

    We present a pseudospectral method application for solving the hyperchaotic complex systems. The proposed method, called the multistage spectral relaxation method (MSRM) is based on a technique of extending Gauss-Seidel type relaxation ideas to systems of nonlinear differential equations and using the Chebyshev pseudospectral methods to solve the resulting system on a sequence of multiple intervals. In this new application, the MSRM is used to solve famous hyperchaotic complex systems such as hyperchaotic complex Lorenz system and the complex permanent magnet synchronous motor. We compare this approach to the Runge-Kutta based ode45 solver to show that the MSRM gives accurate results. PMID:25386624

  18. Solving LP Relaxations of Large-Scale Precedence Constrained Problems

    NASA Astrophysics Data System (ADS)

    Bienstock, Daniel; Zuckerberg, Mark

    We describe new algorithms for solving linear programming relaxations of very large precedence constrained production scheduling problems. We present theory that motivates a new set of algorithmic ideas that can be employed on a wide range of problems; on data sets arising in the mining industry our algorithms prove effective on problems with many millions of variables and constraints, obtaining provably optimal solutions in a few minutes of computation.

  19. High-performance image reconstruction in fluorescence tomography on desktop computers and graphics hardware.

    PubMed

    Freiberger, Manuel; Egger, Herbert; Liebmann, Manfred; Scharfetter, Hermann

    2011-11-01

    Image reconstruction in fluorescence optical tomography is a three-dimensional nonlinear ill-posed problem governed by a system of partial differential equations. In this paper we demonstrate that a combination of state-of-the-art numerical algorithms and a careful hardware-optimized implementation makes it possible to solve this large-scale inverse problem in a few seconds on standard desktop PCs with modern graphics hardware. In particular, we present methods to solve not only the forward but also the nonlinear inverse problem by massively parallel programming on graphics processors. A comparison of optimized CPU and GPU implementations shows that the reconstruction can be accelerated by factors of about 15 through the use of the graphics hardware without compromising the accuracy of the reconstructed images.

  20. High-frequency CAD-based scattering model: SERMAT

    NASA Astrophysics Data System (ADS)

    Goupil, D.; Boutillier, M.

    1991-09-01

    Specifications for an industrial radar cross section (RCS) calculation code are given: it must be able to exchange data with many computer aided design (CAD) systems, it must be fast, and it must have powerful graphic tools. Classical physical optics (PO) and equivalent currents (EC) techniques have proven their efficiency on simple objects for a long time. Difficult geometric problems occur when objects with very complex shapes have to be computed. Only a specific geometric code can solve these problems. We have established that, once these problems have been solved: (1) PO and EC give good results on complex objects of large size compared to wavelength; and (2) the implementation of these objects in a software package (SERMAT) allows fast and sufficiently precise domain RCS calculations to meet industry requirements in the domain of stealth.

  1. Improvement in Generic Problem-Solving Abilities of Students by Use of Tutor-less Problem-Based Learning in a Large Classroom Setting

    PubMed Central

    Klegeris, Andis; Bahniwal, Manpreet; Hurren, Heather

    2013-01-01

    Problem-based learning (PBL) was originally introduced in medical education programs as a form of small-group learning, but its use has now spread to large undergraduate classrooms in various other disciplines. Introduction of new teaching techniques, including PBL-based methods, needs to be justified by demonstrating the benefits of such techniques over classical teaching styles. Previously, we demonstrated that introduction of tutor-less PBL in a large third-year biochemistry undergraduate class increased student satisfaction and attendance. The current study assessed the generic problem-solving abilities of students from the same class at the beginning and end of the term, and compared student scores with similar data obtained in three classes not using PBL. Two generic problem-solving tests of equal difficulty were administered such that students took different tests at the beginning and the end of the term. Blinded marking showed a statistically significant 13% increase in the test scores of the biochemistry students exposed to PBL, while no trend toward significant change in scores was observed in any of the control groups not using PBL. Our study is among the first to demonstrate that use of tutor-less PBL in a large classroom leads to statistically significant improvement in generic problem-solving skills of students. PMID:23463230

  2. Solving the Vlasov equation in two spatial dimensions with the Schrödinger method

    NASA Astrophysics Data System (ADS)

    Kopp, Michael; Vattis, Kyriakos; Skordis, Constantinos

    2017-12-01

    We demonstrate that the Vlasov equation describing collisionless self-gravitating matter may be solved with the so-called Schrödinger method (ScM). With the ScM, one solves the Schrödinger-Poisson system of equations for a complex wave function in d dimensions, rather than the Vlasov equation for a 2d-dimensional phase space density. The ScM also allows calculating the d-dimensional cumulants directly through quasilocal manipulations of the wave function, avoiding the complexity of 2d-dimensional phase space. We perform for the first time a quantitative comparison of the ScM and a conventional Vlasov solver in d = 2 dimensions. Our numerical tests were carried out using two types of cold cosmological initial conditions: the classic collapse of a sine wave and those of a Gaussian random field as commonly used in cosmological cold dark matter N-body simulations. We compare the first three cumulants, that is, the density, velocity and velocity dispersion, to those obtained by solving the Vlasov equation using the publicly available code ColDICE. We find excellent qualitative and quantitative agreement between these codes, demonstrating the feasibility and advantages of the ScM as an alternative to N-body simulations. We discuss the emergence of effective vorticity in the ScM through the winding number around the points where the wave function vanishes. As an application we evaluate the background pressure induced by the non-linearity of large scale structure formation, thereby estimating the magnitude of cosmological backreaction. We find that it is negligibly small and has time dependence and magnitude compatible with expectations from the effective field theory of large scale structure.
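
    In one common notation the Schrödinger-Poisson system solved by the ScM reads as follows (signs and constants as usually written for self-gravitating matter; exact conventions vary by author):

      i\hbar\, \partial_t \psi = -\frac{\hbar^2}{2m} \nabla^2 \psi + m V \psi, \qquad
      \nabla^2 V = 4\pi G \left( m|\psi|^2 - \bar{\rho} \right)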

  3. Nonlinear Dynamic Model-Based Multiobjective Sensor Network Design Algorithm for a Plant with an Estimator-Based Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard

    Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first, followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO2 capture. Due to the computational expense, a reduced-order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fath, L., E-mail: lukas.fath@kit.edu; Hochbruck, M., E-mail: marlis.hochbruck@kit.edu; Singh, C.V., E-mail: chandraveer.singh@utoronto.ca

    Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement, since they require either analytical Hessians or the solution of nonlinear systems arising from constraints. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and ease of implementation in standard software without Hessians or constraint solvers. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice–ice friction, the new filter is shown to speed up the computations of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift less than 1% over a 50 ps simulation.

  5. A high-accuracy optical linear algebra processor for finite element applications

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Taylor, B. K.

    1984-01-01

    Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32-bit accuracy obtainable from digital machines. To obtain this required 32-bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
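
    Multiplication by digital convolution, mentioned above, can be checked with a short scalar Python sketch: the digit vectors are convolved and carries are then propagated (the base and least-significant-first digit order are illustrative choices):

      def conv_multiply(a_digits, b_digits, base=10):
          # digits are least-significant first, e.g. 12 -> [2, 1]
          prod = [0] * (len(a_digits) + len(b_digits))
          for i, a in enumerate(a_digits):      # discrete convolution
              for j, b in enumerate(b_digits):
                  prod[i + j] += a * b
          carry = 0
          for k in range(len(prod)):            # carry propagation
              total = prod[k] + carry
              prod[k] = total % base
              carry = total // base
          return prod                           # 12 * 34 -> [8, 0, 4, 0], i.e. 408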

  6. A randomized controlled trial of a Dutch version of systems training for emotional predictability and problem solving for borderline personality disorder.

    PubMed

    Bos, Elisabeth H; van Wel, E Bas; Appelo, Martin T; Verbraak, Marc J P M

    2010-04-01

    Systems Training for Emotional Predictability and Problem Solving (STEPPS) is a group treatment for persons with borderline personality disorder (BPD) that is relatively easy to implement. We investigated the efficacy of a Dutch version of this treatment (VERS). Seventy-nine DSM-IV BPD patients were randomly assigned to STEPPS plus an adjunctive individual therapy, or to treatment as usual. Assessments took place before and after the intervention, and at a 6-month follow-up. STEPPS recipients showed a significantly greater reduction in general psychiatric and BPD-specific symptomatology than subjects assigned to treatment as usual; these differences remained significant at follow-up. STEPPS also led to greater improvement in quality of life, especially at follow-up. No differences in impulsive or parasuicidal behavior were observed. Effect sizes for the differences between the treatments were moderate to large. The results suggest that the brief STEPPS program combined with limited individual therapy can improve BPD-treatment in a number of ways.

  7. Nonlinear Dynamic Model-Based Multiobjective Sensor Network Design Algorithm for a Plant with an Estimator-Based Control System

    DOE PAGES

    Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard; ...

    2017-06-06

    Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first, followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO2 capture. Due to the computational expense, a reduced-order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.

  8. Advanced design for orbital debris removal in support of solar system exploration

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The development of an Autonomous Space Processor for Orbital Debris (ASPOD) is the ultimate goal. The craft will process, in situ, orbital debris using resources available in low Earth orbit (LEO). The serious problem of orbital debris is briefly described and the nature of the large debris population is outlined. This year, focus was on development of a versatile robotic manipulator to augment an existing robotic arm; incorporation of remote operation of robotic arms; and formulation of optimal (time and energy) trajectory planning algorithms for coordinating robotic arms. The mechanical design of the new arm is described in detail. The versatile work envelope is explained, showing the flexibility of the new design. Several telemetry communication systems are described which will enable the remote operation of the robotic arms. The trajectory planning algorithms are fully developed for both the time-optimal and energy-optimal problems. The time-optimal problem is solved using phase plane techniques, while the energy-optimal problem is solved using dynamic programming.

  9. Moditored unsaturated soil transport processes as a support for large scale soil and water management

    NASA Astrophysics Data System (ADS)

    Vanclooster, Marnik

    2010-05-01

    The current societal demand for sustainable soil and water management is very large. The drivers of global and climate change exert many pressures on soil and water ecosystems, endangering appropriate ecosystem functioning. Unsaturated soil transport processes play a key role in soil-water system functioning, as they control the fluxes of water and nutrients from the soil to plants (the pedo-biosphere link), the infiltration flux of precipitated water to groundwater, and the evaporative flux, and hence the feedback from the soil to the climate system. Yet, unsaturated soil transport processes are difficult to quantify, since they are affected by the huge variability of the governing properties at different space-time scales and by the intrinsic non-linearity of the transport processes. The incompatibility between the scale at which processes can reasonably be characterized, the scale at which the theory correctly describes them, and the scale at which the soil and water system must be managed calls for further development of scaling procedures in unsaturated zone science. It also calls for a better integration of theoretical and modelling approaches to elucidate transport processes at the appropriate scales, compatible with the objective of sustainable soil and water management. Moditoring science, i.e. the interdisciplinary research domain linking modelling and monitoring science, is currently evolving significantly in unsaturated zone hydrology. In this presentation, a review of current moditoring strategies and techniques is given and illustrated for solving large-scale soil and water management problems. This also allows identifying research needs in the interdisciplinary domain of modelling and monitoring, and improving the integration of unsaturated zone science in solving soil and water management issues. Focus is given to examples of large-scale soil and water management problems in Europe.

  10. A Lane-Level LBS System for Vehicle Network with High-Precision BDS/GPS Positioning

    PubMed Central

    Guo, Chi; Guo, Wenfei; Cao, Guangyi; Dong, Hongbo

    2015-01-01

    In recent years, research on vehicle network location service has begun to focus on its intelligence and precision. The accuracy of space-time information has become a core factor for vehicle network systems in a mobile environment. However, difficulties persist in vehicle satellite positioning since deficiencies in the provision of high-quality space-time references greatly limit the development and application of vehicle networks. In this paper, we propose a high-precision-based vehicle network location service to solve this problem. The major components of this study include the following: (1) application of wide-area precise positioning technology to the vehicle network system. An adaptive correction message broadcast protocol is designed to satisfy the requirements for large-scale target precise positioning in the mobile Internet environment; (2) development of a concurrence service system with a flexible virtual expansion architecture to guarantee reliable data interaction between vehicles and the background; (3) verification of the positioning precision and service quality in the urban environment. Based on this high-precision positioning service platform, a lane-level location service is designed to solve a typical traffic safety problem. PMID:25755665

  11. The numerical solution of the Helmholtz equation for wave propagation problems in underwater acoustics

    NASA Technical Reports Server (NTRS)

    Bayliss, A.; Goldstein, C. I.; Turkel, E.

    1984-01-01

    The Helmholtz equation (-Δ - k²n²)u = 0, with a variable index of refraction n and a suitable radiation condition at infinity, serves as a model for a wide variety of wave propagation problems. A numerical algorithm was developed and a computer code implemented that can effectively solve this equation in the intermediate frequency range. The equation is discretized using the finite element method, thus allowing for the modeling of complicated geometries (including interfaces) and complicated boundary conditions. A global radiation boundary condition is imposed at the far-field boundary that is exact for an arbitrary number of propagating modes. The resulting large, non-selfadjoint system of linear equations with indefinite symmetric part is solved using the preconditioned conjugate gradient method applied to the normal equations. A new preconditioner is developed based on the multigrid method. This preconditioner is vectorizable and is extremely effective over a wide range of frequencies, provided the number of grid levels is reduced for large frequencies. A heuristic argument is given that indicates the superior convergence properties of this preconditioner.
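
    The conjugate-gradient-on-normal-equations approach can be sketched as follows (unpreconditioned CGNR for clarity; the paper's multigrid preconditioner is omitted):

      import numpy as np

      def cgnr(A, b, x0, tol=1e-8, maxit=500):
          # CG applied to the normal equations A^H A x = A^H b,
          # suitable for non-selfadjoint systems such as discretized Helmholtz.
          x = x0.copy()
          r = b - A @ x
          z = A.conj().T @ r
          p = z.copy()
          zz = np.vdot(z, z).real
          for _ in range(maxit):
              Ap = A @ p
              alpha = zz / np.vdot(Ap, Ap).real
              x += alpha * p
              r -= alpha * Ap
              z = A.conj().T @ r
              zz_new = np.vdot(z, z).real
              if np.sqrt(zz_new) < tol:
                  break
              p = z + (zz_new / zz) * p
              zz = zz_new
          return x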

  12. Nonequilibrium response of an electron-mediated charge density wave ordered material to a large dc electric field

    NASA Astrophysics Data System (ADS)

    Matveev, O. P.; Shvaika, A. M.; Devereaux, T. P.; Freericks, J. K.

    2016-01-01

    Using the Kadanoff-Baym-Keldysh formalism, we employ nonequilibrium dynamical mean-field theory to exactly solve for the nonlinear response of an electron-mediated charge-density-wave-ordered material. We examine both the dc current and the order parameter of the conduction electrons as the ordered system is driven by the electric field. Although the formalism we develop applies to all models, for concreteness, we examine the charge-density-wave phase of the Falicov-Kimball model, which displays a number of anomalous behaviors including the appearance of subgap density of states as the temperature increases. These subgap states should have a significant impact on transport properties, particularly the nonlinear response of the system to a large dc electric field.

  13. Large-scale hydropower system optimization using dynamic programming and object-oriented programming: the case of the Northeast China Power Grid.

    PubMed

    Li, Ji-Qing; Zhang, Yu-Shan; Ji, Chang-Ming; Wang, Ai-Jing; Lund, Jay R

    2013-01-01

    This paper examines long-term optimal operation using dynamic programming for a large hydropower system of 10 reservoirs in Northeast China. Besides considering flow and hydraulic head, the optimization explicitly includes time-varying electricity market prices to maximize benefit. Two techniques are used to reduce the 'curse of dimensionality' of dynamic programming with many reservoirs. Discrete differential dynamic programming (DDDP) reduces the search space and computer memory needed. Object-oriented programming (OOP) and the ability to dynamically allocate and release memory with the C++ language greatly reduce the cumulative memory burden of solving multi-dimensional dynamic programming models. The case study shows that the model can reduce the 'curse of dimensionality' and achieve satisfactory results.
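
    The corridor idea behind DDDP can be sketched as follows: each iteration performs dynamic programming over a narrow band of states around the current trial trajectory and re-centers on the result. The three-point corridor, the stage-benefit signature benefit(t, s, s_next), and the fixed initial state are illustrative assumptions, not the paper's implementation.

      import numpy as np

      def dddp(benefit, trial, delta, n_iter=20):
          traj = np.asarray(trial, dtype=float)
          offs = np.array([-delta, 0.0, delta])      # 3-state corridor
          T = len(traj)
          for _ in range(n_iter):
              V = np.zeros(3)
              policy = np.zeros((T - 1, 3), dtype=int)
              for t in range(T - 2, -1, -1):         # backward recursion
                  Vn = np.empty(3)
                  for a in range(3):
                      vals = [benefit(t, traj[t] + offs[a],
                                      traj[t + 1] + offs[b]) + V[b]
                              for b in range(3)]
                      policy[t, a] = int(np.argmax(vals))
                      Vn[a] = vals[policy[t, a]]
                  V = Vn
              a, new = 1, [traj[0]]                  # forward pass (fixed start)
              for t in range(T - 1):
                  a = policy[t, a]
                  new.append(traj[t + 1] + offs[a])
              traj = np.array(new)                   # re-center the corridor
          return traj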

  14. An algorithm for solving the perturbed gas dynamic equations

    NASA Technical Reports Server (NTRS)

    Davis, Sanford

    1993-01-01

    The present application of a compact, higher-order central-difference approximation to the linearized Euler equations illustrates the multimodal character of these equations by means of computations for acoustic, vortical, and entropy waves. Such dissipationless central-difference methods are shown to propagate waves exhibiting excellent phase and amplitude resolution on the basis of relatively large time-steps; they can be applied to wave problems governed by systems of first-order partial differential equations.
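
    A representative member of this family is the fourth-order compact (Padé) first-derivative approximation, which couples neighboring derivative values implicitly (a standard textbook form, not necessarily the exact stencil used in the paper):

      \frac{1}{4} f'_{i-1} + f'_{i} + \frac{1}{4} f'_{i+1} = \frac{3}{4h}\left(f_{i+1} - f_{i-1}\right)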

  15. Tetraneutron: Rigorous continuum calculation

    NASA Astrophysics Data System (ADS)

    Deltuva, A.

    2018-07-01

    The four-neutron system is studied using exact continuum equations for transition operators and solving them in the momentum-space framework. A resonant behavior is found for a strongly enhanced interaction but not at the physical strength, indicating the absence of an observable tetraneutron resonance, in contrast to a number of earlier works. As the transition operators acquire large values at low energies, it is conjectured that this behavior may explain peaks in many-body reactions even without a resonance.

  16. Review on solving the forward problem in EEG source analysis

    PubMed Central

    Hallez, Hans; Vanrumste, Bart; Grech, Roberta; Muscat, Joseph; De Clercq, Wim; Vergult, Anneleen; D'Asseler, Yves; Camilleri, Kenneth P; Fabri, Simon G; Van Huffel, Sabine; Lemahieu, Ignace

    2007-01-01

    Background: The aim of electroencephalogram (EEG) source localization is to find the brain areas responsible for EEG waves of interest. It consists of solving forward and inverse problems. The forward problem is solved by starting from a given electrical source and calculating the potentials at the electrodes. These evaluations are necessary to solve the inverse problem, which is defined as finding the brain sources responsible for the measured potentials at the EEG electrodes.

    Methods: While other reviews give an extensive summary of both the forward and inverse problems, this review article focuses on different aspects of solving the forward problem and is intended for newcomers in this research field.

    Results: It starts by focusing on the generators of the EEG: the post-synaptic potentials in the apical dendrites of pyramidal neurons. These cells generate an extracellular current which can be modeled by Poisson's differential equation with Neumann and Dirichlet boundary conditions. The compartments in which these currents flow can be anisotropic (e.g. skull and white matter). In a three-shell spherical head model an analytical expression exists to solve the forward problem. During the last two decades researchers have tried to solve Poisson's equation in a realistically shaped head model obtained from 3D medical images, which requires numerical methods. The following methods are compared with each other: the boundary element method (BEM), the finite element method (FEM) and the finite difference method (FDM). In the last two methods anisotropic conducting compartments can conveniently be introduced. The focus is then set on the use of reciprocity in EEG source localization; it is introduced to speed up the forward calculations, which are then performed for each electrode position rather than for each dipole position. Solving Poisson's equation using FEM and FDM corresponds to solving a large sparse linear system, and iterative methods are required. The following iterative methods are discussed: successive over-relaxation, the conjugate gradient method and the algebraic multigrid method.

    Conclusion: Solving the forward problem has been well documented in the past decades. In the past, simplified spherical head models were used, whereas nowadays a combination of imaging modalities is used to accurately describe the geometry of the head model. Efforts have been made to realistically describe the shape of the head model as well as the heterogeneity of the tissue types, and to realistically determine the conductivity. However, the determination and validation of in vivo conductivity values is still an important topic in this field. In addition, more studies are needed on the influence of all the parameters of the head model and of the numerical techniques on the solution of the forward problem. PMID:18053144
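
    In one common sign convention, the governing equation and scalp boundary condition referred to above are (with σ the possibly anisotropic conductivity and I_s the impressed source current density; conventions vary by author):

      \nabla \cdot (\sigma \nabla V) = I_{s} \quad \text{in } \Omega, \qquad
      \sigma\, \nabla V \cdot \mathbf{n} = 0 \quad \text{on } \partial\Omega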

  17. Analytical sizing methods for behind-the-meter battery storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Di; Kintner-Meyer, Michael; Yang, Tao

    In behind-the-meter applications, a battery storage system (BSS) is utilized to reduce a commercial or industrial customer’s payment for electricity use, including the energy charge and the demand charge. The potential value of a BSS in payment reduction, and the most economic size, can be determined by formulating and solving standard mathematical programming problems. In this method, users input system information such as load profiles, energy/demand charge rates, and battery characteristics to construct a standard programming problem that typically involves a large number of constraints and decision variables. Such a large-scale programming problem is then solved by optimization solvers to obtain numerical solutions. This method cannot directly link the obtained optimal battery sizes to input parameters and requires case-by-case analysis. In this paper, we present an objective quantitative analysis of the costs and benefits of customer-side energy storage, and thereby identify key factors that affect battery sizing. Based on the analysis, we then develop simple but effective guidelines that can be used to determine the most cost-effective battery size or to guide utility rate design for stimulating energy storage development. The proposed analytical sizing methods are innovative and offer engineering insights into how the optimal battery size varies with system characteristics. We illustrate the proposed methods using a practical building load profile and utility rate. The obtained results are compared with those obtained using mathematical programming based methods for validation.
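
    As a schematic of the standard programming formulation being contrasted with, the sizing problem can be written as follows (notation assumed: E the energy capacity, c_cap its annualized unit cost, p_t the battery discharge power, d_t the native load, pi^e_t the energy prices, and pi^d the demand charge rate), subject to state-of-charge dynamics and the power and energy limits implied by E:

      \min_{E,\, p_t} \; c_{\mathrm{cap}}\, E + \sum_{t} \pi^{e}_{t} \left(d_t - p_t\right) \Delta t + \pi^{d} \max_{t} \left(d_t - p_t\right)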

  18. A distributed finite-element modeling and control approach for large flexible structures

    NASA Technical Reports Server (NTRS)

    Young, K. D.

    1989-01-01

    An unconventional framework is described for the design of decentralized controllers for large flexible structures. In contrast to conventional control system design practice which begins with a model of the open loop plant, the controlled plant is assembled from controlled components in which the modeling phase and the control design phase are integrated at the component level. The developed framework is called controlled component synthesis (CCS) to reflect that it is motivated by the well developed Component Mode Synthesis (CMS) methods which were demonstrated to be effective for solving large complex structural analysis problems for almost three decades. The design philosophy behind CCS is also closely related to that of the subsystem decomposition approach in decentralized control.

  19. Strongly enhanced thermal transport in a lightly doped Mott insulator at low temperature.

    PubMed

    Zlatić, V; Freericks, J K

    2012-12-28

    We show how a lightly doped Mott insulator has hugely enhanced electronic thermal transport at low temperature. It displays universal behavior independent of the interaction strength when the carriers can be treated as nondegenerate fermions and a nonuniversal "crossover" region where the Lorenz number grows to large values, while still maintaining a large thermoelectric figure of merit. The electron dynamics are described by the Falicov-Kimball model which is solved for arbitrarily large on-site correlation with a dynamical mean-field theory algorithm on a Bethe lattice. We show how these results are generic for lightly doped Mott insulators as long as the renormalized Fermi liquid scale is pushed to very low temperature and the system is not magnetically ordered.

  20. Self-* properties through gossiping.

    PubMed

    Babaoglu, Ozalp; Jelasity, Márk

    2008-10-28

    As computer systems have become more complex, numerous competing approaches have been proposed for these systems to self-configure, self-manage, self-repair, etc. such that human intervention in their operation can be minimized. In ubiquitous systems, this has always been a central issue as well. In this paper, we overview techniques to implement self-* properties in large-scale, decentralized networks through bio-inspired techniques in general, and gossip-based algorithms in particular. We believe that gossip-based algorithms could be an important inspiration for solving problems in ubiquitous computing as well. As an example, we outline a novel approach to arrange large numbers of mobile agents (e.g. vehicles, rescue teams carrying mobile devices) into different formations in a totally decentralized manner. The approach is inspired by the biological mechanism of cell sorting via differential adhesion, as well as by our earlier work in self-organizing peer-to-peer overlay networks.

  1. Construction, testing and development of large wind energy facilities

    NASA Technical Reports Server (NTRS)

    Windheim, R. (Editor); Cuntze, R. (Editor)

    1982-01-01

    Building large rotor blades and control of oscillations in large facilities are discussed. It is concluded that the technical problems in the design of large rotor blades and control of oscillations can be solved.

  2. New displacement-based methods for optimal truss topology design

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin P.; Ben-Tal, Aharon; Haftka, Raphael T.

    1991-01-01

    Two alternate methods for maximum stiffness truss topology design are presented. The ground structure approach is used, and the problem is formulated in terms of displacements and bar areas. This large, nonconvex optimization problem can be solved by a simultaneous analysis and design approach. Alternatively, an equivalent, unconstrained, and convex problem in the displacements only can be formulated, and this problem can be solved by a nonsmooth, steepest descent algorithm. In both methods, the explicit solving of the equilibrium equations and the assembly of the global stiffness matrix are circumvented. A large number of examples have been studied, showing the attractive features of topology design as well as exposing interesting features of optimal topologies.

  3. The research on new type fast burning systems for biogas engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, L.; Zheng, B.; Chen, Z.

    1996-12-31

    In order to meet the demands of energy supply and environmental protection, large and medium-sized biogas projects are being developed quickly, and biogas engines are also beginning to be developed in China. However, the problems of afterburning and the short lifespan of spark-ignited biogas engines have not been solved. Based on the fast-burning theory for gas engines, the authors developed four kinds of new combustion systems that promote fast burning of the gas mixture and achieved good results. This paper discusses in detail the structural features and experimental results of one combustion system: the fan-shaped combustion chamber.

  4. The semantic system is involved in mathematical problem solving.

    PubMed

    Zhou, Xinlin; Li, Mengyi; Li, Leinian; Zhang, Yiyun; Cui, Jiaxin; Liu, Jie; Chen, Chuansheng

    2018-02-01

    Numerous studies have shown that the brain regions around bilateral intraparietal cortex are critical for number processing and arithmetical computation. However, the neural circuits for more advanced mathematics such as mathematical problem solving (with little routine arithmetical computation) remain unclear. Using functional magnetic resonance imaging (fMRI), this study (N = 24 undergraduate students) compared neural bases of mathematical problem solving (i.e., number series completion, mathematical word problem solving, and geometric problem solving) and arithmetical computation. Direct subject- and item-wise comparisons revealed that mathematical problem solving typically had greater activation than arithmetical computation in all 7 regions of the semantic system (which was based on a meta-analysis of 120 functional neuroimaging studies on semantic processing). Arithmetical computation typically had greater activation in the supplementary motor area and left precentral gyrus. The results suggest that the semantic system in the brain supports mathematical problem solving. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. A novel heuristic algorithm for capacitated vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Kır, Sena; Yazgan, Harun Reşit; Tüncel, Emre

    2017-09-01

    The vehicle routing problem with capacity constraints was considered in this paper. It is quite difficult to achieve an optimal solution with traditional optimization methods because of the high computational complexity for large-scale problems. Consequently, new heuristic or metaheuristic approaches have been developed to solve this problem. In this paper, we constructed a new heuristic algorithm based on tabu search and adaptive large neighborhood search (ALNS), with several specifically designed operators and features, to solve the capacitated vehicle routing problem (CVRP). The effectiveness of the proposed algorithm was illustrated on benchmark problems. The algorithm performs better on large-scale instances and gains an advantage in terms of CPU time. In addition, we solved a real-life CVRP using the proposed algorithm and obtained encouraging results in comparison with the company's current practice.
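
    The general ALNS scheme the record builds on is compact enough to sketch. The skeleton below is a generic sketch, not the paper's algorithm: operator signatures, reward values, and the smoothing factor are all illustrative. It shows the core idea of selecting destroy/repair operators with adaptively weighted probabilities.

```python
import random

def alns(initial, destroy_ops, repair_ops, cost, iters=1000, seed=0):
    """Adaptive large neighborhood search skeleton: pick a destroy and a
    repair operator with probability proportional to weights that adapt
    to each operator's recent success."""
    rng = random.Random(seed)
    best = cur = initial
    wd = [1.0] * len(destroy_ops)   # destroy-operator weights
    wr = [1.0] * len(repair_ops)    # repair-operator weights
    for _ in range(iters):
        i = rng.choices(range(len(destroy_ops)), weights=wd)[0]
        j = rng.choices(range(len(repair_ops)), weights=wr)[0]
        cand = repair_ops[j](destroy_ops[i](cur, rng), rng)
        reward = 0.0
        if cost(cand) < cost(best):
            best, cur, reward = cand, cand, 3.0    # new global best
        elif cost(cand) < cost(cur):
            cur, reward = cand, 1.0                # improved current solution
        # exponential smoothing keeps weights positive and adaptive
        wd[i] = 0.9 * wd[i] + 0.1 * reward
        wr[j] = 0.9 * wr[j] + 0.1 * reward
    return best
```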

  6. Design and realization of flash translation layer in tiny embedded system

    NASA Astrophysics Data System (ADS)

    Ren, Xiaoping; Sui, Chaoya; Luo, Zhenghua; Cao, Wenji

    2018-05-01

    We design a NAND Flash storage solution for tiny embedded devices, based on an in-depth study of the characteristics of the NAND Flash memories widely used in embedded devices, in order to follow the trend toward intelligent interconnection and to solve the problem of storing large data volumes in tiny embedded systems. The hierarchical structure and the purpose of each function of the system are introduced. The design and implementation of address mapping, error correction, bad block management, wear leveling, garbage collection and other algorithms in the flash translation layer are described in detail. The NAND Flash driver and its management are implemented on an STM32 microcontroller, thereby verifying the effectiveness and feasibility of the design.
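
    To make the address-mapping idea concrete, here is a minimal page-level mapping sketch (hypothetical, not the paper's implementation): logical pages map to physical pages, updates are written out-of-place, and stale copies are marked invalid for later garbage collection.

```python
class PageMapFTL:
    """Minimal page-level flash translation layer sketch."""
    def __init__(self, num_pages):
        self.l2p = {}          # logical page number -> physical page number
        self.invalid = set()   # physical pages awaiting erase (garbage collection)
        self.next_free = 0
        self.num_pages = num_pages
        self.flash = {}        # physical page number -> data

    def write(self, lpn, data):
        if self.next_free >= self.num_pages:
            raise RuntimeError("flash full: garbage collection needed")
        if lpn in self.l2p:    # out-of-place update: invalidate the old copy
            self.invalid.add(self.l2p[lpn])
        self.flash[self.next_free] = data
        self.l2p[lpn] = self.next_free
        self.next_free += 1

    def read(self, lpn):
        return self.flash[self.l2p[lpn]]

ftl = PageMapFTL(num_pages=1024)
ftl.write(7, b"hello"); ftl.write(7, b"world")  # second write invalidates the first
print(ftl.read(7))                              # b"world"
```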

  7. Lunar-derived titanium alloys for hydrogen storage

    NASA Technical Reports Server (NTRS)

    Love, S.; Hertzberg, A.; Woodcock, G.

    1992-01-01

    Hydrogen gas, which plays an important role in many projected lunar power systems and industrial processes, can be stored in metallic titanium and in certain titanium alloys as an interstitial hydride compound. Storing and retrieving hydrogen with titanium-iron alloy requires substantially less energy investment than storage by liquefaction. Metal hydride storage systems can be designed to operate at a wide range of temperatures and pressures. A few such systems have been developed for terrestrial applications. A drawback of metal hydride storage for lunar applications is the system's large mass per mole of hydrogen stored, which rules out transporting it from earth. The transportation problem can be solved by using native lunar materials, which are rich in titanium and iron.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A

    Interactive data visualization leverages human visual perception and cognition to improve the accuracy and effectiveness of data analysis. When combined with automated data analytics, data visualization systems orchestrate the strengths of humans with the computational power of machines to solve problems neither approach can manage in isolation. In the intelligent transportation system domain, such systems are necessary to support decision making in large and complex data streams. In this chapter, we provide an introduction to several key topics related to the design of data visualization systems. In addition to an overview of key techniques and strategies, we will describe practical design principles. The chapter is concluded with a detailed case study involving the design of a multivariate visualization tool.

  9. Artificial Intelligence In Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Vogel, Alison Andrews

    1991-01-01

    Paper compares four first-generation artificial-intelligence (AI) software systems for computational fluid dynamics. Includes: Expert Cooling Fan Design System (EXFAN), PAN AIR Knowledge System (PAKS), grid-adaptation program MITOSIS, and Expert Zonal Grid Generation (EZGrid). Focuses on knowledge-based ("expert") software systems. Analyzes intended tasks, kinds of knowledge possessed, magnitude of effort required to codify knowledge, how quickly constructed, performances, and return on investment. On basis of comparison, concludes AI most successful when applied to well-formulated problems solved by classifying or selecting preenumerated solutions. In contrast, application of AI to poorly understood or poorly formulated problems generally results in long development time and large investment of effort, with no guarantee of success.

  10. Gauging the Gaps in Student Problem-Solving Skills: Assessment of Individual and Group Use of Problem-Solving Strategies Using Online Discussions

    ERIC Educational Resources Information Center

    Anderson, William L.; Mitchell, Steven M.; Osgood, Marcy P.

    2008-01-01

    For the past 3 yr, faculty at the University of New Mexico, Department of Biochemistry and Molecular Biology have been using interactive online Problem-Based Learning (PBL) case discussions in our large-enrollment classes. We have developed an illustrative tracking method to monitor student use of problem-solving strategies to provide targeted…

  11. Neural Network Solves "Traveling-Salesman" Problem

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P.; Moopenn, Alexander W.

    1990-01-01

    Experimental electronic neural network solves "traveling-salesman" problem. Plans round trip of minimum distance among N cities, visiting every city once and only once (without backtracking). This problem is paradigm of many problems of global optimization (e.g., routing or allocation of resources) occurring in industry, business, and government. Applied to large numbers of cities (or resources), circuits of this kind expected to solve problem faster and more cheaply.

  12. Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin; Cheng, Runwei

    Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network. For example, cost and flow measures are both important in networks. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem, which is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve the problem can increase exponentially with the problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach using a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto optimal solutions that give the possible maximum flow with minimum cost. The paper also incorporates the Adaptive Weight Approach (AWA), which utilizes useful information from the current population to readjust weights and obtain search pressure toward a positive ideal point. Computer simulations on several difficult-to-solve network design problems show the effectiveness of the proposed method.
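
    A common way to realize such a priority-based chromosome (a generic sketch of the encoding family, not necessarily the authors' exact decoder) is to assign one priority value per node and decode a path greedily:

```python
import numpy as np

def decode_path(priorities, adj, source, sink):
    """Decode a priority-based chromosome into a path: starting from the
    source, repeatedly move to the unvisited neighbor with highest priority."""
    path, node, visited = [source], source, {source}
    while node != sink:
        candidates = [v for v in adj[node] if v not in visited]
        if not candidates:
            return None                       # dead end: infeasible chromosome
        node = max(candidates, key=lambda v: priorities[v])
        path.append(node)
        visited.add(node)
    return path

# usage on a toy digraph (hypothetical)
adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
chrom = np.random.permutation(4)              # one priority value per node
print(decode_path(chrom, adj, 0, 3))
```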

  13. SOME NEW FINITE DIFFERENCE METHODS FOR HELMHOLTZ EQUATIONS ON IRREGULAR DOMAINS OR WITH INTERFACES

    PubMed Central

    Wan, Xiaohai; Li, Zhilin

    2012-01-01

    Solving a Helmholtz equation Δu + λu = f efficiently is a challenge for many applications. For example, the core part of many efficient solvers for the incompressible Navier-Stokes equations is to solve one or several Helmholtz equations. In this paper, two new finite difference methods are proposed for solving Helmholtz equations on irregular domains, or with interfaces. For Helmholtz equations on irregular domains, the accuracy of the numerical solution obtained using the existing augmented immersed interface method (AIIM) may deteriorate when the magnitude of λ is large. In our new method, we use a level set function to extend the source term and the PDE to a larger domain before we apply the AIIM. For Helmholtz equations with interfaces, a new maximum principle preserving finite difference method is developed. The new method still uses the standard five-point stencil with modifications of the finite difference scheme at irregular grid points. The resulting coefficient matrix of the linear system of finite difference equations satisfies the sign property of the discrete maximum principle and can be solved efficiently using a multigrid solver. The finite difference method is also extended to handle temporal discretized equations where the solution coefficient λ is inversely proportional to the mesh size. PMID:22701346
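
    As a small illustration of the discretization the abstract describes, the sketch below assembles the standard five-point stencil for Δu + λu = f on the unit square with homogeneous Dirichlet boundaries and solves the resulting sparse system; a direct sparse solve stands in for the multigrid solver the paper uses, and the manufactured solution is a hypothetical test case.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_helmholtz(n, lam, f):
    """Five-point finite differences for Du + lam*u = f on the unit square,
    zero Dirichlet boundary, n x n interior grid points."""
    h = 1.0 / (n + 1)
    I = sp.identity(n)
    T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
    L = (sp.kron(I, T) + sp.kron(T, I)) / h**2    # discrete Laplacian
    A = L + lam * sp.identity(n * n)
    return spla.spsolve(A.tocsc(), f.ravel()).reshape(n, n)

# manufactured-solution check: Laplacian of sin(pi x) sin(pi y) is -2*pi^2*u
n, lam = 63, 10.0
x = np.linspace(0, 1, n + 2)[1:-1]
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = (lam - 2 * np.pi**2) * u_exact
u = solve_helmholtz(n, lam, f)
print(np.max(np.abs(u - u_exact)))    # O(h^2) discretization error
```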

  14. SOME NEW FINITE DIFFERENCE METHODS FOR HELMHOLTZ EQUATIONS ON IRREGULAR DOMAINS OR WITH INTERFACES.

    PubMed

    Wan, Xiaohai; Li, Zhilin

    2012-06-01

    Solving a Helmholtz equation Δu + λu = f efficiently is a challenge for many applications. For example, the core part of many efficient solvers for the incompressible Navier-Stokes equations is to solve one or several Helmholtz equations. In this paper, two new finite difference methods are proposed for solving Helmholtz equations on irregular domains, or with interfaces. For Helmholtz equations on irregular domains, the accuracy of the numerical solution obtained using the existing augmented immersed interface method (AIIM) may deteriorate when the magnitude of λ is large. In our new method, we use a level set function to extend the source term and the PDE to a larger domain before we apply the AIIM. For Helmholtz equations with interfaces, a new maximum principle preserving finite difference method is developed. The new method still uses the standard five-point stencil with modifications of the finite difference scheme at irregular grid points. The resulting coefficient matrix of the linear system of finite difference equations satisfies the sign property of the discrete maximum principle and can be solved efficiently using a multigrid solver. The finite difference method is also extended to handle temporal discretized equations where the solution coefficient λ is inversely proportional to the mesh size.

  15. An optimization model for the US Air-Traffic System

    NASA Technical Reports Server (NTRS)

    Mulvey, J. M.

    1986-01-01

    A systematic approach for monitoring U.S. air traffic was developed in the context of system-wide planning and control. Towards this end, a network optimization model with nonlinear objectives was chosen as the central element in the planning/control system. The network representation was selected because: (1) it provides a comprehensive structure for depicting essential aspects of the air traffic system, (2) it can be solved efficiently for large scale problems, and (3) the design can be easily communicated to non-technical users through computer graphics. Briefly, the network planning models consider the flow of traffic through a graph as the basic structure. Nodes depict locations and time periods for either individual planes or for aggregated groups of airplanes. Arcs define variables as actual airplanes flying through space or as delays across time periods. As such, a special case of the network can be used to model the so-called flow control problem. Due to the large number of interacting variables and the difficulty in subdividing the problem into relatively independent subproblems, an integrated model was designed which depicts the entire high-level (above 29,000 feet) jet route system for the 48 contiguous states in the U.S. As a first step in demonstrating the concept's feasibility, a nonlinear risk/cost model was developed for the Indianapolis Airspace. The nonlinear network program NLPNETG was employed in solving the resulting test cases. This optimization program uses the Truncated-Newton method (quadratic approximation) for determining the search direction at each iteration in the nonlinear algorithm. It was shown that aircraft could be re-routed in an optimal fashion whenever traffic congestion increased beyond an acceptable level, as measured by the nonlinear risk function.
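
    Truncated-Newton methods of this kind are available off the shelf. The toy problem below (a hypothetical smooth risk/cost surrogate on three flow variables, not the paper's NLPNETG model) shows the pattern with SciPy's TNC method.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical smooth risk/cost surrogate on three arc-flow variables.
Q = np.diag([1.0, 4.0, 9.0])

def cost(x):
    # quadratic congestion cost plus a smooth convex "risk" term log(1 + e^x)
    return 0.5 * x @ Q @ x + np.sum(np.logaddexp(0.0, x))

res = minimize(cost, x0=np.ones(3), method="TNC")
print(res.x, res.fun)
```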

  16. Model Order Reduction Algorithm for Estimating the Absorption Spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Beeumen, Roel; Williams-Young, David B.; Kasper, Joseph M.

    The ab initio description of the spectral interior of the absorption spectrum poses both a theoretical and computational challenge for modern electronic structure theory. Due to the often spectrally dense character of this domain in the quantum propagator’s eigenspectrum for medium-to-large sized systems, traditional approaches based on the partial diagonalization of the propagator often encounter oscillatory and stagnating convergence. Electronic structure methods which solve the molecular response problem through the solution of spectrally shifted linear systems, such as the complex polarization propagator, offer an alternative approach which is agnostic to the underlying spectral density or domain location. This generality comes at a seemingly high computational cost associated with solving a large linear system for each spectral shift in some discretization of the spectral domain of interest. In this work, we present a novel, adaptive solution to this high computational overhead based on model order reduction techniques via interpolation. Model order reduction reduces the computational complexity of mathematical models and is ubiquitous in the simulation of dynamical systems and control theory. The efficiency and effectiveness of the proposed algorithm in the ab initio prediction of X-ray absorption spectra is demonstrated using a test set of challenging water clusters which are spectrally dense in the neighborhood of the oxygen K-edge. On the basis of a single, user defined tolerance we automatically determine the order of the reduced models and approximate the absorption spectrum up to the given tolerance. We also illustrate that, for the systems studied, the automatically determined model order increases logarithmically with the problem dimension, compared to a linear increase of the number of eigenvalues within the energy window. Furthermore, we observed that the computational cost of the proposed algorithm only scales quadratically with respect to the problem dimension.

  17. Systems metabolic engineering of microorganisms to achieve large-scale production of flavonoid scaffolds.

    PubMed

    Wu, Junjun; Du, Guocheng; Zhou, Jingwen; Chen, Jian

    2014-10-20

    Flavonoids possess pharmaceutical potential due to their health-promoting activities. The complex structures of these products make extraction from plants difficult, and chemical synthesis is limited because of the use of many toxic solvents. Microbial production offers an alternate way to produce these compounds on an industrial scale in a more economical and environment-friendly manner. However, at present microbial production has been achieved only on a laboratory scale and improvements and scale-up of these processes remain challenging. Naringenin and pinocembrin, which are flavonoid scaffolds and precursors for most of the flavonoids, are the model molecules that are key to solving the current issues restricting industrial production of these chemicals. The emergence of systems metabolic engineering, which combines systems biology with synthetic biology and evolutionary engineering at the systems level, offers new perspectives on strain and process optimization. In this review, current challenges in large-scale fermentation processes involving flavonoid scaffolds and the strategies and tools of systems metabolic engineering used to overcome these challenges are summarized. This will offer insights into overcoming the limitations and challenges of large-scale microbial production of these important pharmaceutical compounds. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Object recognition based on Google's reverse image search and image similarity

    NASA Astrophysics Data System (ADS)

    Horváth, András.

    2015-12-01

    Image classification is one of the most challenging tasks in computer vision, and a general multiclass classifier could solve many different tasks in image processing. Classification is usually done by shallow learning for predefined objects, which is a difficult task and very different from human vision, which is based on continuous learning of object classes: humans require years to learn a large taxonomy of objects that are neither disjoint nor independent. In this paper I present a system based on Google's image similarity algorithm and Google's image database, which can classify a large set of different objects in a human-like manner, identifying related classes and taxonomies.

  19. A microscopic description of black hole evaporation via holography

    DOE PAGES

    Berkowitz, Evan; Hanada, Masanori; Maltz, Jonathan

    2016-07-19

    Here, we propose a description of how a large, cold black hole (black zero-brane) in type IIA superstring theory evaporates into freely propagating D0-branes, by solving the dual gauge theory quantitatively. The energy spectrum of emitted D0-branes is parametrically close to thermal when the black hole is large. The black hole, while initially cold, gradually becomes an extremely hot and stringy object as it evaporates. As it emits D0-branes, its emission rate speeds up and it evaporates completely without leaving any remnant. Hence this system provides us with a concrete holographic description of black hole evaporation without information loss.

  20. A microscopic description of black hole evaporation via holography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berkowitz, Evan; Hanada, Masanori; Maltz, Jonathan

    Here, we propose a description of how a large, cold black hole (black zero-brane) in type IIA superstring theory evaporates into freely propagating D0-branes, by solving the dual gauge theory quantitatively. The energy spectrum of emitted D0-branes is parametrically close to thermal when the black hole is large. The black hole, while initially cold, gradually becomes an extremely hot and stringy object as it evaporates. As it emits D0-branes, its emission rate speeds up and it evaporates completely without leaving any remnant. Hence this system provides us with a concrete holographic description of black hole evaporation without information loss.

  1. Parallelization of elliptic solver for solving 1D Boussinesq model

    NASA Astrophysics Data System (ADS)

    Tarwidi, D.; Adytia, D.

    2018-03-01

    In this paper, a parallel implementation of an elliptic solver for the 1D Boussinesq model is presented. The numerical solution of the Boussinesq model is obtained by implementing a staggered grid scheme for the continuity, momentum, and elliptic equations of the model. The tridiagonal system emerging from the numerical scheme of the elliptic equation is solved by the cyclic reduction algorithm. The parallel implementation of cyclic reduction is executed on multicore processors with shared memory architectures using OpenMP. To measure the performance of the parallel program, the number of grid points is varied from 2^8 to 2^14. Two numerical test cases, the propagation of a solitary wave and of a standing wave, are used to evaluate the parallel program. The numerical results are verified against the analytical solutions for solitary and standing waves. The best speedup for the solitary and standing wave test cases is about 2.07 with 2^14 grid points and 1.86 with 2^13 grid points, respectively, both executed using 8 threads. Moreover, the best efficiency of the parallel program is 76.2% and 73.5% for the solitary and standing wave test cases, respectively.
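
    Cyclic reduction halves the number of unknowns at each level, and the eliminations within a level are independent, which is what the OpenMP threads can share. A serial sketch is below, assuming n = 2^k - 1 unknowns and a well-conditioned (e.g. diagonally dominant) tridiagonal system; this is the textbook algorithm, not the paper's code.

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system with n = 2^k - 1 unknowns.
    a: sub-diagonal (a[0] must be 0), b: diagonal, c: super-diagonal
    (c[-1] must be 0), d: right-hand side. Assumes b stays nonzero
    (e.g. diagonally dominant system)."""
    a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
    n = len(b)
    k = int(np.log2(n + 1))
    assert n == 2**k - 1, "cyclic reduction sketch requires n = 2^k - 1"
    stride = 1
    for _ in range(k - 1):                    # forward reduction
        for i in range(2 * stride - 1, n, 2 * stride):
            alpha = -a[i] / b[i - stride]     # eliminate x[i-stride]
            beta = -c[i] / b[i + stride]      # eliminate x[i+stride]
            b[i] += alpha * c[i - stride] + beta * a[i + stride]
            d[i] += alpha * d[i - stride] + beta * d[i + stride]
            a[i] = alpha * a[i - stride]
            c[i] = beta * c[i + stride]
        stride *= 2
    x = np.zeros(n)
    x[n // 2] = d[n // 2] / b[n // 2]         # single remaining equation
    for _ in range(k - 1):                    # back substitution
        stride //= 2
        for i in range(stride - 1, n, 2 * stride):
            xm = x[i - stride] if i - stride >= 0 else 0.0
            xp = x[i + stride] if i + stride < n else 0.0
            x[i] = (d[i] - a[i] * xm - c[i] * xp) / b[i]
    return x

# quick check against a dense solve
n = 2**10 - 1
rng = np.random.default_rng(0)
b_ = 4 + rng.random(n); a_ = -np.ones(n); c_ = -np.ones(n); d_ = rng.random(n)
a_[0] = c_[-1] = 0.0
T = np.diag(b_) + np.diag(a_[1:], -1) + np.diag(c_[:-1], 1)
print(np.allclose(cyclic_reduction(a_, b_, c_, d_), np.linalg.solve(T, d_)))
```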

  2. The Natural-CCD Algorithm, a Novel Method to Solve the Inverse Kinematics of Hyper-redundant and Soft Robots.

    PubMed

    Martín, Andrés; Barrientos, Antonio; Del Cerro, Jaime

    2018-03-22

    This article presents a new method to solve the inverse kinematics problem of hyper-redundant and soft manipulators. From an engineering perspective, this kind of robot is an underdetermined system; it therefore admits an infinite number of solutions to the inverse kinematics problem, and choosing the best one can be a great challenge. A new algorithm based on cyclic coordinate descent (CCD), named natural-CCD, is proposed to solve this issue. It takes its name from the very harmonious robot movements and trajectories it generates, which also appear in nature in forms such as the golden spiral. In addition, it has been applied to perform continuous trajectories, to develop whole-body movements, to analyze motion planning in complex environments, and to study fault tolerance, for both prismatic and rotational joints. The proposed algorithm is very simple, precise, and computationally efficient. It works for robots in either two or three spatial dimensions and handles a large number of degrees of freedom. Because of this, it aims to break down barriers between discrete hyper-redundant and continuum soft robots.
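
    For reference, classic CCD, the baseline that the natural-CCD variant builds on, sweeps the joints from tip to base and rotates each one to point the end effector at the target. This sketch is a generic planar version, not the authors' algorithm.

```python
import numpy as np

def joint_positions(lengths, theta):
    # forward kinematics for a planar serial arm with relative joint angles
    angles = np.cumsum(theta)
    pts = [np.zeros(2)]
    for L, a in zip(lengths, angles):
        pts.append(pts[-1] + L * np.array([np.cos(a), np.sin(a)]))
    return np.array(pts)

def ccd_ik(lengths, target, iters=100, tol=1e-4):
    """Cyclic coordinate descent: rotate each joint, tip to base, so the
    (joint -> end effector) ray points toward (joint -> target)."""
    theta = np.zeros(len(lengths))
    for _ in range(iters):
        for j in reversed(range(len(lengths))):
            pts = joint_positions(lengths, theta)
            v1 = pts[-1] - pts[j]             # joint -> end effector
            v2 = target - pts[j]              # joint -> target
            theta[j] += np.arctan2(v2[1], v2[0]) - np.arctan2(v1[1], v1[0])
        if np.linalg.norm(joint_positions(lengths, theta)[-1] - target) < tol:
            break
    return theta

lengths = [1.0] * 10                          # 10-link arm, reachable target
theta = ccd_ik(lengths, np.array([4.0, 3.0]))
print(joint_positions(lengths, theta)[-1])    # approximately [4, 3]
```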

  3. Embedding Game-Based Problem-Solving Phase into Problem-Posing System for Mathematics Learning

    ERIC Educational Resources Information Center

    Chang, Kuo-En; Wu, Lin-Jung; Weng, Sheng-En; Sung, Yao-Ting

    2012-01-01

    A problem-posing system is developed with four phases (posing a problem, planning, solving the problem, and looking back), in which the problem-solving phase is implemented through game scenarios. The system supports elementary students in the process of problem-posing, allowing them to fully engage in mathematical activities. In total, 92 fifth…

  4. On Solving Systems of Equations by Successive Reduction Using 2×2 Matrices

    ERIC Educational Resources Information Center

    Carley, Holly

    2014-01-01

    Usually a student learns to solve a system of linear equations in two ways: "substitution" and "elimination." While the two methods will of course lead to the same answer they are considered different because the thinking process is different. In this paper the author solves a system in these two ways to demonstrate the…

  5. Study of multi-channel optical system based on the compound eye

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Fu, Yuegang; Liu, Zhiying; Dong, Zhengchao

    2014-09-01

    As an important part of machine vision, compound eye optical systems have the characteristics of high resolution and large FOV. By applying compound eye optical systems to target detection and recognition, the contradiction between large FOV and high resolution in traditional single-aperture optical systems can be solved effectively, and the parallel processing ability of the optical systems can be fully exploited. In this paper, the imaging features of compound eye optical systems are analyzed. After discussing the relationship between the FOV of each subsystem and the overlap of the FOV of the whole system, a method to define the FOV of the subsystems is presented, and a compound eye optical system is designed, based on a large FOV synthesized from multiple channels. The compound eye optical system consists of a central optical system and an array of subsystems, in which the array subsystems are used to capture the target, while a high resolution image of the target is obtained by the central optical system. With the advantages of small volume, light weight and rapid response speed, the optical system can detect objects at distances up to 3 km within a FOV of 60° without any scanning device. Objects in the central field (2w = 5.1°) can be imaged with high resolution so that they can be recognized.

  6. CrasyDSE: A framework for solving Dyson–Schwinger equations☆

    PubMed Central

    Huber, Markus Q.; Mitter, Mario

    2012-01-01

    Dyson–Schwinger equations are important tools for non-perturbative analyses of quantum field theories. For example, they are very useful for investigations in quantum chromodynamics and related theories. However, sometimes progress is impeded by the complexity of the equations. Thus automating parts of the calculations will certainly be helpful in future investigations. In this article we present a framework for such an automation based on a C++ code that can deal with a large number of Green functions. Since also the creation of the expressions for the integrals of the Dyson–Schwinger equations needs to be automated, we defer this task to a Mathematica notebook. We illustrate the complete workflow with an example from Yang–Mills theory coupled to a fundamental scalar field that has been investigated recently. As a second example we calculate the propagators of pure Yang–Mills theory. Our code can serve as a basis for many further investigations where the equations are too complicated to tackle by hand. It also can easily be combined with DoFun, a program for the derivation of Dyson–Schwinger equations. Program summary Program title: CrasyDSE Catalogue identifier: AEMY_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMY_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 49030 No. of bytes in distributed program, including test data, etc.: 303958 Distribution format: tar.gz Programming language: Mathematica 8 and higher, C++. Computer: All on which Mathematica and C++ are available. Operating system: All on which Mathematica and C++ are available (Windows, Unix, Mac OS). Classification: 11.1, 11.4, 11.5, 11.6. Nature of problem: Solve (large) systems of Dyson–Schwinger equations numerically. Solution method: Create C++ functions in Mathematica to be used for the numeric code in C++. This code uses structures to handle large numbers of Green functions. Unusual features: Provides a tool to convert Mathematica expressions into C++ expressions including conversion of function names. Running time: Depending on the complexity of the investigated system solving the equations numerically can take seconds on a desktop PC to hours on a cluster. PMID:25540463

  7. CrasyDSE: A framework for solving Dyson-Schwinger equations.

    PubMed

    Huber, Markus Q; Mitter, Mario

    2012-11-01

    Dyson-Schwinger equations are important tools for non-perturbative analyses of quantum field theories. For example, they are very useful for investigations in quantum chromodynamics and related theories. However, sometimes progress is impeded by the complexity of the equations. Thus automating parts of the calculations will certainly be helpful in future investigations. In this article we present a framework for such an automation based on a C++ code that can deal with a large number of Green functions. Since also the creation of the expressions for the integrals of the Dyson-Schwinger equations needs to be automated, we defer this task to a Mathematica notebook. We illustrate the complete workflow with an example from Yang-Mills theory coupled to a fundamental scalar field that has been investigated recently. As a second example we calculate the propagators of pure Yang-Mills theory. Our code can serve as a basis for many further investigations where the equations are too complicated to tackle by hand. It also can easily be combined with DoFun, a program for the derivation of Dyson-Schwinger equations. Program title: CrasyDSE Catalogue identifier: AEMY_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMY_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 49030 No. of bytes in distributed program, including test data, etc.: 303958 Distribution format: tar.gz Programming language: Mathematica 8 and higher, C++. Computer: All on which Mathematica and C++ are available. Operating system: All on which Mathematica and C++ are available (Windows, Unix, Mac OS). Classification: 11.1, 11.4, 11.5, 11.6. Nature of problem: Solve (large) systems of Dyson-Schwinger equations numerically. Solution method: Create C++ functions in Mathematica to be used for the numeric code in C++. This code uses structures to handle large numbers of Green functions. Unusual features: Provides a tool to convert Mathematica expressions into C++ expressions including conversion of function names. Running time: Depending on the complexity of the investigated system solving the equations numerically can take seconds on a desktop PC to hours on a cluster.

  8. CrasyDSE: A framework for solving Dyson-Schwinger equations

    NASA Astrophysics Data System (ADS)

    Huber, Markus Q.; Mitter, Mario

    2012-11-01

    Dyson-Schwinger equations are important tools for non-perturbative analyses of quantum field theories. For example, they are very useful for investigations in quantum chromodynamics and related theories. However, sometimes progress is impeded by the complexity of the equations. Thus automating parts of the calculations will certainly be helpful in future investigations. In this article we present a framework for such an automation based on a C++ code that can deal with a large number of Green functions. Since also the creation of the expressions for the integrals of the Dyson-Schwinger equations needs to be automated, we defer this task to a Mathematica notebook. We illustrate the complete workflow with an example from Yang-Mills theory coupled to a fundamental scalar field that has been investigated recently. As a second example we calculate the propagators of pure Yang-Mills theory. Our code can serve as a basis for many further investigations where the equations are too complicated to tackle by hand. It also can easily be combined with DoFun, a program for the derivation of Dyson-Schwinger equations.

  9. Performance verification of adaptive optics for satellite-to-ground coherent optical communications at large zenith angle.

    PubMed

    Chen, Mo; Liu, Chao; Rui, Daoman; Xian, Hao

    2018-02-19

    Although there is an urgent demand, it remains a tremendous challenge to apply coherent optical communication technology to satellite-to-ground data transmission, especially at large zenith angles, because of the influence of atmospheric turbulence. Adaptive optics (AO) is a promising scheme to solve the problem. In this paper, we integrate AO into coherent laser communications and study the mixing efficiency and bit-error-rate (BER) performance at different zenith angles. The analytical results show that increasing the zenith angle severely degrades the performance of coherent detection and raises the BER above 10^-3, which is unacceptable. Simulations of coherent detection with AO compensation indicate that higher mixing efficiency and lower BER can be achieved by a coherent receiver with high-order AO compensation. An experiment correcting the atmospheric turbulence wavefront distortion at large zenith angles was carried out using a 249-element AO system. The result demonstrates that the AO system significantly improves the satellite-to-ground coherent optical communication system at large zenith angles. It also indicates that the 249-element AO system can only meet the needs of coherent communication systems at zenith angles smaller than 65° for the 1.8 m telescope under weak and moderate turbulence.

  10. A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems.

    PubMed

    Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping

    2013-01-01

    Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over the convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a big challenge. A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of that of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule, which allows an appropriate step size to be found quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
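
    The outer loop of GIST is compact enough to sketch. The version below is illustrative only: it uses the l1 soft-thresholding prox for brevity (GIST itself targets non-convex penalties whose proximal maps are also closed-form), and the constants are hypothetical. It shows the BB-initialized step size and a monotone line search.

```python
import numpy as np

def soft_threshold(v, tau):
    # closed-form proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def gist(A, b, lam, iters=200, sigma=1e-4):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with a GIST-style loop."""
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)
    t = 1.0
    F = lambda z: 0.5 * np.sum((A @ z - b) ** 2) + lam * np.abs(z).sum()
    for _ in range(iters):
        while True:                            # monotone line search on t
            z = soft_threshold(x - g / t, lam / t)
            if F(z) <= F(x) - sigma * t / 2 * np.sum((z - x) ** 2):
                break                          # sufficient decrease accepted
            t *= 2.0
        x_old, g_old = x, g
        x = z
        g = A.T @ (A @ x - b)
        s, y = x - x_old, g - g_old            # Barzilai-Borwein rule gives the
        t = np.clip(abs(s @ y) / max(s @ s, 1e-12), 1e-3, 1e3)  # next initial step
    return x
```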

  11. Organization of the secure distributed computing based on multi-agent system

    NASA Astrophysics Data System (ADS)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Developing methods for distributed computing currently receives much attention, and one approach is the use of multi-agent systems. Distributed computing organized on conventional networked computers can face security threats posed by the computational processes themselves. The authors have developed a unified agent algorithm for controlling the operation of computing network nodes, with networked PCs used as computing nodes. The proposed multi-agent control system makes it possible, in a short time, to harness the processing power of the computers of any existing network to solve large tasks by creating a distributed computing system. Agents deployed on a computer network can configure the distributed computing system, distribute the computational load among the agent-operated computers, and optimize the distributed computing system according to the computing power of the computers on the network. The number of computers in the system can be increased by connecting new computers to it, which increases the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computing. This organization of the distributed computing system reduces problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment (dynamic change of the number of computers on the network). The developed multi-agent system detects falsification of the results of the distributed system, which could otherwise lead to wrong decisions, and in addition checks and corrects wrong results.

  12. Fast parametric relationships for the large-scale reservoir simulation of mixed CH4-CO2 gas hydrate systems

    DOE PAGES

    Reagan, Matthew T.; Moridis, George J.; Seim, Katie S.

    2017-03-27

    A recent Department of Energy field test on the Alaska North Slope has increased interest in the ability to simulate systems of mixed CO2-CH4 hydrates. However, the physically realistic simulation of mixed hydrates is not yet a fully solved problem. Limited quantitative laboratory data leads to the use of various ab initio, statistical mechanical, or other mathematical representations of mixed-hydrate phase behavior. Few of these methods are suitable for inclusion in reservoir simulations, particularly for systems with large numbers of grid elements, 3D systems, or systems with complex geometric configurations. In this paper, we present a set of fast parametric relationships describing the thermodynamic properties and phase behavior of a mixed methane-carbon dioxide hydrate system. We use well-known, off-the-shelf hydrate physical properties packages to generate a sufficiently large dataset, select the most convenient and efficient mathematical forms, and fit the data to those forms to create a physical properties package suitable for inclusion in the TOUGH+ family of codes. Finally, the mapping of the phase and thermodynamic space reveals the complexity of the mixed-hydrate system and allows understanding of the thermodynamics at a level beyond what much of the existing laboratory data and literature currently offer.

  13. Development and Verification of a Novel Robot-Integrated Fringe Projection 3D Scanning System for Large-Scale Metrology.

    PubMed

    Du, Hui; Chen, Xiaobo; Xi, Juntong; Yu, Chengyi; Zhao, Bao

    2017-12-12

    Large-scale surfaces are prevalent in advanced manufacturing industries, and 3D profilometry of these surfaces plays a pivotal role in quality control. This paper proposes a novel and flexible large-scale 3D scanning system assembled by combining a robot, a binocular structured light scanner and a laser tracker. The measurement principle and system construction of the integrated system are introduced. A mathematical model is established for the global data fusion. Subsequently, a robust method is introduced for the establishment of the end coordinate system. For hand-eye calibration, the calibration ball is observed by the scanner and the laser tracker simultaneously. With these data, the hand-eye relationship is solved, and an algorithm is built to obtain the transformation matrix between the end coordinate system and the world coordinate system. A validation experiment is designed to verify the proposed algorithms. First, a hand-eye calibration experiment is implemented and the transformation matrix is computed. Then a car body rear is measured 22 times in order to verify the global data fusion algorithm. The 3D shape of the rear is reconstructed successfully. To evaluate the precision of the proposed method, a metric tool is built and the results are presented.
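
    When the same ball centers are observed in two coordinate systems, the rigid transformation between them can be recovered in closed form. The sketch below uses the generic SVD-based (Kabsch) solution as a stand-in for the paper's hand-eye formulation, with synthetic data.

```python
import numpy as np

def rigid_transform(P, Q):
    """Find R, t minimizing sum ||R p_i + t - q_i||^2 (Kabsch/Procrustes).
    P, Q: (N, 3) arrays of corresponding points in the two frames."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

# synthetic ball-center coordinates seen by the scanner (P) and tracker (Q)
rng = np.random.default_rng(1)
P = rng.random((10, 3))
R_true, t_true = np.eye(3), np.array([1.0, 2.0, 3.0])
Q = P @ R_true.T + t_true
R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))   # True True
```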

  14. Fast parametric relationships for the large-scale reservoir simulation of mixed CH4-CO2 gas hydrate systems

    NASA Astrophysics Data System (ADS)

    Reagan, Matthew T.; Moridis, George J.; Seim, Katie S.

    2017-06-01

    A recent Department of Energy field test on the Alaska North Slope has increased interest in the ability to simulate systems of mixed CO2-CH4 hydrates. However, the physically realistic simulation of mixed hydrates is not yet a fully solved problem. Limited quantitative laboratory data leads to the use of various ab initio, statistical mechanical, or other mathematical representations of mixed-hydrate phase behavior. Few of these methods are suitable for inclusion in reservoir simulations, particularly for systems with large numbers of grid elements, 3D systems, or systems with complex geometric configurations. In this work, we present a set of fast parametric relationships describing the thermodynamic properties and phase behavior of a mixed methane-carbon dioxide hydrate system. We use well-known, off-the-shelf hydrate physical properties packages to generate a sufficiently large dataset, select the most convenient and efficient mathematical forms, and fit the data to those forms to create a physical properties package suitable for inclusion in the TOUGH+ family of codes. The mapping of the phase and thermodynamic space reveals the complexity of the mixed-hydrate system and allows understanding of the thermodynamics at a level beyond what much of the existing laboratory data and literature currently offer.

  15. Fast parametric relationships for the large-scale reservoir simulation of mixed CH4-CO2 gas hydrate systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reagan, Matthew T.; Moridis, George J.; Seim, Katie S.

    A recent Department of Energy field test on the Alaska North Slope has increased interest in the ability to simulate systems of mixed CO2-CH4 hydrates. However, the physically realistic simulation of mixed hydrates is not yet a fully solved problem. Limited quantitative laboratory data leads to the use of various ab initio, statistical mechanical, or other mathematical representations of mixed-hydrate phase behavior. Few of these methods are suitable for inclusion in reservoir simulations, particularly for systems with large numbers of grid elements, 3D systems, or systems with complex geometric configurations. In this paper, we present a set of fast parametric relationships describing the thermodynamic properties and phase behavior of a mixed methane-carbon dioxide hydrate system. We use well-known, off-the-shelf hydrate physical properties packages to generate a sufficiently large dataset, select the most convenient and efficient mathematical forms, and fit the data to those forms to create a physical properties package suitable for inclusion in the TOUGH+ family of codes. Finally, the mapping of the phase and thermodynamic space reveals the complexity of the mixed-hydrate system and allows understanding of the thermodynamics at a level beyond what much of the existing laboratory data and literature currently offer.

  16. Eulerian Lagrangian Adaptive Fup Collocation Method for solving the conservative solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Gotovac, Hrvoje; Srzic, Veljko

    2014-05-01

    Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or all of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales, and unphysical numerical oscillations (e.g., Herrera et al., 2009; Bosso et al., 2012). In this work we present the Eulerian Lagrangian Adaptive Fup Collocation Method (ELAFCM), based on Fup basis functions and a collocation approach for spatial approximation, together with explicit stabilized Runge-Kutta-Chebyshev temporal integration (public domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines: they are compactly supported basis functions that exactly describe algebraic polynomials and enable multiresolution adaptive analysis (MRA). MRA is performed here via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy and a near minimum computational cost. FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones. According to our recent achievements, there is no need to solve a large linear system on the adaptive grid, because each Fup coefficient is obtained from predefined formulas that equate the Fup expansion around the corresponding collocation point with a particular collocation operator based on a few surrounding solution values. Furthermore, each Fup coefficient can be obtained independently, which is perfectly suited for parallel processing. The adaptive grid at each time step is obtained from the solution of the previous time step or the initial conditions, together with an advective Lagrangian step in the current time step according to the velocity field and continuous streamlines. On the other hand, we apply the explicit stabilized routine SERK2 to the dispersive Eulerian part of the solution in the current time step on the obtained spatial adaptive grid. The overall adaptive concept does not require solving large linear systems for the spatial and temporal approximation of conservative transport. This new Eulerian-Lagrangian collocation scheme also resolves all the numerical problems mentioned above, thanks to its adaptive nature and its ability to control numerical errors in space and time. The proposed method solves advection in a Lagrangian way, eliminating the problems of Eulerian methods, while the optimal collocation grid efficiently describes the solution and boundary conditions, eliminating the use of large numbers of particles and other problems of Lagrangian methods. Finally, numerical tests show that this approach enables not only an accurate velocity field, but also conservative transport even in highly heterogeneous porous media, resolving all spatial and temporal scales of the concentration field.

  17. Use of graphics in the design office at the Military Aircraft Division of the British Aircraft Corporation

    NASA Technical Reports Server (NTRS)

    Coles, W. A.

    1975-01-01

    The CAD/CAM interactive computer graphics system was described; uses to which it has been put were shown, and current developments of the system were outlined. The system supports batch, time sharing, and fully interactive graphic processing. Engineers using the system may switch between these methods of data processing and problem solving to make the best use of the available resources. It is concluded that the introduction of on-line computing in the form of teletypes, storage tubes, and fully interactive graphics has resulted in large increases in productivity and reduced timescales in the geometric computing, numerical lofting and part programming areas, together with a greater utilization of the system in the technical departments.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garnett, Robert W.

    Quite a broad range of accelerators have been applied to solving many of the challenging problems related to homeland security and defense. These accelerator systems range from relatively small, simple, and compact, to large and complex, based on the specific application requirements. They have been used or proposed as sources of primary and secondary probe beams for applications such as radiography and to induce specific reactions that are key signatures for detecting conventional explosives or fissile material. A brief overview and description of these accelerator systems, their specifications, and application will be presented. Some recent technology trends will also be discussed.

  19. Overview of Accelerators with Potential Use in Homeland Security

    NASA Astrophysics Data System (ADS)

    Garnett, Robert W.

    Quite a broad range of accelerators have been applied to solving many of the challenging problems related to homeland security and defense. These accelerator systems range from relatively small, simple, and compact, to large and complex, based on the specific application requirements. They have been used or proposed as sources of primary and secondary probe beams for applications such as radiography and to induce specific reactions that are key signatures for detecting conventional explosives or fissile material. A brief overview and description of these accelerator systems, their specifications, and application will be presented. Some recent technology trends will also be discussed.

  20. Expansion and improvements of the FORMA system for response and load analysis. Volume 1: Programming manual

    NASA Technical Reports Server (NTRS)

    Wohlen, R. L.

    1976-01-01

    Techniques are presented for the solution of structural dynamic systems on an electronic digital computer using FORMA (FORTRAN Matrix Analysis). FORMA is a library of subroutines coded in FORTRAN 4 for the efficient solution of structural dynamics problems. These subroutines are in the form of building blocks that can be put together to solve a large variety of structural dynamics problems. The obvious advantage of the building block approach is that programming and checkout time are limited to that required for putting the blocks together in the proper order.

  1. A computerized compensator design algorithm with launch vehicle applications

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.; Mcdaniel, W. L., Jr.

    1976-01-01

    This short paper presents a computerized algorithm for the design of compensators for large launch vehicles. The algorithm is applicable to the design of compensators for linear, time-invariant, control systems with a plant possessing a single control input and multioutputs. The achievement of frequency response specifications is cast into a strict constraint mathematical programming format. An improved solution algorithm for solving this type of problem is given, along with the mathematical necessities for application to systems of the above type. A computer program, compensator improvement program (CIP), has been developed and applied to a pragmatic space-industry-related example.

  2. Noncommutative quantum mechanics

    NASA Astrophysics Data System (ADS)

    Gamboa, J.; Loewe, M.; Rojas, J. C.

    2001-09-01

    A general noncommutative quantum mechanical system in a central potential V=V(r) in two dimensions is considered. The spectrum is bounded from below and, for large values of the noncommutativity parameter θ, we find an explicit expression for the eigenvalues. In fact, any quantum mechanical system with these characteristics is equivalent to a commutative one in such a way that the interaction V(r) is replaced by V = V(H_HO, L_z), where H_HO is the Hamiltonian of the two-dimensional harmonic oscillator and L_z is the z component of the angular momentum. For other finite values of θ the model can be solved by using perturbation theory.
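
    For readers who want the mechanism behind this equivalence, here is a hedged sketch using the standard Bopp-shift construction; the shift itself is textbook material and the notation is assumed, not copied from this paper:

```latex
% Hedged sketch (standard Bopp shift; not taken from the paper).
% Noncommuting coordinates with [\hat{x}_1,\hat{x}_2] = i\theta can be
% realized on ordinary phase space as
\[
\hat{x}_1 = x_1 - \frac{\theta}{2\hbar}\,p_2, \qquad
\hat{x}_2 = x_2 + \frac{\theta}{2\hbar}\,p_1 ,
\]
% and since x_1 commutes with p_2 (and x_2 with p_1), one finds exactly
\[
\hat{r}^{\,2} = r^2 + \frac{\theta^2}{4\hbar^2}\,p^2
              - \frac{\theta}{\hbar}\,L_z ,
\]
% so any central potential V(\hat{r}) is a function of the 2-D harmonic
% oscillator combination r^2 + const \cdot p^2 and of L_z, which is the
% origin of the replacement V(r) -> V(H_HO, L_z) quoted above.
```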

  3. Portable parallel portfolio optimization in the Aurora Financial Management System

    NASA Astrophysics Data System (ADS)

    Laure, Erwin; Moritsch, Hans

    2001-07-01

    Financial planning problems are formulated as large-scale, stochastic, multiperiod, tree-structured optimization problems. An efficient technique for solving this kind of problem is the nested Benders decomposition method. In this paper we present a parallel, portable, asynchronous implementation of this technique. To achieve our portability goals we chose the programming language Java for our implementation and used a high-level Java-based framework, called OpusJava, for expressing the parallelism potential as well as synchronization constraints. Our implementation is embedded within a modular decision support tool for portfolio and asset liability management, the Aurora Financial Management System.

  4. A visual programming environment for the Navier-Stokes computer

    NASA Technical Reports Server (NTRS)

    Tomboulian, Sherryl; Crockett, Thomas W.; Middleton, David

    1988-01-01

    The Navier-Stokes computer is a high-performance, reconfigurable, pipelined machine designed to solve large computational fluid dynamics problems. Due to the complexity of the architecture, development of effective, high-level language compilers for the system appears to be a very difficult task. Consequently, a visual programming methodology has been developed which allows users to program the system at an architectural level by constructing diagrams of the pipeline configuration. These schematic program representations can then be checked for validity and automatically translated into machine code. The visual environment is illustrated by using a prototype graphical editor to program an example problem.

  5. Heat Pipes

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Heat Pipes were originally developed by NASA and the Los Alamos Scientific Laboratory during the 1960s to dissipate excessive heat build-up in critical areas of spacecraft and maintain even temperatures of satellites. Heat pipes are tubular devices where a working fluid alternately evaporates and condenses, transferring heat from one region of the tube to another. KONA Corporation refined and applied the same technology to solve complex heating requirements of hot runner systems in injection molds. KONA Hot Runner Systems are used throughout the plastics industry for products ranging in size from tiny medical devices to large single cavity automobile bumpers and instrument panels.

  7. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE PAGES

    Lin, P. T.; Shadid, J. N.; Hu, J. J.; ...

    2017-11-06

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.

  8. A Distributed Problem-Solving Approach to Rule Induction: Learning in Distributed Artificial Intelligence Systems

    DTIC Science & Technology

    1990-11-01

    Only fragments of this record survive extraction. The recoverable citation is: Shaw, M., "A Distributed Problem-Solving Approach to Rule Induction: Learning in Distributed Artificial Intelligence Systems," in Distributed Artificial Intelligence, vol. II, L. Gasser and M. Huhns (eds.), Pitman, London, 1989, pp. 413-430.

  9. Predicting protein structures with a multiplayer online game.

    PubMed

    Cooper, Seth; Khatib, Firas; Treuille, Adrien; Barbero, Janos; Lee, Jeehyung; Beenen, Michael; Leaver-Fay, Andrew; Baker, David; Popović, Zoran; Players, Foldit

    2010-08-05

    People exert large amounts of problem-solving effort playing computer games. Simple image- and text-recognition tasks have been successfully 'crowd-sourced' through games, but it is not clear if more complex scientific problems can be solved with human-directed computing. Protein structure prediction is one such problem: locating the biologically relevant native conformation of a protein is a formidable computational challenge given the very large size of the search space. Here we describe Foldit, a multiplayer online game that engages non-scientists in solving hard prediction problems. Foldit players interact with protein structures using direct manipulation tools and user-friendly versions of algorithms from the Rosetta structure prediction methodology, while they compete and collaborate to optimize the computed energy. We show that top-ranked Foldit players excel at solving challenging structure refinement problems in which substantial backbone rearrangements are necessary to achieve the burial of hydrophobic residues. Players working collaboratively develop a rich assortment of new strategies and algorithms; unlike computational approaches, they explore not only the conformational space but also the space of possible search strategies. The integration of human visual problem-solving and strategy development capabilities with traditional computational algorithms through interactive multiplayer games is a powerful new approach to solving computationally-limited scientific problems.

  10. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill; Feiereisen, William (Technical Monitor)

    2000-01-01

    The term "Grid" refers to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. The vision for NASN's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks that will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focussed primarily on two types of users: The scientist / design engineer whose primary interest is problem solving (e.g., determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user if the tool designer: The computational scientists who convert physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. This paper describes the current state of IPG (the operational testbed), the set of capabilities being put into place for the operational prototype IPG, as well as some of the longer term R&D tasks.

  11. Mathematical modeling and fuzzy availability analysis for serial processes in the crystallization system of a sugar plant

    NASA Astrophysics Data System (ADS)

    Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram

    2017-03-01

    The binary-state (success or failure) assumptions used in conventional reliability analysis are inappropriate for complex industrial systems due to the lack of sufficient probabilistic information. For large complex systems, the uncertainty of each individual parameter enhances the uncertainty of the system reliability. In this paper, the concept of fuzzy reliability is used for reliability analysis of the system, and the effect of the coverage factor and of the failure and repair rates of subsystems on fuzzy availability is analyzed for the fault-tolerant crystallization system of a sugar plant. Mathematical modeling of the system is carried out using the mnemonic rule to derive the Chapman-Kolmogorov differential equations. These governing differential equations are solved with the fourth-order Runge-Kutta method.
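
    As a generic illustration of the last step, here is a minimal sketch of integrating Chapman-Kolmogorov equations with fourth-order Runge-Kutta for a two-state repairable unit; the two-state model and the rates are assumptions for illustration, not the paper's multi-subsystem crystallization model:

```python
# Minimal sketch (illustrative only): integrate the Chapman-Kolmogorov
# equations dP/dt = Q @ P for a two-state repairable unit (up/down)
# with classical fourth-order Runge-Kutta.
import numpy as np

lam, mu = 0.02, 0.5          # assumed failure and repair rates (per hour)
Q = np.array([[-lam,  mu],   # generator matrix: columns sum to zero
              [ lam, -mu]])

def rk4_step(P, h):
    k1 = Q @ P
    k2 = Q @ (P + 0.5 * h * k1)
    k3 = Q @ (P + 0.5 * h * k2)
    k4 = Q @ (P + h * k3)
    return P + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

P = np.array([1.0, 0.0])     # start in the up state
h = 0.1
for _ in range(int(1000 / h)):
    P = rk4_step(P, h)

print("availability ~", P[0])             # transient availability
print("steady state  ", mu / (lam + mu))  # analytic check
```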

  12. Convergence of the Light-Front Coupled-Cluster Method in Scalar Yukawa Theory

    NASA Astrophysics Data System (ADS)

    Usselman, Austin

    We use Fock-state expansions and the Light-Front Coupled-Cluster (LFCC) method to study mass eigenvalue problems in quantum field theory. Specifically, we study convergence of the method in scalar Yukawa theory. In this theory, a single charged particle is surrounded by a cloud of neutral particles. The charged particle can create or annihilate neutral particles, causing the n-particle state to depend on the (n+1)- and (n-1)-particle states. The Fock-state expansion leads to an infinite set of coupled equations, where truncation is required. The wave functions for the particle states are expanded in a basis of symmetric polynomials, and a generalized eigenvalue problem is solved for the mass eigenvalue. The mass eigenvalue problem is solved for multiple values of the coupling strength while the number of particle states and the polynomial basis order are increased, and convergence of the mass eigenvalue solutions is obtained. Three mass ratios between the charged particle and the neutral particles were studied: a massive charged particle, equal masses, and massive neutral particles. Relative probabilities between states can also be explored for a more detailed understanding of the process of convergence with respect to the number of Fock sectors. The reliance on higher-order particle states depended on how large the mass of the charged particle was: the higher the mass of the charged particle, the more the system depended on higher-order particle states. The LFCC method solves this same mass eigenvalue problem using an exponential operator, which can be truncated to form a finite system of equations that can be solved using a built-in system solver provided in most computational environments, such as MATLAB and Mathematica. The first approximation in the LFCC method allows only one particle to be created by the new operator and proved not powerful enough to match the Fock-state expansion. The second-order approximation allows one and two particles to be created by the new operator and converged to the Fock-state expansion results. This showed the LFCC method to be a reliable replacement for the Fock-state expansion in solving quantum field theory problems.

  13. Reduze - Feynman integral reduction in C++

    NASA Astrophysics Data System (ADS)

    Studerus, C.

    2010-07-01

    Reduze is a computer program for reducing Feynman integrals to master integrals employing a Laporta algorithm. The program is written in C++ and uses classes provided by the GiNaC library to perform the simplifications of the algebraic prefactors in the system of equations. Reduze offers the possibility to run reductions in parallel.

    Program summary
    Program title: Reduze
    Catalogue identifier: AEGE_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGE_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: yes
    No. of lines in distributed program, including test data, etc.: 55 433
    No. of bytes in distributed program, including test data, etc.: 554 866
    Distribution format: tar.gz
    Programming language: C++
    Computer: All
    Operating system: Unix/Linux
    Number of processors used: problem dependent; more than one possible, but not arbitrarily many
    RAM: depends on the complexity of the system
    Classification: 4.4, 5
    External routines: CLN (http://www.ginac.de/CLN/), GiNaC (http://www.ginac.de/)
    Nature of problem: solving large systems of linear equations with Feynman integrals as unknowns and rational polynomials as prefactors
    Solution method: a Gauss/Laporta algorithm to solve the system of equations
    Restrictions: limitations depend on the complexity of the system (number of equations, number of kinematic invariants)
    Running time: depends on the complexity of the system

  14. A conservative MHD scheme on unstructured Lagrangian grids for Z-pinch hydrodynamic simulations

    NASA Astrophysics Data System (ADS)

    Wu, Fuyuan; Ramis, Rafael; Li, Zhenghong

    2018-03-01

    A new algorithm to model resistive magnetohydrodynamics (MHD) in Z-pinches has been developed. Two-dimensional axisymmetric geometry with azimuthal magnetic field Bθ is considered. Discretization is carried out using unstructured meshes made up of arbitrarily connected polygons. The algorithm is fully conservative for mass, momentum, and energy. Matter energy and magnetic energy are managed separately. The diffusion of the magnetic field is solved using a derivative of the Symmetric-Semi-Implicit scheme, Livne et al. (1985) [23], where unconditional stability is obtained without needing to solve large sparse systems of equations. This MHD package has been integrated into the radiation-hydrodynamics code MULTI-2D, Ramis et al. (2009) [20], which includes hydrodynamics, laser energy deposition, heat conduction, and radiation transport. This setup allows the simulation of Z-pinch configurations relevant to Inertial Confinement Fusion.

  15. An Automatic Orthonormalization Method for Solving Stiff Boundary-Value Problems

    NASA Astrophysics Data System (ADS)

    Davey, A.

    1983-08-01

    A new initial-value method is described, based on a remark by Drury, for solving stiff linear differential two-point eigenvalue and boundary-value problems. The method is extremely reliable, it is especially suitable for high-order differential systems, and it is capable of accommodating realms of stiffness which other methods cannot reach. The key idea behind the method is to decompose the stiff differential operator into two non-stiff operators, one of which is nonlinear. The nonlinear one is specially chosen so that it advances an orthonormal frame; indeed, the method is essentially a kind of automatic orthonormalization. The second operator is auxiliary, but it is needed to determine the required function. The usefulness of the method is demonstrated by calculating some eigenfunctions for an Orr-Sommerfeld problem when the Reynolds number is as large as 10^6.

  16. Extended linear detection range for optical tweezers using image-plane detection scheme

    NASA Astrophysics Data System (ADS)

    Hajizadeh, Faegheh; Masoumeh Mousavi, S.; Khaksar, Zeinab S.; Reihani, S. Nader S.

    2014-10-01

    The ability to measure piconewton- and femtonewton-range forces using optical tweezers (OT) strongly relies on the sensitivity of the detection system. We show that the commonly used back-focal-plane detection method provides a linear response range which is shorter than that of the restoring force of OT for large beads. This limits the measurable force range of OT. We show, both theoretically and experimentally, that utilizing a second laser beam for tracking could solve the problem. We also propose a new detection scheme in which the quadrant photodiode is positioned at the plane optically conjugate to the object plane (image plane). This method solves the problem without the need for a second laser beam, for the bead sizes that are commonly used in force spectroscopy applications of OT.

  17. Structure preserving parallel algorithms for solving the Bethe–Salpeter eigenvalue problem

    DOE PAGES

    Shao, Meiyue; da Jornada, Felipe H.; Yang, Chao; ...

    2015-10-02

    The Bethe–Salpeter eigenvalue problem is a dense structured eigenvalue problem arising from the discretized Bethe–Salpeter equation in the context of computing exciton energies and states. A computational challenge is that at least half of the eigenvalues and the associated eigenvectors are desired in practice. In this paper, we establish the equivalence between Bethe–Salpeter eigenvalue problems and real Hamiltonian eigenvalue problems. Based on theoretical analysis, structure preserving algorithms for a class of Bethe–Salpeter eigenvalue problems are proposed. We also show that for this class of problems all eigenvalues obtained from the Tamm–Dancoff approximation are overestimated. In order to solve large-scale problems of practical interest, we discuss parallel implementations of our algorithms targeting distributed memory systems. Finally, several numerical examples are presented to demonstrate the efficiency and accuracy of our algorithms.
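
    The "dense structured" problem referred to here is commonly written in the BSE literature in the following form; a hedged sketch with assumed notation, not copied from the paper:

```latex
% Sketch of the structure (standard BSE form in the literature):
\[
H \begin{pmatrix} x \\ y \end{pmatrix}
  = \lambda \begin{pmatrix} x \\ y \end{pmatrix},
\qquad
H = \begin{pmatrix} A & B \\ -\bar{B} & -\bar{A} \end{pmatrix},
\quad A = A^{\mathsf{H}},\; B = B^{\mathsf{T}},
\]
% so eigenvalues occur in pairs +/- lambda. The Tamm-Dancoff
% approximation sets B = 0 and diagonalizes the Hermitian block A
% alone, which is the approximation whose eigenvalues the paper shows
% to be overestimates.
```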

  18. Implementation of parallel moment equations in NIMROD

    NASA Astrophysics Data System (ADS)

    Lee, Hankyu Q.; Held, Eric D.; Ji, Jeong-Young

    2017-10-01

    As collisionality is low (the Knudsen number is large) in many plasma applications, kinetic effects become important, particularly in parallel dynamics for magnetized plasmas. Fluid models can capture some kinetic effects when integral parallel closures are adopted. The adiabatic and linear approximations are used in solving general moment equations to obtain the integral closures. In this work, we present an effort to incorporate non-adiabatic (time-dependent) and nonlinear effects into parallel closures. Instead of analytically solving the approximate moment system, we implement exact parallel moment equations in the NIMROD fluid code. The moment code is expected to provide a natural convergence scheme by increasing the number of moments. Work in collaboration with the PSI Center and supported by the U.S. DOE under Grant Nos. DE-SC0014033, DE-SC0016256, and DE-FG02-04ER54746.

  19. Adding intelligence to scientific data management

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Short, Nicholas M., Jr.; Treinish, Lloyd A.

    1989-01-01

    NASA's plans to solve some of the problems of handling large-scale scientific data bases by turning to artificial intelligence (AI) are discussed. The growth of the information glut and the ways that AI can help alleviate the resulting problems are reviewed. The employment of the Intelligent User Interface prototype, where the user will generate his own natural language query with the assistance of the system, is examined. Spatial data management, scientific data visualization, and data fusion are discussed.

  20. A Blast of Cool Air

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Unable to solve their engineering problem with a rotor in their Orbital Vane product, DynEco Corporation turned to Kennedy Space Center for help. KSC engineers determined that the compressor rotor was causing a large concentration of stress, which led to cracking and instant rotor failure. NASA redesigned the lubrication system, which allowed the company to move forward with its compressor that has no rubbing parts. The Orbital Vane is a refrigerant compressor suitable for mobile air conditioning and refrigeration.

  1. The Next Frontier in Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarrao, John

    2016-11-16

    Exascale computing refers to computing systems capable of at least one exaflop, or a billion billion (10^18) calculations per second. That is 50 times faster than the most powerful supercomputers being used today and represents a thousand-fold increase over the first petascale computer, which came into operation in 2008. How we use these large-scale simulation resources is the key to solving some of today's most pressing problems, including clean energy production, nuclear reactor lifetime extension and nuclear stockpile aging.

  2. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    NASA Astrophysics Data System (ADS)

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle

    2016-08-01

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N2) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded wide-spread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. The results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.
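
    For orientation, below is the brute-force O(N^2) evaluation that an FMM is designed to replace; a minimal sketch for a 1/r kernel, with all names and sizes chosen for illustration rather than taken from the authors' libraries:

```python
# Minimal sketch of the O(N^2) direct sum that a kernel-independent FMM
# accelerates to O(N); illustrative only, not the authors' library code.
import numpy as np

rng = np.random.default_rng(0)
N = 2000
src = rng.random((N, 3))          # source positions
q = rng.standard_normal(N)        # source strengths (charges/moments)

def direct_sum(points, charges):
    """Evaluate phi_i = sum_{j != i} q_j / |x_i - x_j| by brute force."""
    phi = np.zeros(len(points))
    for i, x in enumerate(points):
        r = np.linalg.norm(points - x, axis=1)
        r[i] = np.inf                      # skip the self-interaction
        phi[i] = np.sum(charges / r)
    return phi                             # O(N^2) work overall

phi = direct_sum(src, q)
```

    An FMM reduces this cost by grouping well-separated sources into a small number of equivalent expansions, so each target interacts directly only with its near neighbors and with O(1) far-field expansions per level of the tree.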

  4. New Method for Solving Inductive Electric Fields in the Ionosphere

    NASA Astrophysics Data System (ADS)

    Vanhamäki, H.

    2005-12-01

    We present a new method for calculating inductive electric fields in the ionosphere. It is well established that on large scales the ionospheric electric field is a potential field. This is understandable, since the temporal variations of large scale current systems are generally quite slow, on timescales of several minutes, so inductive effects should be small. However, studies of Alfven wave reflection have indicated that in some situations inductive phenomena could well play a significant role in the reflection process, and thus modify the nature of ionosphere-magnetosphere coupling. The inputs to our calculation method are the time series of the potential part of the ionospheric electric field together with the Hall and Pedersen conductances. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfven wave reflection from a uniformly conducting ionosphere.
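
    The split the method relies on can be stated compactly; a hedged sketch using the standard Helmholtz decomposition together with Faraday's law, in generic symbols rather than the paper's notation:

```latex
% Sketch of the decomposition (standard Helmholtz split + Faraday's law):
\[
\mathbf{E} = \mathbf{E}_{\mathrm{pot}} + \mathbf{E}_{\mathrm{ind}},
\qquad
\nabla \times \mathbf{E}_{\mathrm{pot}} = 0, \qquad
\nabla \cdot \mathbf{E}_{\mathrm{ind}} = 0,
\]
\[
\nabla \times \mathbf{E}_{\mathrm{ind}}
  = -\frac{\partial \mathbf{B}}{\partial t},
\]
% so, given the time series of the curl-free (potential) part and the
% conductances, the divergence-free (induced) part is what the CECS
% expansion solves for.
```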

  5. On the decentralized control of large-scale systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chong, C.

    1973-01-01

    The decentralized control of stochastic large scale systems was considered. Particular emphasis was given to control strategies which utilize decentralized information and can be computed in a decentralized manner. The deterministic constrained optimization problem is generalized to the stochastic case when each decision variable depends on different information and the constraint is only required to be satisfied on the average. For problems with a particular structure, a hierarchical decomposition is obtained. For the stochastic control of dynamic systems with different information sets, a new kind of optimality is proposed which exploits the coupled nature of the dynamic system. The subsystems are assumed to be uncoupled and then certain constraints are required to be satisfied, either in an off-line or an on-line fashion. For off-line coordination, a hierarchical approach to solving the problem is obtained. The lower level problems are all uncoupled. For on-line coordination, distinction is made between open loop feedback optimal coordination and closed loop optimal coordination.

  6. An Implicit Solver on A Parallel Block-Structured Adaptive Mesh Grid for FLASH

    NASA Astrophysics Data System (ADS)

    Lee, D.; Gopal, S.; Mohapatra, P.

    2012-07-01

    We introduce a fully implicit solver for FLASH based on a Jacobian-Free Newton-Krylov (JFNK) approach with an appropriate preconditioner. The main goal of developing this JFNK-type implicit solver is to provide efficient high-order numerical algorithms and methodology for simulating stiff systems of differential equations on large-scale parallel computer architectures. A large number of natural problems in nonlinear physics involve a wide range of spatial and time scales of interest. A system that encompasses such a wide magnitude of scales is described as "stiff." A stiff system can arise in many different fields of physics, including fluid dynamics/aerodynamics, laboratory/space plasma physics, low Mach number flows, reactive flows, radiation hydrodynamics, and geophysical flows. One of the big challenges in solving such a stiff system using current-day computational resources lies in resolving time and length scales varying by several orders of magnitude. We introduce FLASH's preliminary implementation of a time-accurate JFNK-based implicit solver in the framework of FLASH's unsplit hydro solver.
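
    A minimal sketch of the JFNK idea on a toy 1-D nonlinear Poisson problem, using SciPy's newton_krylov, which approximates Jacobian-vector products as J v ~ (F(u + eps v) - F(u)) / eps inside a Krylov linear solve, so no Jacobian matrix is ever formed. The test problem and tolerances are assumptions; this is not the FLASH implementation:

```python
# Illustrative JFNK solve of u'' = exp(u) on (0,1), u(0) = u(1) = 0,
# discretized with second-order finite differences.
import numpy as np
from scipy.optimize import newton_krylov

n = 100
h = 1.0 / (n + 1)

def residual(u):
    """Nonlinear residual F(u) = u'' - exp(u) with zero Dirichlet BCs."""
    d2u = np.empty_like(u)
    d2u[1:-1] = (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
    d2u[0] = (-2 * u[0] + u[1]) / h**2     # left boundary value is 0
    d2u[-1] = (u[-2] - 2 * u[-1]) / h**2   # right boundary value is 0
    return d2u - np.exp(u)

# Matrix-free Newton-Krylov: the Jacobian acts only through F evaluations.
u = newton_krylov(residual, np.zeros(n), method="lgmres")
print("max |u| =", np.abs(u).max())
```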

  7. Linear CCD attitude measurement system based on the identification of the auxiliary array CCD

    NASA Astrophysics Data System (ADS)

    Hu, Yinghui; Yuan, Feng; Li, Kai; Wang, Yan

    2015-10-01

    To address high-precision attitude measurement of flying targets over a large volume and a large field of view, existing measurement methods are compared and a system is proposed in which two area-array CCDs assist in target identification for an attitude measurement system built around three linear CCDs and multiple cooperative targets. The proposal addresses the nonlinear system errors and the excessive number of calibration parameters of the existing nine-linear-CCD spectroscopic test system, whose constraints among camera positions are overly complicated. Mathematical models of the binocular vision system and of the three-linear-CCD test system are established. Three red LEDs, whose coordinates are given in advance by a coordinate measuring machine, form a triangle of cooperative target points; three blue LEDs are added along the sides of this triangle as auxiliaries, so that the area-array CCDs can more easily identify the three red LEDs, and each linear CCD camera is fitted with a red filter that blocks the blue LED points while also reducing stray light. The area-array CCDs measure the spots, identify the red LEDs, and compute their spatial coordinates, while the linear CCDs measure the three red LED spots to solve the linear CCD test system, from which 27 candidate solutions can be drawn. Using the coordinates measured by the area-array CCDs to assist the linear CCDs achieves spot identification and solves the difficult problem of multi-target identification with linear CCDs. Exploiting the imaging characteristics of linear CCDs, a special cylindrical lens system with a telecentric optical design was developed, so that the energy center of the spot position changes little in the direction perpendicular to the optical axis over the depth range of convergence, ensuring high-precision image quality. The overall test system improves both the speed and the precision of spatial object attitude measurement.

  8. ELECTRONIC DIGITAL COMPUTER

    DOEpatents

    Stone, J.J. Jr.; Bettis, E.S.; Mann, E.R.

    1957-10-01

    The electronic digital computer is designed to solve systems involving a plurality of simultaneous linear equations. The computer can solve a system which converges rather rapidly when using Von Seidel's method of approximation and performs the summations required for solving for the unknown terms by a method of successive approximations.
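
    A modern software rendering of the successive-approximation scheme the patent implements in hardware, today usually called Gauss-Seidel iteration; the 3x3 system below is an assumed example, and convergence is guaranteed for strictly diagonally dominant matrices (the "converges rather rapidly" case the patent mentions):

```python
# Gauss-Seidel ("von Seidel") successive approximation for A x = b;
# illustrative sketch, not the patent's circuit description.
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=10_000):
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            # use the freshest values of x as soon as they are available
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])   # strictly diagonally dominant example
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b))
print(np.linalg.solve(A, b))      # direct solve for comparison
```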

  9. An electromagnetism-like metaheuristic for open-shop problems with no buffer

    NASA Astrophysics Data System (ADS)

    Naderi, Bahman; Najafi, Esmaeil; Yazdani, Mehdi

    2012-12-01

    This paper considers open-shop scheduling with no intermediate buffer to minimize total tardiness. This problem occurs in many production settings, such as the plastic molding, chemical, and food processing industries. The paper mathematically formulates the problem as a mixed integer linear program, by which the problem can be solved optimally. The paper also develops a novel metaheuristic based on an electromagnetism algorithm to solve large-sized problems. The paper conducts two computational experiments. The first includes small-sized instances by which the mathematical model and the general performance of the proposed metaheuristic are evaluated. The second evaluates the metaheuristic on its performance in solving some large-sized instances. The results show that the model and algorithm are effective in dealing with the problem.

  10. Knowledge acquisition from natural language for expert systems based on classification problem-solving methods

    NASA Technical Reports Server (NTRS)

    Gomez, Fernando

    1989-01-01

    It is shown how certain kinds of domain independent expert systems based on classification problem-solving methods can be constructed directly from natural language descriptions by a human expert. The expert knowledge is not translated into production rules. Rather, it is mapped into conceptual structures which are integrated into long-term memory (LTM). The resulting system is one in which problem-solving, retrieval and memory organization are integrated processes. In other words, the same algorithm and knowledge representation structures are shared by these processes. As a result of this, the system can answer questions, solve problems or reorganize LTM.

  11. Design concepts for the development of cooperative problem-solving systems

    NASA Technical Reports Server (NTRS)

    Smith, Philip J.; Mccoy, Elaine; Layton, Chuck; Bihari, Tom

    1992-01-01

    There are many problem-solving tasks that are too complex to fully automate given the current state of technology. Nevertheless, significant improvements in overall system performance could result from the introduction of well-designed computer aids. We have been studying the development of cognitive tools for one such problem-solving task, enroute flight path planning for commercial airlines. Our goal is two-fold. First, we develop specific system designs to help with this important practical problem. Second, we use this context to explore general design concepts to guide the development of cooperative problem-solving systems. These design concepts are described.

  12. Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.

    2010-12-01

    Almost all Geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdf's) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized leading to efficiently-solved, (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many Geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically and incorporating any available prior information using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully-probabilistic global tomography model of the Earth's crust and mantle, and second inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.

  13. Possibilities of the free-complement methodology for solving the Schrödinger equation of atoms and molecules

    NASA Astrophysics Data System (ADS)

    Nakatsuji, Hiroshi

    Chemistry is a science of the complex subjects that occupy this universe and the biological world and that are composed of atoms and molecules. Its essence is diversity. Surprisingly, however, the whole of this science is governed by simple quantum principles such as the Schrödinger and Dirac equations. Therefore, if we can find a generally useful method of solving these quantum principles accurately, under fermionic and/or bosonic constraints and at reasonable speed, we can replace the somewhat empirical methodologies of this science with purely quantum-theoretical and computational logic. This is the purpose of our series of studies, called ``exact theory'' in our laboratory; some of our documents are cited below. The key idea was expressed as the free-complement (FC) theory (originally called ICI theory), which was introduced to solve the Schrödinger and Dirac equations analytically. For extending this methodology to larger systems, order-N methodologies are essential, but the antisymmetry constraints on electronic wave functions become a major obstacle. Recently, we have shown that the antisymmetry rule, or `dogma', can be much relaxed when our subjects are large molecular systems. In this talk, I present recent progress in our FC methodology. The purpose is to construct ``predictive quantum chemistry'' that is useful for chemical and physical research and development in institutes and industry.

  14. Bio-inspired computational heuristics to study Lane-Emden systems arising in astrophysics model.

    PubMed

    Ahmad, Iftikhar; Raja, Muhammad Asif Zahoor; Bilal, Muhammad; Ashraf, Farooq

    2016-01-01

    This study reports novel hybrid computational methods for the solution of the nonlinear singular Lane-Emden type differential equations arising in astrophysics models, exploiting the strength of unsupervised neural network models and stochastic optimization techniques. In the scheme, a neural network, part of the larger field of soft computing, is exploited for modelling the equation in an unsupervised manner. The proposed approximate solutions of the higher-order ordinary differential equation are calculated with the weights of neural networks trained with a genetic algorithm, and with pattern search hybridized with sequential quadratic programming for rapid local convergence. The results of the proposed solvers for the nonlinear singular systems are in good agreement with the standard solutions. Accuracy and convergence of the design schemes are demonstrated by statistical performance measures based on a sufficiently large number of independent runs.

  15. A nanocompartment system (Synthosome) designed for biotechnological applications.

    PubMed

    Nallani, Madhavan; Benito, Samantha; Onaca, Ozana; Graff, Alexandra; Lindemann, Marcus; Winterhalter, Mathias; Meier, Wolfgang; Schwaneberg, Ulrich

    2006-05-03

    A nanocompartment system based on two deletion mutants of the large channel protein FhuA (FhuA Delta1-129; FhuA Delta1-160) and an ABA triblock copolymer (PMOXA-PDMS-PMOXA) has been developed for putative biotechnological applications. FhuA is ideally suited for applications in biotechnology due to its monomeric structure, large pore diameter (39-46 Å elliptical cross-section) that ensures rapid compound flux, and solved crystallographic structure. Two areas of application were targeted as proof of principle: (A) selective product recovery in nanocompartments and (B) enzymatic conversion in nanocompartments. Selective recovery of negatively charged compounds has been achieved on the example of sulforhodamine B by using positively charged polylysine molecules as trap inside the nanocompartment. Conversion in nanocompartments has been achieved by 3,3',5,5'-tetramethylbenzidine oxidation employing horseradish peroxidase (HRP).

  16. Gyrodampers for large space structures

    NASA Technical Reports Server (NTRS)

    Aubrun, J. N.; Margulies, G.

    1979-01-01

    The problem of controlling the vibrations of large space structures by the use of actively augmented damping devices distributed throughout the structure is addressed. The gyrodamper, which consists of a set of single gimbal control moment gyros that are actively controlled to extract the structural vibratory energy through the local rotational deformations of the structure, is described and analyzed. Various linear and nonlinear dynamic simulations of gyrodamped beams are shown, including results on self-induced vibrations due to sensor noise and rotor imbalance. The complete nonlinear dynamic equations are included. The problem of designing and sizing a system of gyrodampers for a given structure, or extrapolating results for one gyrodamped structure to another, is solved in terms of scaling laws. Novel scaling laws for gyro systems are derived, based upon fundamental physical principles, and various examples are given.

  17. Aerospace engineering design by systematic decomposition and multilevel optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Barthelemy, J. F. M.; Giles, G. L.

    1984-01-01

    A method for systematic analysis and optimization of large engineering systems, by decomposition of a large task into a set of smaller subtasks that are solved concurrently, is described. The subtasks may be arranged in hierarchical levels. Analyses are carried out in each subtask using inputs received from other subtasks, and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by analysis of its sensitivity to the inputs received from other subtasks to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, which is still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization.

  18. A hypertext system that learns from user feedback

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie

    1994-01-01

    Retrieving specific information from large amounts of documentation is not an easy task. It could be facilitated if information relevant to the current problem-solving context could be automatically supplied to the user. As a first step towards this goal, we have developed an intelligent hypertext system called CID (Computer Integrated Documentation). Besides providing a hypertext interface for browsing large documents, the CID system automatically acquires and reuses the context in which previous searches were appropriate. This mechanism utilizes on-line user information requirements and relevance feedback either to reinforce current indexing in case of success or to generate new knowledge in case of failure. Thus, the user continually augments and refines the intelligence of the retrieval system. This allows the CID system to provide helpful responses, based on previous usage of the documentation, and to improve its performance over time. We successfully tested the CID system with users of the Space Station Freedom requirements documents. We are currently extending CID to other application domains (Space Shuttle operations documents, airplane maintenance manuals, and on-line training). We are also exploring the potential commercialization of this technique.

  19. Teleoperation with large time delay using a prevision system

    NASA Astrophysics Data System (ADS)

    Bergamasco, Massimo; De Paolis, Lucio; Ciancio, Stefano; Pinna, Sebastiano

    1997-12-01

    In teleoperation technology, various techniques have been proposed to alleviate the effects of time-delayed communication and to avoid instability of the system. This paper describes a different approach to robotic teleoperation with large time delay; a teleoperation system based on the teleprogramming paradigm has been developed with the intent of improving slave autonomy and decreasing the amount of information exchanged between the master and slave systems. The goal concept, specific to AI, has been used. To minimize the total task completion time, a prevision system called Merlino has been introduced, which anticipates the slave's choices by taking into account both the operator's actions and the information about the remote environment. When the environment changes, the prevision system makes it possible to determine whether the slave can achieve the goal; otherwise, Merlino signals a 'fail situation.' Some experiments have been carried out by means of an advanced human-machine interface with force feedback, designed at the PERCRO Laboratory of Scuola Superiore S. Anna, which gives a better sensation of presence in the remote environment.

  20. Seismic isolation device having charging function by a transducer

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Takashi; Miura, Nanako; Takahashi, Masaki

    2016-04-01

    In recent years, many base-isolated structures have been planned as part of seismic design, because they significantly suppress the vibration response in large earthquakes. To achieve greater safety, semi-active or active vibration control systems are installed in structures as earthquake countermeasures. Semi-active and active vibration control systems are more effective than passive vibration control systems at reducing vibration in large earthquakes; however, they cannot operate as required when the external power supply is cut off. To solve this energy-consumption problem, we propose a self-powered active seismic isolation floor that realizes active control using regenerated vibration energy, so the device requires no external energy to produce its control force. The purpose of this study is to propose this seismic isolation device with a charging function and to optimize the control system and passive elements, such as spring and damping coefficients, using a genetic algorithm. As a result, the optimized model shows better performance in terms of vibration reduction and electric power regeneration than the previous model. At the end of this paper, an experimental specimen of the proposed isolation device is shown.

  1. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
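
    SciPy's classical Lawson-Hanson NNLS illustrates the per-vector subproblem that the patented combinatorial algorithm accelerates when there are many observation vectors; a baseline sketch with assumed matrix sizes (the combinatorial reorganization itself, which groups columns sharing the same active set, is not reproduced here):

```python
# Baseline illustration: solve min ||A x - b||_2 with x >= 0 for each of
# many observation vectors, one at a time, via SciPy's Lawson-Hanson NNLS.
# The fast combinatorial algorithm targets exactly this many-column case.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
A = rng.random((50, 5))          # assumed mixing matrix
X_true = rng.random((5, 200))    # assumed nonnegative sources
B = A @ X_true                   # many observation vectors (columns)

X = np.column_stack([nnls(A, B[:, k])[0] for k in range(B.shape[1])])
print("max reconstruction error:", np.abs(X - X_true).max())
```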

  2. High-resolution inverse synthetic aperture radar imaging for large rotation angle targets based on segmented processing algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Zhang, Xinggan; Bai, Yechao; Tang, Lan

    2017-01-01

    In inverse synthetic aperture radar (ISAR) imaging, migration through resolution cells (MTRCs) occurs when the rotation angle of the moving target is large, degrading image resolution. To solve this problem, an ISAR imaging method based on segmented preprocessing is proposed. In this method, the echoes of the large rotating target are divided into several small segments, and every segment can generate a low-resolution image without MTRCs. Then, each low-resolution image is rotated back to the original position. After image registration and phase compensation, a high-resolution image can be obtained. Simulations and real experiments show that the proposed algorithm can deal with radar systems with different range and cross-range resolutions and significantly compensates for the MTRCs.

  3. The dynamics and control of large flexible space structures - 13

    NASA Technical Reports Server (NTRS)

    Bainum, Peter M.; Li, Feiyue; Xu, Jianke

    1990-01-01

    The optimal control of three-dimensional large angle maneuvers and vibrations of a Shuttle-mast-reflector system is considered. The nonlinear equations of motion are formulated by using Lagrange's formula, with the mast modeled as a continuous beam subject to three-dimensional deformations. Pontryagin's Maximum Principle is applied to the slewing problem, to derive the necessary conditions for the optimal controls, which are bounded by given saturation levels. The resulting two point boundary value problem is then solved by using the quasilinearization algorithm and the method of particular solutions. The study of the large angle maneuvering of the Shuttle-beam-reflector spacecraft in the plane of a circular earth orbit is extended to consider the effects of the structural offset connection, the axial shortening, and the gravitational torque on the slewing motion. Finally the effect of additional design parameters (such as related to additional payload requirement) on the linear quadratic regulator based design of an orbiting control/structural system is examined.

  4. Large Capacity SMES for Voltage Dip Compensation

    NASA Astrophysics Data System (ADS)

    Iwatani, Yu; Saito, Fusao; Ito, Toshinobu; Shimada, Mamoru; Ishida, Satoshi; Shimanuki, Yoshio

    Voltage dips of power grids due to lightning strikes, snow damage, and so on cause serious damage to the production lines of precision instruments, for example semiconductors. In recent years, uninterruptible power supply (UPS) systems have been used to solve this problem. A UPS, however, has small capacity, so a great number of them are needed in large factories. Therefore, we have manufactured a superconducting magnetic energy storage (SMES) system for voltage dip compensation that can collectively protect large-capacity loads. SMES has advantages such as space savings, long lifetime and others. In field tests, in cooperation with CHUBU Electric Power Co., Inc., we proved that SMES is valuable for compensating voltage dips. Since 2007, a 10 MVA SMES improved from the field test machines has been running in a domestic liquid crystal display plant, and in 2008 it protected plant loads from a number of voltage dips. In this paper, we report the operating principle and components of the improved SMES for voltage dip compensation, and examples of waveforms recorded when the 10 MVA SMES compensated voltage dips.

  5. Chosen interval methods for solving linear interval systems with special type of matrix

    NASA Astrophysics Data System (ADS)

    Szyszka, Barbara

    2013-10-01

    The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix: a band matrix with a parameter, obtained from a finite difference problem. Such linear systems occur when solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) using the second-order central difference interval method. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; the presented linear interval systems therefore contain elements that determine the errors of the difference method. The chosen direct algorithms are applied for solving the linear systems because they introduce no error of method. All calculations were performed in floating-point interval arithmetic.
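
    A toy sketch of the enclosure idea: naive interval arithmetic (without the outward rounding a rigorous implementation needs) and interval Gaussian elimination on an assumed 2x2 system; this illustrates the concept only, not the paper's band-matrix algorithm:

```python
# Toy interval arithmetic and interval Gaussian elimination; directed
# (outward) rounding is omitted for brevity, so enclosures here are only
# approximate. Illustrative sketch, not the paper's method.
from dataclasses import dataclass

@dataclass
class I:
    lo: float
    hi: float
    def __add__(self, o): return I(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return I(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
        return I(min(p), max(p))
    def __truediv__(self, o):
        assert o.lo > 0 or o.hi < 0, "division by interval containing 0"
        return self * I(1/o.hi, 1/o.lo)

def solve2(a11, a12, a21, a22, b1, b2):
    """Interval Gaussian elimination without pivoting on a 2x2 system."""
    m = a21 / a11
    x2 = (b2 - m * b1) / (a22 - m * a12)
    x1 = (b1 - a12 * x2) / a11
    return x1, x2

# [4,4.1] x + [1,1.1] y = 1,   x + [5,5.1] y = 2
x, y = solve2(I(4, 4.1), I(1, 1.1), I(1, 1), I(5, 5.1), I(1, 1), I(2, 2))
print(x, y)   # intervals containing every point solution of the family
```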

  6. Studies of Verbal Problem Solving. 1. Two Performance-Aiding Programs

    DTIC Science & Technology

    1977-09-01

    Only fragments of this record survive extraction, including a contents entry, "A Mystery Solved by Propositional Calculus," and an introduction passage stating that for certain intellectual tasks, such as estimating and combining probabilities, people "get the right answer within 15 minutes or so (about 80% of one large psychology class solved it)."

  7. Computed torque control of a free-flying cooperating-arm robot

    NASA Technical Reports Server (NTRS)

    Koningstein, Ross; Ullman, Marc; Cannon, Robert H., Jr.

    1989-01-01

    The unified approach to solving free-floating space robot manipulator end-point control problems is presented using a control formulation based on an extension of computed torque. Once the desired end-point accelerations have been specified, the kinematic equations are used with momentum conservation equations to solve for the joint accelerations in any of the robot's possible configurations: fixed base or free-flying with open/closed chain grasp. The joint accelerations can then be used to calculate the arm control torques and internal forces using a recursive order N algorithm. Initial experimental verification of these techniques has been performed using a laboratory model of a two-armed space robot. This fully autonomous spacecraft system experiences the drag-free, zero G characteristics of space in two dimensions through the use of an air cushion support system. Results of these initial experiments are included which validate the correctness of the proposed methodology. The further problem of control in the large where not only the manipulator tip positions but the entire system consisting of base and arms must be controlled is also presented. The availability of a physical testbed has brought a keener insight into the subtleties of the problem at hand.

  8. A Python Program for Solving Schrödinger's Equation in Undergraduate Physical Chemistry

    ERIC Educational Resources Information Center

    Srnec, Matthew N.; Upadhyay, Shiv; Madura, Jeffry D.

    2017-01-01

    In undergraduate physical chemistry, Schrödinger's equation is solved for a variety of cases. In doing so, the energies and wave functions of the system can be interpreted to provide connections with the physical system being studied. Solving this equation by hand for a one-dimensional system is a manageable task, but it becomes time-consuming…
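
    In the same spirit, here is a minimal sketch of such an exercise: the particle-in-a-box Schrödinger equation discretized by finite differences and solved as a matrix eigenvalue problem. The setup is assumed for illustration; this is not the program the paper distributes:

```python
# Solve -(1/2) psi'' = E psi on (0, L) with psi(0) = psi(L) = 0
# (atomic units: hbar = m = 1) via a tridiagonal finite-difference matrix.
import numpy as np

n, L = 500, 1.0
h = L / (n + 1)

# Hamiltonian: diagonal 1/h^2, off-diagonals -1/(2 h^2)
H = (np.diag(np.full(n, 1.0 / h**2))
     - np.diag(np.full(n - 1, 0.5 / h**2), 1)
     - np.diag(np.full(n - 1, 0.5 / h**2), -1))

E, psi = np.linalg.eigh(H)               # energies and wave functions
exact = np.array([(k * np.pi / L)**2 / 2 for k in range(1, 4)])
print("numeric:", E[:3])                 # compare with E_k = k^2 pi^2 / 2L^2
print("exact:  ", exact)
```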

  9. A General Architecture for Intelligent Tutoring of Diagnostic Classification Problem Solving

    PubMed Central

    Crowley, Rebecca S.; Medvedeva, Olga

    2003-01-01

    We report on a general architecture for creating knowledge-based medical training systems to teach diagnostic classification problem solving. The approach is informed by our previous work describing the development of expertise in classification problem solving in Pathology. The architecture envelops the traditional Intelligent Tutoring System design within the Unified Problem-solving Method description Language (UPML) architecture, supporting component modularity and reuse. Based on the domain ontology, domain task ontology and case data, the abstract problem-solving methods of the expert model create a dynamic solution graph. Student interaction with the solution graph is filtered through an instructional layer, which is created by a second set of abstract problem-solving methods and pedagogic ontologies, in response to the current state of the student model. We outline the advantages and limitations of this general approach, and describe its implementation in SlideTutor, a developing Intelligent Tutoring System in Dermatopathology. PMID:14728159

  10. Student Modeling Based on Problem Solving Times

    ERIC Educational Resources Information Center

    Pelánek, Radek; Jarušek, Petr

    2015-01-01

    Student modeling in intelligent tutoring systems is mostly concerned with modeling correctness of students' answers. As interactive problem solving activities become increasingly common in educational systems, it is useful to focus also on timing information associated with problem solving. We argue that the focus on timing is natural for certain…

  11. Power oscillation suppression by robust SMES in power system with large wind power penetration

    NASA Astrophysics Data System (ADS)

    Ngamroo, Issarachai; Cuk Supriyadi, A. N.; Dechanupaprittha, Sanchai; Mitani, Yasunori

    2009-01-01

    The large penetration of wind farms into interconnected power systems may cause severe tie-line power oscillations. To suppress these oscillations, superconducting magnetic energy storage (SMES), which is able to control active and reactive power simultaneously, can be applied. On the other hand, various generating and loading conditions, variation of system parameters, etc., cause uncertainties in the system. An SMES controller designed without considering these uncertainties may fail to suppress power oscillations. To enhance the robustness of the SMES controller against system uncertainties, this paper proposes a robust control design of SMES that takes system uncertainties into account. The inverse additive perturbation is applied to represent the unstructured system uncertainties and is included in the power system model. The active and reactive power controllers are configured as first-order lead-lag compensators with single-input feedback. To tune the controller parameters, an optimization problem is formulated based on the enhancement of the robust stability margin, and particle swarm optimization is used to solve it and obtain the controller parameters. Simulation studies in a six-area interconnected power system with wind farms confirm the robustness of the proposed SMES under various operating conditions.
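
    The tuning loop itself is generic: each particle encodes candidate compensator parameters (gain K and time constants T1, T2 of K(1 + T1 s)/(1 + T2 s)), and the fitness would be the robust stability margin computed from the system model. A minimal PSO skeleton under these assumptions, with a stand-in analytic cost in place of the power-system simulation, might look as follows:

        import numpy as np

        def pso(cost, lb, ub, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
            """Minimal particle swarm optimizer over box bounds lb <= x <= ub."""
            rng = np.random.default_rng(0)
            x = rng.uniform(lb, ub, (n_particles, len(lb)))   # positions
            v = np.zeros_like(x)                               # velocities
            pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
            g = pbest[pbest_f.argmin()].copy()                 # global best
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lb, ub)
                f = np.array([cost(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                g = pbest[pbest_f.argmin()].copy()
            return g, pbest_f.min()

        # Hypothetical cost: negative robust-stability margin of the lead-lag
        # compensator; here replaced by a stand-in analytic function.
        cost = lambda p: (p[0] - 2) ** 2 + (p[1] - 0.5) ** 2 + (p[2] - 0.1) ** 2
        (K, T1, T2), fmin = pso(cost, lb=np.array([0, 0.01, 0.01]),
                                ub=np.array([10, 5, 5]))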

  12. Solving Fuzzy Optimization Problem Using Hybrid Ls-Sa Method

    NASA Astrophysics Data System (ADS)

    Vasant, Pandian

    2011-06-01

    Fuzzy optimization has been one of the most prominent topics in the broad area of computational intelligence. It is especially relevant in the field of fuzzy non-linear programming, and its applications and practical realizations can be seen in many real-world problems. In this paper a large scale non-linear fuzzy programming problem is solved by hybrid optimization techniques combining Line Search (LS), Simulated Annealing (SA) and Pattern Search (PS). An industrial production planning problem with a cubic objective function, 8 decision variables and 29 constraints has been solved successfully using the LS-SA-PS hybrid optimization technique. The computational results for the objective function with respect to the vagueness factor and the level of satisfaction are provided in the form of 2D and 3D plots. The outcome is very promising and strongly suggests that the hybrid LS-SA-PS algorithm is efficient and productive in solving large scale non-linear fuzzy programming problems.

  13. Ontological Problem-Solving Framework for Dynamically Configuring Sensor Systems and Algorithms

    PubMed Central

    Qualls, Joseph; Russomanno, David J.

    2011-01-01

    The deployment of ubiquitous sensor systems and algorithms has led to many challenges, such as matching sensor systems to compatible algorithms which are capable of satisfying a task. Compounding the challenges is the lack of the requisite knowledge models needed to discover sensors and algorithms and to subsequently integrate their capabilities to satisfy a specific task. A novel ontological problem-solving framework has been designed to match sensors to compatible algorithms to form synthesized systems, which are capable of satisfying a task and then assigning the synthesized systems to high-level missions. The approach designed for the ontological problem-solving framework has been instantiated in the context of a persistent surveillance prototype environment, which includes profiling sensor systems and algorithms to demonstrate proof-of-concept principles. Even though the problem-solving approach was instantiated with profiling sensor systems and algorithms, the ontological framework may be useful with other heterogeneous sensing-system environments. PMID:22163793

  14. On the unreasonable effectiveness of the post-Newtonian approximation in gravitational physics

    PubMed Central

    Will, Clifford M.

    2011-01-01

    The post-Newtonian approximation is a method for solving Einstein’s field equations for physical systems in which motions are slow compared to the speed of light and where gravitational fields are weak. Yet it has proven to be remarkably effective in describing certain strong-field, fast-motion systems, including binary pulsars containing dense neutron stars and binary black hole systems inspiraling toward a final merger. The reasons for this effectiveness are largely unknown. When carried to high orders in the post-Newtonian sequence, predictions for the gravitational-wave signal from inspiraling compact binaries will play a key role in gravitational-wave detection by laser-interferometric observatories. PMID:21447714

  15. Visualization of the influence of the air conditioning system to the high-power laser beam quality with the modulation coherent imaging method.

    PubMed

    Tao, Hua; Veetil, Suhas P; Pan, Xingchen; Liu, Cheng; Zhu, Jianqiang

    2015-08-01

    Air conditioning systems can lead to dynamic phase changes in the laser beams of high-power laser facilities for inertial confinement fusion, and this kind of phase change cannot be measured by the commonly employed Hartmann wavefront sensors or interferometry due to uncontrollable factors, such as overly large laser beam diameters and the limited space of the facility. It is demonstrated that this problem can be solved using a scheme based on modulation coherent imaging, and thus the influence of the air conditioning system on the performance of the high-power facility can be evaluated directly.

  16. Sustainer electric propulsion system application for spacecraft attitude control

    NASA Astrophysics Data System (ADS)

    Obukhov, V. A.; Pokryshkin, A. I.; Popov, G. A.; Yashina, N. V.

    2010-07-01

    Application of an electric propulsion system (EPS) requires equipping the spacecraft (SC) with large solar panels (SP) to supply power to the electric thrusters. This makes the problem of controlling an EPS-equipped SC at the insertion stage more difficult to solve than for an SC equipped with chemical engines: in addition to the attitude control associated with the mission, the SP must be kept oriented toward the Sun to generate electric power sufficient for the operation of the service systems, the purpose-oriented equipment, and the EPS. The theoretical study of this control problem is most interesting for a non-coplanar transfer from a high elliptic orbit (HEO) to geostationary orbit (GSO).

  17. Determination of optimum allocation and pricing of distributed generation using genetic algorithm methodology

    NASA Astrophysics Data System (ADS)

    Mwakabuta, Ndaga Stanslaus

    Electric power distribution systems play a significant role in providing continuous and "quality" electrical energy to different classes of customers. In the context of the present restrictions on transmission system expansions and the new paradigm of "open and shared" infrastructure, new approaches to distribution system analyses and to economic and operational decision-making need investigation. This dissertation includes three layers of distribution system investigations. At the basic level, improved linear models are shown to offer significant advantages over previous models for advanced analysis. At the intermediate level, the improved model is applied to solve the traditional problem of operating cost minimization using capacitors and voltage regulators. At the advanced level, an artificial intelligence technique is applied to minimize cost under Distributed Generation injection from private vendors. Soft computing techniques are finding increasing application in solving optimization problems in large and complex practical systems. The dissertation focuses on the Genetic Algorithm for investigating the economic aspects of distributed generation penetration without compromising the operational security of the distribution system. The work presents a methodology for determining the optimal pricing of distributed generation that would help utilities decide how to operate their systems economically. This would enable modular and flexible investments that have real benefits to the electric distribution system. Improved reliability for both customers and the distribution system in general, reduced environmental impacts, increased efficiency of energy use, and reduced costs of energy services are some of the advantages.

  18. Design of sewage treatment system by applying fuzzy adaptive PID controller

    NASA Astrophysics Data System (ADS)

    Jin, Liang-Ping; Li, Hong-Chan

    2013-03-01

    In sewage treatment systems, dissolved oxygen concentration control is nonlinear and time-varying, with large time delays and uncertainty, so an exact mathematical model is difficult to establish. A conventional PID controller works well only in a nearly linear region close to its operating point, and system control is difficult to realize when the operating point moves far away. To solve these problems, this paper proposes a method that combines fuzzy control with PID control and designs a fuzzy adaptive PID controller based on a Siemens S7-300 PLC. It employs fuzzy inference to tune the PID parameters online. Simulation and practical application of the control algorithm show that the system has stronger robustness and better adaptability.
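
    As a minimal sketch of the idea (not the S7-300 implementation), coarse fuzzy-style rules on the error and its rate can scale base PID gains online; all thresholds and gains below are hypothetical:

        # Fuzzy-adaptive PID loop: crude "membership" tests adjust the gains.
        def fuzzy_gains(e, de, base=(2.0, 0.5, 0.1)):
            kp, ki, kd = base
            big = abs(e) > 1.0             # "error is big"
            fast = abs(de) > 0.5           # "error is changing fast"
            if big:  kp *= 1.5; ki *= 0.5  # push harder, damp integral wind-up
            if fast: kd *= 1.5             # add derivative action
            return kp, ki, kd

        def pid_step(e, de, state, dt=1.0):
            kp, ki, kd = fuzzy_gains(e, de)
            state['i'] += e * dt           # integral term accumulates
            return kp * e + ki * state['i'] + kd * de

        state = {'i': 0.0}
        u = pid_step(e=1.2, de=-0.3, state=state)   # one control update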

  19. A novel dynamic wavelength bandwidth allocation scheme over OFDMA PONs

    NASA Astrophysics Data System (ADS)

    Yan, Bo; Guo, Wei; Jin, Yaohui; Hu, Weisheng

    2011-12-01

    With the rapid growth of Internet applications, supporting differentiated services and enlarging system capacity have become key tasks for next generation access systems. In recent years, research on OFDMA Passive Optical Networks (PONs) has developed rapidly owing to their large capacity and scheduling flexibility. Although much work has been done to overcome hardware-layer obstacles for OFDMA PON, scheduling algorithms for OFDMA PON systems are still at an early stage of discussion. To support QoS on an OFDMA PON system, a novel dynamic wavelength bandwidth allocation (DWBA) algorithm is proposed in this paper, supporting per-stream QoS. Through simulation, we show that our bandwidth allocation algorithm performs better in bandwidth utilization and differentiated service support.

  20. New approach to study mobility in the vicinity of dynamical arrest; exact application to a kinetically constrained model

    NASA Astrophysics Data System (ADS)

    DeGregorio, P.; Lawlor, A.; Dawson, K. A.

    2006-04-01

    We introduce a new method to describe systems in the vicinity of dynamical arrest. This involves a map that transforms mobile systems at one length scale to mobile systems at a longer length scale. This map is capable of capturing the singular behavior accrued across very large length scales, and provides a direct route to the dynamical correlation length and other related quantities. The ideas are immediately applicable in two spatial dimensions, and have been applied to a modified Kob-Andersen type model. For such systems the map may be derived in an exact form, and readily solved numerically. We obtain the asymptotic behavior across the whole physical domain of interest in dynamical arrest.

  1. Enroute flight planning: Evaluating design concepts for the development of cooperative problem-solving systems

    NASA Technical Reports Server (NTRS)

    Smith, Philip J.

    1995-01-01

    There are many problem-solving tasks that are too complex to fully automate given the current state of technology. Nevertheless, significant improvements in overall system performance could result from the introduction of well-designed computer aids. We have been studying the development of cognitive tools for one such problem-solving task, enroute flight path planning for commercial airlines. Our goal has been two-fold. First, we have been developing specific system designs to help with this important practical problem. Second, we have been using this context to explore general design concepts to guide in the development of cooperative problem-solving systems. These design concepts are described below, along with illustrations of their application.

  2. An evaluation of a geographic information system software and its utility in promoting the use of integrated process skills in secondary students

    NASA Astrophysics Data System (ADS)

    Abbott, Thomas Diamond

    2001-07-01

    As technology continues to become an integral part of our educational system, research that clarifies how various technologies affect learning should be available to educators prior to the large scale introduction of any new technology into the classroom. This study will assess the degree to which a relatively new Geographic Information System software (ArcView 3.1), when utilized by high school freshmen in earth science and geography courses, can be used to (a) promote and develop integrated process skills in these students, and (b) improve their awareness and appraisal of their problem solving abilities. Two research questions will be addressed in this research: (1) Will the use of a GIS to solve problems with authentic contexts enhance the learning and refinement of integrated process skills over more conventional means of classroom instruction? and (2) Will students' perceptions of competence to solve problems within authentic contexts be greater for those who learned to use and implement a GIS when compared to those who have learned by more conventional means of classroom instruction? Research Question 1 will be assessed by using the Test of Integrated Process Skills II (TIPS II) and Research Question 2 will be addressed by using the Problem Solving Inventory (PSI). The research will last thirteen weeks. The TIPS II and the PSI will be administered after the intervention of GIS in the experimental group, at which point an Analysis of Covariance and the Mann-Whitney U-test will be utilized to measure the effects of intervention by the independent variable. Teacher/researcher journals and teacher/student questionnaires will be used to complement the statistical analysis. It is hoped that this study will help in the creation of future instructional models that enable educators to utilize modern technologies appropriately in their classrooms.

  3. Statistical mechanics of complex neural systems and high dimensional data

    NASA Astrophysics Data System (ADS)

    Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya

    2013-03-01

    Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.

  4. Reduced description of reactive flows with tabulation of chemistry

    NASA Astrophysics Data System (ADS)

    Ren, Zhuyin; Goldin, Graham M.; Hiremath, Varun; Pope, Stephen B.

    2011-12-01

    The direct use of large chemical mechanisms in multi-dimensional Computational Fluid Dynamics (CFD) is computationally expensive due to the large number of chemical species and the wide range of chemical time scales involved. To meet this challenge, a reduced description of reactive flows in combination with chemistry tabulation is proposed to effectively reduce the computational cost. In the reduced description, the species are partitioned into represented species and unrepresented species; the reactive system is described in terms of a smaller number of represented species instead of the full set of chemical species in the mechanism; and the evolution equations are solved only for the represented species. When required, the unrepresented species are reconstructed assuming that they are in constrained chemical equilibrium. In situ adaptive tabulation (ISAT) is employed to speed the chemistry calculation through tabulating information of the reduced system. The proposed dimension-reduction / tabulation methodology determines and tabulates in situ the necessary information of the nr-dimensional reduced system based on the ns-species detailed mechanism. Compared to the full description with ISAT, the reduced descriptions achieve additional computational speed-up by solving fewer transport equations and faster ISAT retrieving. The approach is validated in both a methane/air premixed flame and a methane/air non-premixed flame. With the GRI 1.2 mechanism consisting of 31 species, the reduced descriptions (with 12 to 16 represented species) achieve a speed-up factor of up to three compared to the full description with ISAT, with a relatively moderate decrease in accuracy compared to the full description.

  5. Cutting planes for the multistage stochastic unit commitment problem

    DOE PAGES

    Jiang, Ruiwei; Guan, Yongpei; Watson, Jean -Paul

    2016-04-20

    As renewable energy penetration rates continue to increase in power systems worldwide, new challenges arise for system operators in both regulated and deregulated electricity markets to solve the security-constrained coal-fired unit commitment problem with intermittent generation (due to renewables) and uncertain load, in order to ensure system reliability and maintain cost effectiveness. In this paper, we study a security-constrained coal-fired stochastic unit commitment model, which we use to enhance the reliability unit commitment process for day-ahead power system operations. In our approach, we first develop a deterministic equivalent formulation for the problem, which leads to a large-scale mixed-integer linear program. Then, we verify that the turn on/off inequalities provide a convex hull representation of the minimum-up/down time polytope under the stochastic setting. Next, we develop several families of strong valid inequalities mainly through lifting schemes. In particular, by exploring sequence independent lifting and subadditive approximation lifting properties for the lifting schemes, we obtain strong valid inequalities for the ramping and general load balance polytopes. Lastly, branch-and-cut algorithms are developed to employ these valid inequalities as cutting planes to solve the problem. Our computational results verify the effectiveness of the proposed approach.

  7. u-w formulation for dynamic problems in large deformation regime solved through an implicit meshfree scheme

    NASA Astrophysics Data System (ADS)

    Navas, Pedro; Sanavia, Lorenzo; López-Querol, Susana; Yu, Rena C.

    2017-12-01

    Solving dynamic problems for fluid-saturated porous media in the large deformation regime is an interesting but complex issue. An implicit time integration scheme is herein developed within the framework of the u-w (solid displacement-relative fluid displacement) formulation of Biot's equations. In particular, liquid-water-saturated porous media are considered, and the linearization of the linear momentum equations taking into account all the inertia terms for both the solid and fluid phases is presented for the first time. The spatial discretization is carried out through a meshfree method in which the shape functions are based on the principle of local maximum entropy (LME). The methodology is first validated against the dynamic consolidation of a soil column and the plastic shear band formation in a square domain loaded by a rigid footing. The feasibility of this new numerical approach for solving large deformation dynamic problems is finally demonstrated through its application to an embankment problem subjected to an earthquake.

  8. An extended basis inexact shift-invert Lanczos for the efficient solution of large-scale generalized eigenproblems

    NASA Astrophysics Data System (ADS)

    Rewieński, M.; Lamecki, A.; Mrozowski, M.

    2013-09-01

    This paper proposes a technique, based on the Inexact Shift-Invert Lanczos (ISIL) method with Inexact Jacobi Orthogonal Component Correction (IJOCC) refinement, and a preconditioned conjugate-gradient (PCG) linear solver with multilevel preconditioner, for finding several eigenvalues for generalized symmetric eigenproblems. Several eigenvalues are found by constructing (with the ISIL process) an extended projection basis. Presented results of numerical experiments confirm the technique can be effectively applied to challenging, large-scale problems characterized by very dense spectra, such as resonant cavities with spatial dimensions which are large with respect to wavelengths of the resonating electromagnetic fields. It is also shown that the proposed scheme based on inexact linear solves delivers superior performance, as compared to methods which rely on exact linear solves, indicating tremendous potential of the 'inexact solve' concept. Finally, the scheme which generates an extended projection basis is found to provide a cost-efficient alternative to classical deflation schemes when several eigenvalues are computed.
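
    The shift-invert idea at the core of such solvers is readily illustrated with SciPy's Lanczos-type eigensolver, which, given a shift sigma, works internally with (A - sigma*B)^-1 so that eigenvalues nearest the shift converge first; the paper's inexact solves, IJOCC refinement and extended projection basis go well beyond this baseline. A toy generalized eigenproblem:

        import scipy.sparse as sp
        from scipy.sparse.linalg import eigsh

        # Hypothetical generalized symmetric eigenproblem A x = lambda B x.
        n = 2000
        A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csc')
        B = sp.identity(n, format='csc')

        # Shift-invert about sigma; eigenvalues nearest the shift converge first.
        vals, vecs = eigsh(A, k=6, M=B, sigma=0.5, which='LM')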

  9. Closed-form Static Analysis with Inertia Relief and Displacement-Dependent Loads Using a MSC/NASTRAN DMAP Alter

    NASA Technical Reports Server (NTRS)

    Barnett, Alan R.; Widrick, Timothy W.; Ludwiczak, Damian R.

    1995-01-01

    Solving for the displacements of free-free coupled systems acted upon by static loads is commonly performed throughout the aerospace industry. Many times, these problems are solved using static analysis with inertia relief. This solution technique allows for a free-free static analysis by balancing the applied loads with inertia loads generated by the applied loads. For some engineering applications, the displacements of the free-free coupled system induce additional static loads. Hence, the applied loads are equal to the original loads plus displacement-dependent loads. Solving for the final displacements of such systems is commonly performed using iterative solution techniques. Unfortunately, these techniques can be time-consuming and labor-intensive. Since the coupled system equations for free-free systems with displacement-dependent loads can be written in closed-form, it is advantageous to solve for the displacements in this manner. Implementing closed-form equations in static analysis with inertia relief is analogous to implementing transfer functions in dynamic analysis. Using a MSC/NASTRAN DMAP Alter, displacement-dependent loads have been included in static analysis with inertia relief. Such an Alter has been used successfully to solve efficiently a common aerospace problem typically solved using an iterative technique.

  10. Fast optimal wavefront reconstruction for multi-conjugate adaptive optics using the Fourier domain preconditioned conjugate gradient algorithm.

    PubMed

    Vogel, Curtis R; Yang, Qiang

    2006-08-21

    We present two different implementations of the Fourier domain preconditioned conjugate gradient algorithm (FD-PCG) to efficiently solve the large structured linear systems that arise in optimal volume turbulence estimation, or tomography, for multi-conjugate adaptive optics (MCAO). We describe how to deal with several critical technical issues, including the cone coordinate transformation problem and sensor subaperture grid spacing. We also extend the FD-PCG approach to handle the deformable mirror fitting problem for MCAO.
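
    The essence of FD-PCG is to precondition CG with an operator that is diagonal in the Fourier domain, so each preconditioner application costs only two FFTs. A toy 1D analogue (hypothetical operator, not the MCAO tomography system):

        import numpy as np
        from scipy.sparse.linalg import cg, LinearOperator

        # Toy problem: A is a periodic convolution plus a small SPD perturbation.
        n = 256
        kernel = np.zeros(n); kernel[0], kernel[1], kernel[-1] = 2.2, -1.0, -1.0
        k_hat = np.fft.fft(kernel)           # eigenvalues of the circulant part

        def apply_A(x):
            conv = np.real(np.fft.ifft(k_hat * np.fft.fft(x)))
            return conv + 0.01 * (np.arange(n) / n) * x    # diagonal perturbation

        # Fourier-domain preconditioner: invert only the circulant part via FFTs.
        def apply_M(r):
            return np.real(np.fft.ifft(np.fft.fft(r) / k_hat))

        A = LinearOperator((n, n), matvec=apply_A)
        M = LinearOperator((n, n), matvec=apply_M)
        b = np.random.default_rng(1).standard_normal(n)
        x, info = cg(A, b, M=M)              # converges in few iterations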

  11. Design automation of load-bearing arched structures of roofs of tall buildings

    NASA Astrophysics Data System (ADS)

    Kulikov, Vladimir

    2018-03-01

    The article considers aspects of the possible use of arched roofs in the construction of skyscrapers. Tall buildings experience large loads from various environmental factors, and skyscrapers are subject to various and complex types of deformation of their structural elements. The paper discusses issues related to the aerodynamics of various structural elements of tall buildings. A technique for solving the governing systems of state equations by Simpson's method is described. The article also describes the optimization of the geometric parameters of the load-bearing elements of the arched roofs of skyscrapers.

  12. Recent update of the RPLUS2D/3D codes

    NASA Technical Reports Server (NTRS)

    Tsai, Y.-L. Peter

    1991-01-01

    The development of the RPLUS2D/3D codes is summarized. These codes utilize LU algorithms to solve chemical non-equilibrium flows in a body-fitted coordinate system. The motivation behind the development of these codes is the need to numerically predict chemical non-equilibrium flows for the National AeroSpace Plane Program. Recent improvements include vectorization method, blocking algorithms for geometric flexibility, out-of-core storage for large-size problems, and an LU-SW/UP combination for CPU-time efficiency and solution quality.

  13. Matrix-Free Polynomial-Based Nonlinear Least Squares Optimized Preconditioning and its Application to Discontinuous Galerkin Discretizations of the Euler Equations

    DTIC Science & Technology

    2015-06-01

    ...efficient parallel code for applying the operator. Our method constructs a polynomial preconditioner using a nonlinear least squares (NLLS) algorithm. We show...apply the underlying operator. Such a preconditioner can be very attractive in scenarios where one has a highly efficient parallel code for applying...repeatedly solve a large system of linear equations where one has an extremely fast parallel code for applying an underlying fixed linear operator

  14. Newton-like methods for Navier-Stokes solution

    NASA Astrophysics Data System (ADS)

    Qin, N.; Xu, X.; Richards, B. E.

    1992-12-01

    The paper reports on Newton-like methods, SFDN-alpha-GMRES and SQN-alpha-GMRES, that have been devised and proven as powerful schemes for large nonlinear problems typical of viscous compressible Navier-Stokes solutions. They can be applied starting from a partially converged solution obtained by a conventional explicit or approximate implicit method. Developments have included the efficient parallelization of the schemes on a distributed memory parallel computer. The methods are illustrated using a RISC workstation and a transputer parallel system, respectively, to solve a hypersonic vortical flow.
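
    A Jacobian-free Newton-GMRES driver in this spirit is available in SciPy; the sketch below applies it to a small nonlinear boundary-value problem standing in, very loosely, for a Navier-Stokes residual:

        import numpy as np
        from scipy.optimize import newton_krylov

        # Toy nonlinear system: u'' = exp(u) on a grid with u = 0 at both ends.
        n = 100
        h = 1.0 / (n + 1)

        def residual(u):
            r = np.empty_like(u)
            r[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2 - np.exp(u[1:-1])
            r[0] = (u[1] - 2 * u[0]) / h**2 - np.exp(u[0])
            r[-1] = (u[-2] - 2 * u[-1]) / h**2 - np.exp(u[-1])
            return r

        # Newton iterations with GMRES inner solves; the Jacobian is never formed.
        u0 = np.zeros(n)              # a "partially converged" starting guess
        u = newton_krylov(residual, u0, method='gmres', f_tol=1e-10)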

  15. Implementation and Performance Issues in Collaborative Optimization

    NASA Technical Reports Server (NTRS)

    Braun, Robert; Gage, Peter; Kroo, Ilan; Sobieski, Ian

    1996-01-01

    Collaborative optimization is a multidisciplinary design architecture that is well-suited to large-scale multidisciplinary optimization problems. This paper compares this approach with other architectures, examines the details of the formulation, and some aspects of its performance. A particular version of the architecture is proposed to better accommodate the occurrence of multiple feasible regions. The use of system level inequality constraints is shown to increase the convergence rate. A series of simple test problems, demonstrated to challenge related optimization architectures, is successfully solved with collaborative optimization.

  16. The Next Frontier in Computing

    ScienceCinema

    Sarrao, John

    2018-06-13

    Exascale computing refers to computing systems capable of at least one exaflop, a billion billion (10^18) calculations per second. That is 50 times faster than the most powerful supercomputers being used today and represents a thousand-fold increase over the first petascale computer, which came into operation in 2008. How we use these large-scale simulation resources is the key to solving some of today's most pressing problems, including clean energy production, nuclear reactor lifetime extension and nuclear stockpile aging.

  17. Advanced Artificial Intelligence Technology Testbed

    NASA Technical Reports Server (NTRS)

    Anken, Craig S.

    1993-01-01

    The Advanced Artificial Intelligence Technology Testbed (AAITT) is a laboratory testbed for the design, analysis, integration, evaluation, and exercising of large-scale, complex, software systems, composed of both knowledge-based and conventional components. The AAITT assists its users in the following ways: configuring various problem-solving application suites; observing and measuring the behavior of these applications and the interactions between their constituent modules; gathering and analyzing statistics about the occurrence of key events; and flexibly and quickly altering the interaction of modules within the applications for further study.

  18. Solving the BM Camelopardalis puzzle

    NASA Technical Reports Server (NTRS)

    Teke, Mathias; Busby, Michael R.; Hall, Douglas S.

    1989-01-01

    BM Camelopardalis (=12 Cam) is a chromospherically active binary star with a relatively large orbital eccentricity. Systems with large eccentricities usually rotate pseudosynchronously. However, BM Cam has been a puzzle, since its observed rotation period is virtually equal to its orbital period, indicating synchronization. All available photometry data for BM Cam have been collected and analyzed. Two models of the modulated ellipticity effect are proposed, one based on equilibrium tidal deformation of the primary star and the other on a dynamical tidal effect. When the starspot variability is removed from the data, the dynamical tidal model is the better approximation to the real physical situation. The analysis indicates that BM Cam is not rotating pseudosynchronously but rotating in virtual synchronism after all.

  19. Adaptive-Grid Methods for Phase Field Models of Microstructure Development

    NASA Technical Reports Server (NTRS)

    Provatas, Nikolas; Goldenfeld, Nigel; Dantzig, Jonathan A.

    1999-01-01

    In this work the authors show how the phase field model can be solved in a computationally efficient manner that opens a new large-scale simulational window on solidification physics. Our method uses a finite element, adaptive-grid formulation, and exploits the fact that the phase and temperature fields vary significantly only near the interface. We illustrate how our method allows efficient simulation of phase-field models in very large systems, and verify the predictions of solvability theory at intermediate undercooling. We then present new results at low undercoolings that suggest that solvability theory may not give the correct tip speed in that regime. We model solidification using the phase-field model used by Karma and Rappel.

  20. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.

  1. An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Gutknecht, Martin H.; Nachtigal, Noel M.

    1991-01-01

    The nonsymmetric Lanczos method can be used to compute eigenvalues of large sparse non-Hermitian matrices or to solve large sparse non-Hermitian linear systems. However, the original Lanczos algorithm is susceptible to possible breakdowns and potential instabilities. An implementation is presented of a look-ahead version of the Lanczos algorithm that, except for the very special situation of an incurable breakdown, overcomes these problems by skipping over those steps in which a breakdown or near-breakdown would occur in the standard process. The proposed algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products as the standard Lanczos process without look-ahead.

  2. Advanced Computational Methods for Security Constrained Financial Transmission Rights: Structure and Parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elbert, Stephen T.; Kalsi, Karanjit; Vlachopoulou, Maria

    Financial Transmission Rights (FTRs) help power market participants reduce price risks associated with transmission congestion. FTRs are issued based on a process of solving a constrained optimization problem with the objective to maximize the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, a novel non-linear dynamical system (NDS) approach is proposed to solve the optimization problem. The new formulation and performance of the NDS solver is benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on large-scale systems using data from the Western Electricity Coordinating Council (WECC). The NDS is demonstrated to outperform the widely used CPLEX algorithms while exhibiting superior scalability. Furthermore, the NDS based solver can be easily parallelized which results in significant computational improvement.

  3. Color-suppression of non-planar diagrams in bosonic bound states

    NASA Astrophysics Data System (ADS)

    Alvarenga Nogueira, J. H.; Ji, Chueng-Ryong; Ydrefors, E.; Frederico, T.

    2018-02-01

    We study the suppression of non-planar diagrams in a scalar QCD model of a meson system in 3 + 1 space-time dimensions due to the inclusion of the color degrees of freedom. As a prototype of the color-singlet meson, we consider a flavor-nonsinglet system consisting of a scalar-quark and a scalar-antiquark with equal masses exchanging a scalar-gluon of a different mass, which is investigated within the framework of the homogeneous Bethe-Salpeter equation. The equation is solved by using the Nakanishi representation for the manifestly covariant bound-state amplitude and its light-front projection. The resulting non-singular integral equation is solved numerically. The damping of the impact of the cross-ladder kernel on the binding energies are studied in detail. The color-suppression of the cross-ladder effects on the light-front wave function and the elastic electromagnetic form factor are also discussed. As our results show, the suppression appears significantly large for Nc = 3, which supports the use of rainbow-ladder truncations in practical non-perturbative calculations within QCD.

  4. Integrated optimization of location assignment and sequencing in multi-shuttle automated storage and retrieval systems under modified 2n-command cycle pattern

    NASA Astrophysics Data System (ADS)

    Yang, Peng; Peng, Yongfei; Ye, Bin; Miao, Lixin

    2017-09-01

    This article explores the integrated optimization problem of location assignment and sequencing in multi-shuttle automated storage/retrieval systems under the modified 2n-command cycle pattern. The decisions of storage and retrieval (S/R) location assignment and S/R request sequencing are jointly considered. An integer quadratic programming model is formulated to describe this integrated optimization problem. The optimal travel cycles for multi-shuttle S/R machines can be obtained to process S/R requests in the storage and retrieval order lists by solving the model. Small instances are solved optimally using CPLEX. For large problems, two tabu search algorithms are proposed, in which first-come-first-served and nearest-neighbour rules are used to generate initial solutions. Various numerical experiments are conducted to examine the heuristics' performance and the sensitivity of the algorithm parameters. Furthermore, the experimental results are analysed from the viewpoint of practical application, and a parameter list for applying the proposed heuristics is recommended for different real-life scenarios.

  5. Block-accelerated aggregation multigrid for Markov chains with application to PageRank problems

    NASA Astrophysics Data System (ADS)

    Shen, Zhao-Li; Huang, Ting-Zhu; Carpentieri, Bruno; Wen, Chun; Gu, Xian-Ming

    2018-06-01

    Recently, the adaptive algebraic aggregation multigrid method has been proposed for computing stationary distributions of Markov chains. This method updates aggregates on every iterative cycle to keep the coarse-level corrections highly accurate. Its fast convergence rate is thus well guaranteed, but a large proportion of the time is often consumed by the aggregation processes. In this paper, we show that the aggregates on each level in this method can be utilized to transform the probability equation of that level into a block linear system. We then propose a Block-Jacobi relaxation that deals with the block system on each level to smooth the error. Some theoretical analysis of this technique is presented, and it is also adapted to solve PageRank problems. The purpose of this technique is to accelerate the adaptive aggregation multigrid method and its variants for solving Markov chains and PageRank problems. It also attempts to shed some light on making aggregation processes more cost-effective for aggregation multigrid methods. Numerical experiments are presented to illustrate the effectiveness of the technique.
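
    The underlying problem in its simplest form is a stationary distribution computed by power iteration on the Google matrix; aggregation multigrid accelerates exactly this kind of iteration. A toy PageRank on a hypothetical four-page link graph:

        import numpy as np

        # Column-stochastic P: P[i, j] = probability of moving from page j to i.
        P = np.array([[0.0, 0.5, 0.5, 0.0],
                      [1/3, 0.0, 0.0, 0.5],
                      [1/3, 0.0, 0.0, 0.5],
                      [1/3, 0.5, 0.5, 0.0]])
        alpha, n = 0.85, 4
        x = np.full(n, 1.0 / n)
        for _ in range(200):
            x_new = alpha * P @ x + (1 - alpha) / n   # Google matrix applied to x
            if np.abs(x_new - x).sum() < 1e-12:
                break
            x = x_new
        print(x / x.sum())        # stationary distribution = PageRank vector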

  6. Problems of Complex Systems: A Model of System Problem Solving Applied to Schools.

    ERIC Educational Resources Information Center

    Cooke, Robert A.; Rousseau, Denise M.

    Research of 25 Michigan elementary and secondary public schools is used to test a model relating organizations' problem-solving adequacy to their available inputs or resources and to the appropriateness of their structures. Problems that all organizations must solve, to avoid disorganization or entropy, include (1) getting inputs and producing…

  7. Comparing genetic algorithm and particle swarm optimization for solving capacitated vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Iswari, T.; Asih, A. M. S.

    2018-04-01

    In a logistics system, transportation plays an important role in connecting every element in the supply chain, but it can also produce the greatest cost. Therefore, it is important to make transportation costs as low as possible. One way to minimize transportation cost is to optimize the routing of the vehicles, which is the Vehicle Routing Problem (VRP). The most common type of VRP is the Capacitated Vehicle Routing Problem (CVRP), in which each vehicle has its own capacity and the total demand served must not exceed the capacity of the vehicle. CVRP belongs to the class of NP-hard problems, so exact algorithms become highly time-consuming as problem sizes increase; for large-scale instances, as typically found in industrial applications, finding an optimal solution is not practicable. Therefore, this paper uses two metaheuristic approaches to solve CVRP: the Genetic Algorithm and Particle Swarm Optimization. The paper compares the results of both algorithms and examines the performance of each. The results show that both algorithms perform well in solving CVRP but still leave room for improvement. From algorithm testing and a numerical example, the Genetic Algorithm yields a better solution than Particle Swarm Optimization in total distance travelled.
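
    A minimal GA skeleton for CVRP under a common encoding (a customer permutation split greedily into capacity-feasible trips) illustrates the ingredients such comparisons involve; all coordinates, demands and GA settings below are hypothetical:

        import math
        import random

        # Toy CVRP: depot 0, customers 1..6 with demands; vehicle capacity 10.
        pts = [(0, 0), (2, 3), (5, 1), (6, 4), (1, 6), (4, 6), (7, 2)]
        demand = [0, 4, 3, 5, 2, 6, 4]
        CAP = 10
        dist = lambda a, b: math.dist(pts[a], pts[b])

        def route_cost(perm):
            """Split a customer permutation into capacity-feasible trips greedily."""
            cost, load, prev = 0.0, 0, 0
            for c in perm:
                if load + demand[c] > CAP:        # return to depot, start new trip
                    cost += dist(prev, 0); prev, load = 0, 0
                cost += dist(prev, c); prev = c; load += demand[c]
            return cost + dist(prev, 0)

        def ga(pop_size=40, gens=300, pmut=0.2):
            rng = random.Random(0)
            customers = list(range(1, len(pts)))
            pop = [rng.sample(customers, len(customers)) for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=route_cost)
                next_pop = pop[:10]                  # elitism
                while len(next_pop) < pop_size:
                    a, b = rng.sample(pop[:20], 2)   # parents from the fitter half
                    cut = rng.randrange(1, len(customers))
                    child = a[:cut] + [c for c in b if c not in a[:cut]]  # order crossover
                    if rng.random() < pmut:          # swap mutation
                        i, j = rng.sample(range(len(child)), 2)
                        child[i], child[j] = child[j], child[i]
                    next_pop.append(child)
                pop = next_pop
            return min(pop, key=route_cost)

        best = ga()
        print(best, route_cost(best))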

  8. Combining constraint satisfaction and local improvement algorithms to construct anaesthetists' rotas

    NASA Technical Reports Server (NTRS)

    Smith, Barbara M.; Bennett, Sean

    1992-01-01

    A system is described which was built to compile weekly rotas for the anaesthetists in a large hospital. The rota compilation problem is an optimization problem (the number of tasks which cannot be assigned to an anaesthetist must be minimized) and was formulated as a constraint satisfaction problem (CSP). The forward checking algorithm is used to find a feasible rota, but because of the size of the problem, it cannot find an optimal (or even a good enough) solution in an acceptable time. Instead, an algorithm was devised which makes local improvements to a feasible solution. The algorithm makes use of the constraints as expressed in the CSP to ensure that feasibility is maintained, and produces very good rotas which are being used by the hospital involved in the project. It is argued that formulation as a constraint satisfaction problem may be a good approach to solving discrete optimization problems, even if the resulting CSP is too large to be solved exactly in an acceptable time. A CSP algorithm may be able to produce a feasible solution which can then be improved, giving a good, if not provably optimal, solution.

  9. Performance of parallel computation using CUDA for solving the one-dimensional elasticity equations

    NASA Astrophysics Data System (ADS)

    Darmawan, J. B. B.; Mungkasi, S.

    2017-01-01

    In this paper, we investigate the performance of parallel computation in solving the one-dimensional elasticity equations. Elasticity equations are widely used in engineering science, and solving them quickly and efficiently is desirable. Therefore, we propose the use of parallel computation. Our parallel computation uses NVIDIA's CUDA. Our research results show that parallel computation using CUDA has a great advantage and is powerful when the computation is large scale.

  10. Vectorization of transport and diffusion computations on the CDC Cyber 205

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Shumays, I.K.

    1986-01-01

    The development and testing of alternative numerical methods and computational algorithms specifically designed for the vectorization of transport and diffusion computations on a Control Data Corporation (CDC) Cyber 205 vector computer are described. Two solution methods for the discrete ordinates approximation to the transport equation are summarized and compared. Factors of 4 to 7 reduction in run times for certain large transport problems were achieved on a Cyber 205 as compared with run times on a CDC-7600. The solution of tridiagonal systems of linear equations, central to several efficient numerical methods for multidimensional diffusion computations and essential for fluid flow and other physics and engineering problems, is also dealt with. Among the methods tested, a combined odd-even cyclic reduction and modified Cholesky factorization algorithm for solving linear symmetric positive definite tridiagonal systems is found to be the most effective for these systems on a Cyber 205. For large tridiagonal systems, computation with this algorithm is an order of magnitude faster on a Cyber 205 than computation with the best algorithm for tridiagonal systems on a CDC-7600.
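
    For reference, the sequential baseline that cyclic reduction reorganizes is the Thomas algorithm; its forward and backward recurrences are inherently serial, which is why odd-even cyclic reduction, with its log2(n) vectorizable sweeps, wins on machines like the Cyber 205. A plain Thomas solver (a generic sketch, not the paper's vectorized code):

        import numpy as np

        def thomas(a, b, c, d):
            """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal.
            The serial i -> i+1 dependence below is what cyclic reduction removes."""
            n = len(b)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                       # forward elimination
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):              # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        n = 8
        a = np.full(n, -1.0); a[0] = 0.0
        b = np.full(n, 2.0)
        c = np.full(n, -1.0); c[-1] = 0.0
        x = thomas(a, b, c, np.ones(n))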

  11. Metallization failures

    NASA Technical Reports Server (NTRS)

    Beatty, R.

    1971-01-01

    Metallization-related failure mechanisms were shown to be a major cause of integrated circuit failures under accelerated stress conditions, as well as in actual use under field operation. The integrated circuit industry is aware of the problem and is attempting to solve it in one of two ways: (1) better understanding of the aluminum system, which is the most widely used metallization material for silicon integrated circuits both as a single level and multilevel metallization, or (2) evaluating alternative metal systems. Aluminum metallization offers many advantages, but also has limitations particularly at elevated temperatures and high current densities. As an alternative, multilayer systems of the general form, silicon device-metal-inorganic insulator-metal, are being considered to produce large scale integrated arrays. The merits and restrictions of metallization systems in current usage and systems under development are defined.

  12. Fixed Point Learning Based Intelligent Traffic Control System

    NASA Astrophysics Data System (ADS)

    Zongyao, Wang; Cong, Sui; Cheng, Shao

    2017-10-01

    Fixed point learning has become an important tool for analysing large scale distributed systems such as urban traffic networks. This paper presents a fixed point learning based intelligent traffic network control system. The system applies the convergence property of the fixed point theorem to optimize traffic flow density, achieving maximum road resource usage by averaging traffic flow density across the traffic network. The system is built on a decentralized structure and intelligent cooperation; no central control is needed to manage it. The proposed system is simple, effective and feasible for practical use. Its performance is tested via theoretical proof and simulations. The results demonstrate that the system can effectively solve the traffic congestion problem and increase the vehicles' average speed. They also show that the system is flexible, reliable and feasible for practical use.

  13. Derivative free Davidon-Fletcher-Powell (DFP) for solving symmetric systems of nonlinear equations

    NASA Astrophysics Data System (ADS)

    Mamat, M.; Dauda, M. K.; Mohamed, M. A. bin; Waziri, M. Y.; Mohamad, F. S.; Abdullah, H.

    2018-03-01

    Problems arising from the work of engineers, economists, modellers, industry and scientific computing are mostly nonlinear in nature, and numerical solutions to such systems are widely applied in those areas of mathematics. Over the years there has been significant theoretical study to develop methods for solving such systems; despite these efforts, the methods developed have deficiencies. As a contribution to solving systems of the form F(x) = 0, x ∈ R^n, a derivative-free method based on the classical Davidon-Fletcher-Powell (DFP) update is presented. This is achieved by simply approximating the inverse Hessian matrix Q_{k+1}^{-1} by θ_k I. The modified method satisfies the descent condition and possesses local superlinear convergence properties. Interestingly, without computing any derivative, the proposed method never failed to converge throughout the numerical experiments. The evaluation is based on number of iterations and CPU time, with different initial starting points used on 40 benchmark test problems. With the aid of the squared-norm merit function and a derivative-free line search technique, the approach yields a method for solving symmetric systems of nonlinear equations that significantly reduces CPU time and number of iterations compared to its counterparts. A comparison between the proposed method and the classical DFP update shows that the proposed method is the top performer, outperforming the existing method in almost all cases. In terms of number of iterations, out of the 40 problems, the proposed method solved 38 successfully (95%) while classical DFP solved 2 (5%). In terms of CPU time, the proposed method solved 29 of the 40 problems (72.5%) successfully, whereas classical DFP solved 11 (27.5%). The method is valid in terms of derivation, reliable in terms of number of iterations and accurate in terms of CPU time; it is thus suitable and achieves the objective.
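
    A minimal sketch of the θ_k I idea (our reconstruction from the abstract, not the authors' code): the step is x+ = x - θ F(x), with θ adapted from successive iterates in Barzilai-Borwein fashion and a derivative-free, norm-reducing backtracking line search:

        import numpy as np

        def df_solve(F, x, theta=1.0, tol=1e-10, max_iter=500):
            fx = F(x)
            for _ in range(max_iter):
                if np.linalg.norm(fx) < tol:
                    break
                step = theta
                # Derivative-free line search: shrink until the residual norm drops.
                while np.linalg.norm(F(x - step * fx)) > (1 - 1e-4 * step) * np.linalg.norm(fx):
                    step *= 0.5
                    if step < 1e-12:
                        break
                x_new = x - step * fx
                f_new = F(x_new)
                s, y = x_new - x, f_new - fx
                if abs(s @ y) > 1e-16:
                    theta = abs((s @ s) / (s @ y))   # scalar "inverse Hessian"
                x, fx = x_new, f_new
            return x

        # Toy symmetric nonlinear system (diagonal Jacobian): root at the origin.
        F = lambda x: x + np.sin(x)
        print(df_solve(F, np.ones(3)))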

  14. Inductive reasoning and implicit memory: evidence from intact and impaired memory systems.

    PubMed

    Girelli, Luisa; Semenza, Carlo; Delazer, Margarete

    2004-01-01

    In this study, we modified a classic problem solving task, number series completion, in order to explore the contribution of implicit memory to inductive reasoning. Participants were required to complete number series sharing the same underlying algorithm (e.g., +2) but differing in both constituent elements (e.g., 2 4 6 8 versus 5 7 9 11) and correct answers (e.g., 10 versus 13). In Experiment 1, reliable priming effects emerged whether primes and targets were separated by four or ten fillers. Experiment 2 provided direct evidence that the observed facilitation arises at central stages of problem solving, namely the identification of the algorithm and its subsequent extrapolation. The observation of analogous priming effects in a severely amnesic patient strongly supports the hypothesis that the facilitation in number series completion was largely determined by implicit memory processes. These findings demonstrate that the influence of implicit processes extends to higher-level cognitive domains such as inductive reasoning.

  15. Numerical optimization in Hilbert space using inexact function and gradient evaluations

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    Trust region algorithms provide a robust iterative technique for solving non-convex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite dimensional problems normally seen in the trust region literature. The conditions concerning allowable error are remarkably relaxed: for example, the relative-error condition on the gradient is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.

  16. Fast Legendre moment computation for template matching

    NASA Astrophysics Data System (ADS)

    Li, Bing C.

    2017-05-01

    Normalized cross correlation (NCC) based template matching is insensitive to intensity changes and has many applications in image processing, object detection, video tracking and pattern recognition. However, normalized cross correlation is computationally expensive since it involves both correlation computation and normalization. In this paper, we propose a Legendre moment approach for fast normalized cross correlation and show that the computational cost of this approach is independent of template mask size, making it significantly faster than traditional mask-size-dependent approaches, especially for large template masks. Legendre polynomials have been widely used in solving Laplace's equation in electrodynamics in spherical coordinates and in solving the Schrödinger equation in quantum mechanics. In this paper, we extend Legendre polynomials from physics to the computer vision and pattern recognition fields, and demonstrate that they can significantly reduce the computational cost of NCC based template matching.
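
    For contrast, the baseline such methods accelerate is brute-force NCC, whose per-window mean and normalization sums make the cost grow with mask size (the moment-based formulation removes that dependence). A direct implementation:

        import numpy as np

        def ncc_match(image, template):
            """Brute-force normalized cross correlation over every valid offset."""
            th, tw = template.shape
            t = template - template.mean()
            t_norm = np.sqrt((t ** 2).sum())
            H, W = image.shape
            out = np.zeros((H - th + 1, W - tw + 1))
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    w = image[i:i + th, j:j + tw]
                    wz = w - w.mean()                      # per-window sums: the
                    denom = np.sqrt((wz ** 2).sum()) * t_norm  # mask-size-dependent cost
                    out[i, j] = (wz * t).sum() / denom if denom > 0 else 0.0
            return out

        rng = np.random.default_rng(0)
        img = rng.random((64, 64))
        tmpl = img[20:28, 30:38].copy()
        score = ncc_match(img, tmpl)
        print(np.unravel_index(score.argmax(), score.shape))   # ~ (20, 30)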

  17. Preconditioned alternating direction method of multipliers for inverse problems with constraints

    NASA Astrophysics Data System (ADS)

    Jiao, Yuling; Jin, Qinian; Lu, Xiliang; Wang, Weijie

    2017-02-01

    We propose a preconditioned alternating direction method of multipliers (ADMM) to solve linear inverse problems in Hilbert spaces with constraints, where the feature of the sought solution under a linear transformation is captured by a possibly non-smooth convex function. At each iteration step, our method avoids solving large linear systems by choosing a suitable preconditioning operator. When the data is given exactly, we prove the convergence of our preconditioned ADMM without assuming the existence of a Lagrange multiplier. When the data is corrupted by noise, we propose a stopping rule using information on the noise level and show that our preconditioned ADMM is a regularization method; we also propose a heuristic rule for when information on the noise level is unavailable or unreliable, and give a detailed analysis of it. Numerical examples are presented to test the performance of the proposed method.
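
    A rough sketch of the central trick, under assumptions: in a generic linearized (preconditioned) ADMM for the finite-dimensional problem min 0.5||Ax-b||^2 + lam*||Lx||_1, the x-update replaces the exact solve of (A^T A + rho*L^T L)x = rhs with a single scaled gradient step, so no large linear system is ever formed. This illustrates the avoided solve only; it is not the authors' Hilbert-space method, preconditioner choice, or stopping rules.

    ```python
    import numpy as np

    def soft_threshold(v, kappa):
        """Proximal operator of kappa * ||.||_1 (elementwise shrinkage)."""
        return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

    def preconditioned_admm(A, b, L, lam=0.1, rho=1.0, tau=None, n_iter=200):
        """Linearized ADMM sketch: the x-update is one gradient step scaled
        by 1/tau instead of an exact solve of (A^T A + rho L^T L) x = rhs."""
        x = np.zeros(A.shape[1])
        z = np.zeros(L.shape[0])
        u = np.zeros(L.shape[0])
        if tau is None:  # crude Lipschitz bound keeps the step stable
            tau = np.linalg.norm(A, 2) ** 2 + rho * np.linalg.norm(L, 2) ** 2
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b) + rho * L.T @ (L @ x - z + u)
            x = x - grad / tau                     # inexact x-update: no linear solve
            z = soft_threshold(L @ x + u, lam / rho)
            u = u + L @ x - z
        return x
    ```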

  18. Dynamics and Stability of Rolling Viscoelastic Tires

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potter, Trevor

    2013-04-30

    Current steady-state rolling tire calculations often do not include treads because treads destroy the rotational symmetry of the tire. We describe two methodologies to compute time-periodic solutions of a two-dimensional viscoelastic tire with treads: solving a minimization problem and solving a system of equations. We also expand on work by Oden and Lin on free-spinning rolling elastic tires, in which they discovered a hierarchy of N-peak steady-state standing wave solutions. In addition to discovering a two-dimensional hierarchy of standing wave solutions that includes their N-peak hierarchy, we consider the effects of viscoelasticity on the standing wave solutions. Finally, a commonplace model of viscoelasticity used in our numerical experiments led to non-physical elastic energy growth for large tire speeds. We show that a viscoelastic model of Govindjee and Reese remedies the problem.

  19. Static Analysis of Large-Scale Multibody System Using Joint Coordinates and Spatial Algebra Operator

    PubMed Central

    Omar, Mohamed A.

    2014-01-01

    Initial transient oscillations exhibited in the dynamic simulation responses of multibody systems can lead to inaccurate results, unrealistic load prediction, or simulation failure. These transients can result from incompatible initial conditions, initial constraint violations, and inadequate kinematic assembly. Performing a static equilibrium analysis before the dynamic simulation can eliminate these transients and lead to a stable simulation. Most existing multibody formulations determine the static equilibrium position by minimizing the system potential energy. This paper presents a new general-purpose approach for solving the static equilibrium of large-scale articulated multibody systems. The proposed approach introduces an energy drainage mechanism based on the Baumgarte constraint stabilization approach to determine the static equilibrium position. The spatial algebra operator is used to express the kinematic and dynamic equations of the closed-loop multibody system. The proposed formulation uses the joint coordinates and modal elastic coordinates as the system generalized coordinates. The recursive nonlinear equations of motion are formulated using the Cartesian coordinates and the joint coordinates to form an augmented set of differential algebraic equations. The system connectivity matrix is then derived from the system topological relations and used to project the Cartesian quantities into the joint subspace, leading to a minimum set of differential equations. PMID:25045732
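
    A toy sketch of the energy-drainage idea only, not the paper's spatial-algebra or joint-coordinate formulation: integrate artificially damped dynamics until the kinetic energy decays, leaving the static equilibrium configuration. All names and parameters are hypothetical, and Baumgarte-style constraint stabilization terms are only indicated in a comment.

    ```python
    import numpy as np

    def settle_to_equilibrium(mass, force, q0, damping=5.0,
                              dt=1e-3, tol=1e-8, max_steps=200000):
        """Drain energy from the system by integrating the damped dynamics
        M*qdd = Q(q) - c*qd until velocities and accelerations vanish.
        In a full multibody formulation, Baumgarte constraint-stabilization
        terms would enter the generalized forces Q(q)."""
        q = np.array(q0, dtype=float)
        qd = np.zeros_like(q)
        for _ in range(max_steps):
            qdd = (force(q) - damping * qd) / mass   # semi-implicit Euler step
            qd += dt * qdd
            q += dt * qd
            if np.linalg.norm(qd) < tol and np.linalg.norm(qdd) < tol:
                break
        return q

    # Example: a mass on a spring under gravity settles at q = -m*g/k.
    m, k, g = 1.0, 10.0, 9.81
    q_eq = settle_to_equilibrium(m, lambda q: -k * q - m * g, q0=[0.0])
    ```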

